The HUBZone Act of 1997 (which established the HUBZone program) identified HUBZones as (1) qualified census tracts, which are determined by area poverty rate or household income; (2) qualified nonmetropolitan counties, which are determined by area unemployment rate or median household income; and (3) lands meeting certain criteria within the boundaries of an Indian reservation. Congress subsequently expanded the criteria for HUBZones to add former military bases and counties in difficult development areas outside the continental United States. To be certified to participate in the HUBZone program, a firm must meet the following criteria: when combined with its affiliates, be small by SBA size standards; be at least 51 percent owned and controlled by U.S. citizens; have its principal office—the location where the greatest number of employees perform their work—in a HUBZone; and have at least 35 percent of its employees reside in a HUBZone. SBA recertifies firms (that is, determines that firms continue to meet HUBZone eligibility requirements to participate in the program) every 3 years.

As of August 2016, SBA had taken some actions to address but had not yet fully implemented our recommendation on better informing firms about programmatic changes that could affect their eligibility. In our February 2015 report, we described how HUBZone designations can change with some frequency. SBA generally updates HUBZone designations at least twice a year based on whether areas meet statutory criteria (such as having certain income levels or poverty or unemployment rates). SBA generally uses data from other federal agencies to determine if areas still qualify for the HUBZone program. As a result of the updates, additional areas are designated for inclusion while other areas lose their designation. Areas that lose their designation begin a 3-year “redesignation” period during which firms in those areas can continue to apply to and participate in the program and receive contracting preferences. After the 3 years, firms in these areas lose their HUBZone certified firm status and the associated federal contracting award preferences. In 2015, we reported that 17 percent (871) of firms certified at the time were located in a redesignated area.

However, we found that SBA’s communications to firms about programmatic changes (including redesignation) generally had not been targeted or specific to firms that would be affected by the changes. In 2015, we found that SBA used a broadcast e-mail (which simultaneously sends the same message to multiple recipients) to distribute program information. According to SBA officials, the e-mail list initially included all certified firms, but firms certified after the list was created in 2013 had not been automatically added to it as of our 2015 report. Firms instead had to sign up through SBA’s website to receive the e-mails; as a result, some certified firms may not have received them. Consequently, we recommended that SBA establish a mechanism to better ensure that firms are notified of changes to HUBZone designations that may affect their participation in the program. This recommendation was intended to address communications to all certified firms, whether newly certified or in the program for years. In response to the recommendation, SBA has improved notifications to newly certified firms. As we reported in March 2016, SBA revised its certification letters to firms.
If SBA identifies during an application review that a firm’s principal office is in a redesignated area, it indicates in the certification letter that the firm is in a redesignated area, explains the implications of the designation, and notes when the redesignated status will expire. However, we found in March 2016 that SBA had not yet implemented changes to ensure that all currently certified firms are notified of changes that could affect their program eligibility. It is important that all certified firms potentially affected by such changes receive information about the changes or are made aware in a timely fashion of any effects on their program eligibility.

As of August 2016, SBA had plans to improve its notifications to all firms. SBA recently hired an employee whose responsibilities include helping update SBA’s e-mail distribution list. As part of this effort, according to SBA officials, SBA plans to collect all the e-mail addresses for certified firms from its Dynamic Small Business Search database to create a new distribution list. SBA plans to begin adding newly certified firms to the list quarterly. Additionally, SBA officials told us that the agency intends to develop a technology solution similar to SBA One—a database now used to process loan applications—and extend it to the HUBZone program to help collect information and documents from existing firms and address this recommendation. SBA expects to implement this solution by spring 2017.

We found in February 2015 that SBA had addressed weaknesses in its certification process that we previously identified. However, as of August 2016, SBA had not yet taken steps to fully address our recommendation related to the HUBZone firm recertification process. In February 2015, we reported that SBA had changed its certification process to require all applicant firms to provide documentation supporting their eligibility and to require agency staff to perform a full document review to determine firms’ eligibility for the program. Additionally, SBA had conducted site visits to 10 percent of its portfolio of certified firms every year in response to a June 2008 GAO recommendation. However, we also found deficiencies relating to the recertification process. First, in 2008 and again in 2015, we found that the recertification process had become backlogged—that is, firms were not being recertified within the 3-year time frame. As of September 2014, SBA was recertifying firms that had been first certified 4 years earlier. While SBA initially eliminated the backlog following our 2008 report, according to SBA officials the backlog recurred due to limitations with the program’s computer system and resource constraints. Second, in 2015 we found that SBA relied on firms’ attestations of continued eligibility and generally did not request supporting documentation. SBA required firms only to submit a notarized recertification form stating that their eligibility information was accurate. SBA officials did not believe they needed to request supporting documentation from recertifying firms because all firms in the program had undergone a full document review, either at initial application or during SBA’s review of its legacy portfolio in fiscal years 2010–2012. As a result, we concluded in 2015 that SBA lacked reasonable assurance that only qualified firms were allowed to continue in the HUBZone program and receive preferential contracting treatment.
Consequently, we recommended that SBA reassess the recertification process and implement additional controls, such as developing criteria and guidance on using a risk-based approach to requesting and verifying firm information, allowing firms to initiate the recertification process, and ensuring that sufficient staff would be dedicated to the effort so that significant backlogs would not recur.

In response to the recommendation, SBA made some changes to its recertification process. For example, instead of manually identifying firms for recertification twice a year, SBA automated the notification process, enabling notices to be sent daily for firms to respond to and attest that they continued to meet the eligibility requirements for the program. According to SBA officials, this change should ultimately help eliminate the backlog by September 30, 2016. However, as we discussed in our March 2016 report, SBA had not implemented additional controls (such as guidance for when to request supporting documents) for the recertification process because SBA officials believe that any potential risk of fraud would be mitigated by site visits to firms. The officials also cited resource limitations. Based on data that SBA provided, the agency visited about 10 percent of certified firms each year during fiscal years 2013–2015. SBA’s reliance on site visits alone would not mitigate the recertification weaknesses that were the basis for our recommendation. In recognition of SBA’s resource constraints, we said in our 2015 report and reiterated in 2016 that SBA could apply a risk-based approach to its recertification process to review and verify information from firms that appear to pose the most risk to the program. A lack of risk-based criteria and guidance for staff to request and verify firm information during the recertification process increases the risk that ineligible firms obtain HUBZone contracts. And as we stated in 2015 and reiterated in 2016, the characteristics of firms and the status of HUBZone areas—the bases for program eligibility—often can change and need to be monitored. SBA officials told us that the agency intends to implement a technology-based solution similar to SBA One to address some of the ongoing challenges with the recertification process by spring 2017. The officials expect that the new solution will help them better assess firms and implement risk-based controls.

As we reported in February 2015, potential changes to HUBZone designation criteria could be designed to provide additional economic benefits to some communities. However, changes that benefit some communities also could, through competitive market processes, reduce activity by HUBZone firms in existing HUBZones. Likewise, if the potential changes significantly increased the number of HUBZones, new areas could realize economic benefits. However, such changes also could result in diffusion—decreased targeting of areas of greatest economic distress—by lessening the competitive advantage on which small businesses may rely to thrive in economically distressed communities.

An analysis we performed for our February 2015 report offers examples of the scope of the differences in economic conditions among HUBZone areas (qualified areas), redesignated areas, and non-HUBZone areas (nonqualified tracts or areas). We analyzed the economic conditions of such areas as of 2012 and found that indicators for redesignated areas on average fell between those of qualified and nonqualified areas.
For example, as shown in figure 1, qualified census tracts had poverty and unemployment rates of 32 percent and 14 percent, respectively; redesignated tracts had rates of 24 percent and 12 percent, respectively; and nonqualified tracts had rates of 11 percent and 8 percent, respectively. A similar pattern existed for nonmetropolitan counties. Therefore, while allowing redesignated areas with certified firms to remain eligible can generate economic benefits for such areas, such inclusion could limit the benefits realized by qualified areas with more depressed economic conditions.

In our 2015 report, we explored the potential impact of altering some of the criteria used to designate HUBZones. We examined changes to thresholds for unemployment rate and for the number of census tracts that could qualify for the program in a given metropolitan area. For example, one way a nonmetropolitan county can qualify as a HUBZone is based on its unemployment rate. More specifically, the unemployment rate must be 140 percent or more of the average unemployment rate for the United States or for the state in which the county is located, whichever is less (a simple illustration of this test appears at the end of this statement). Under the current definition, two counties in different states with the same unemployment rate would not necessarily both qualify as HUBZones, depending on the unemployment rate of the state in which they are located. In general, every county in a state with an unemployment rate less than the U.S. average would qualify as a HUBZone if its unemployment rate was at least 140 percent of the state’s (even if it was less than the U.S. average). In contrast, counties in states with unemployment rates higher than the U.S. average must have an unemployment rate at least equal to 140 percent of the U.S. average to qualify as a HUBZone. Our application of hypothetical changes to the unemployment rate generally resulted in approximately the same number of areas qualifying compared to the current definition, with two exceptions: applying the lowest unemployment rate to all states resulted in approximately four times as many counties qualifying, while applying the highest unemployment rate resulted in approximately one-eighth as many counties qualifying (see table 1).

Similarly, we analyzed the potential impact of removing the limit on the number of areas that could qualify as HUBZones pursuant to the definition of “qualified census tract” that was in effect at the time we issued our February 2015 report. We found that about 2,400 more census tracts would qualify as HUBZones if the 20 percent cap were not in place, an increase of 15 percent from the number of qualified tracts as of June 2014.

Chairman Chabot and Ranking Member Velázquez, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have.

If you or your staff have any questions about this testimony, please contact William B. Shear, Director, Financial Markets and Community Investment, at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Harry Medina (Assistant Director), Daniel Newman (Analyst-in-Charge), Pamela Davidson, John McGrail, and Barbara Roesmann.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
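The nonmetropolitan county unemployment test described in this statement reduces to a simple comparison against 140 percent of the lesser of the state and U.S. average unemployment rates. The following is a minimal sketch of that test; the function name and the example rates are illustrative assumptions, not SBA's actual implementation or data:

```python
def county_qualifies(county_rate: float, state_avg: float, us_avg: float) -> bool:
    """Return True if a nonmetropolitan county meets the HUBZone unemployment
    test: its rate is at least 140 percent of the lesser of the state and
    U.S. average unemployment rates."""
    return county_rate >= 1.40 * min(state_avg, us_avg)

# Illustrative rates only: two counties with the same 7.0 percent rate.
# In a 4.0 percent state (U.S. average 5.5 percent), the threshold is
# 1.4 * 4.0 = 5.6, so the county qualifies; in an 8.0 percent state,
# the threshold is 1.4 * 5.5 = 7.7, so it does not.
print(county_qualifies(7.0, state_avg=4.0, us_avg=5.5))  # True
print(county_qualifies(7.0, state_avg=8.0, us_avg=5.5))  # False
```

The example mirrors the asymmetry noted above: counties in states with below-average unemployment face the lower, state-based threshold, while counties in states with above-average unemployment face the U.S.-based threshold.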
The purpose of the HUBZone program is to stimulate economic development in economically distressed areas. SBA-certified HUBZone firms are eligible for federal contracting benefits, including limited competition awards such as sole-source and set-aside contracts. Small firms in SBA's HUBZone program had almost $6.6 billion in obligations on active federal contracts for calendar year 2015. This testimony includes a discussion of (1) how SBA communicates changes in HUBZone designations to firms, including how SBA addressed GAO's 2015 recommendation to improve this process, and (2) SBA's certification and recertification processes for firms, including how SBA addressed GAO's 2015 recommendation to improve recertification. GAO relied on the work supporting its February 2015 report on SBA's oversight of the program (GAO-15-234) and its March 2016 report on actions taken in response to GAO recommendations (GAO-16-4232R), as well as July and August 2016 interviews with SBA officials on efforts the agency had undertaken to implement GAO's recommendations.

As of August 2016, the Small Business Administration (SBA) had taken steps to better inform firms about changes in the designations of Historically Underutilized Business Zones (HUBZones) but had not yet fully implemented GAO's February 2015 recommendation to improve this process. SBA primarily designates economically distressed areas as HUBZones, based on demographic data such as unemployment and poverty rates. The designations include certain census tracts and counties and are subject to periodic changes as economic conditions change. HUBZones that lose qualifying status due to changes in economic conditions become “redesignated” and undergo a 3-year transition period. After the 3-year period, firms in these areas lose their HUBZone certification and the program's contracting preferences. GAO found in February 2015 that SBA's communications to firms about programmatic changes (including redesignation) generally were not specific to affected firms and thus some firms might not have been informed they would lose eligibility. GAO recommended SBA better ensure firms were notified of changes that might affect program participation. In response, SBA revised its approval letters to newly certified firms to include information about the consequences of redesignation (if applicable). But as of August 2016, SBA had not yet implemented changes to help ensure all currently certified firms would be notified of changes that could affect their program eligibility. SBA officials recently told GAO the agency intended to develop a technology solution by spring 2017 to help address GAO's recommendations.

While SBA made changes to its certification and recertification processes, SBA had not fully addressed GAO's recommendation on recertification of firms. To receive initial certification, SBA requires all firms to provide documentation to show they meet the eligibility requirements. SBA also conducts site visits at selected firms (for example, based on the amount of federal contracts received). According to HUBZone regulations, firms wishing to remain in the program without any interruption must recertify their continued eligibility to SBA within 30 days after the third anniversary of their certification date and each subsequent 3-year period.
But in 2015, GAO found SBA did not require firms seeking recertification to submit any information to verify continued eligibility and instead relied on firms' attestations of continued eligibility. GAO also found SBA had a backlog for recertifying firms. GAO recommended in February 2015 that SBA implement additional controls for recertification, including criteria for requesting and verifying firm information, and ensuring sufficient staffing for the process so that significant backlogs would not recur. As of August 2016, SBA had plans to eliminate the backlog, but had not issued guidance on requesting supporting documents. SBA officials stated that any potential risk of fraud during recertification would be mitigated by SBA's site visits to firms. But as GAO stated in 2015 and reiterated in 2016, SBA conducts site visits to only about 10 percent of certified firms each year, and the characteristics of firms often can change; relying on site visits alone is therefore not adequate to mitigate this risk.
An assessment is a formal bookkeeping entry in which IRS records the amount of tax, penalty, or interest charged to a taxpayer’s account each tax year. An assessment establishes the taxpayer’s liability and IRS’ right to collect. Taxpayers essentially assess themselves when they report these taxes on their tax returns. IRS may add to or subtract from tax amounts reported when its returns processing or enforcement programs identify errors. Taxpayers also may file an amended return or otherwise notify IRS of errors, which can change the amount assessed.

An abatement is a formal bookkeeping entry to record a reduction of tax, penalty, or interest assessments on a taxpayer’s account. Abatements reduce the amounts that taxpayers owe and that IRS has a right to collect. Section 6404 of the Internal Revenue Code authorizes IRS to abate an assessment under certain conditions. For example, IRS can abate an assessment because of errors made. A taxpayer can make an error on the original tax return, such as not claiming a deduction. Or, IRS may assess incorrect tax amounts when auditing a return or matching income reported by taxpayers with income reported by third parties (such as employers) on payments made to the taxpayers.

Both taxpayers and IRS can initiate abatements. Taxpayers can request an abatement by filing an amended tax return (e.g., Form 1040X), by filing an IRS Form 843 (Claim for Refund and Request for Abatement), or by calling or writing to IRS. IRS can also initiate abatements. When, for example, an IRS auditor finds evidence that a taxpayer overstated the tax liability on a return, this evidence could lead to an abatement, depending on the results from the rest of the audit.

To fulfill our objectives, we reviewed a stratified, random sample of 486 individual taxpayer abatements made in fiscal year 1998 due to errors by taxpayers, IRS, or third parties. Our sample includes abatements in which taxpayers elected to change their filing status or basis for deductions when the original assessment also changed. If a taxpayer had multiple abatements, we studied only the abatement that was drawn into our sample. We obtained the final sample of 486 abatements by first drawing a stratified sample of 500 abatements from a population of 2,351,194 abatements for individual taxpayers who had at least one abatement in 1998 associated with an error in the assessment. After reviewing the 500 IRS case files, we removed 14 sampled abatements that did not contain these errors, such as abatements due to net operating loss carrybacks, debt discharges, and substitute returns. On the basis of our final sample, we estimated that the total population of 1998 abatements based on errors was about 2.3 million, with a value of about $3.6 billion.

Each abatement could be associated with more than one error. For each error, we collected information on which line item on the return was in error. For example, an error could involve lines for the primary taxpayer’s SSN, a tax exemption, or a tax deduction. To the extent possible, we also collected information on whether taxpayer, IRS, or third-party actions led to an error. Because so many of the errors related to exemptions, we collected more information, such as whose exemption (taxpayer, spouse, dependent) was in error and what information was missing or incorrect.
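As a minimal sketch of the estimation arithmetic behind these figures, the following treats the sample as a simple random draw from the frame of 2,351,194 abatements (the actual design was stratified, so the weights GAO applied would differ by stratum) and shows the normal-approximation confidence half-width that a proportion estimated from 486 cases carries:

```python
import math

FRAME = 2_351_194  # individual taxpayers with an error-associated abatement in 1998

def expansion_estimate(in_scope: int, sampled: int, frame: int = FRAME) -> float:
    """Scale the frame by the in-scope fraction of the sample."""
    return frame * in_scope / sampled

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 percent confidence half-width for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 486 of the 500 sampled case files were in scope, so the estimated
# population of error-based abatements is roughly 2.29 million, the
# "about 2.3 million" cited in the report.
print(f"{expansion_estimate(486, 500):,.0f}")  # 2,285,361

# A proportion near the 86 percent taxpayer-error estimate discussed
# below, measured from 486 cases, carries a half-width of roughly
# 3 percentage points, consistent with the sampling errors reported.
print(f"+/- {ci_half_width(0.86, 486):.1%}")  # +/- 3.1%
```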
To develop data about IRS’ costs to abate tax assessments made due to errors by taxpayers, IRS, or third parties, we used IRS’ abatement files to identify the type and frequency of IRS activities associated with the recording, collecting, and abating of these assessments. We then talked to IRS officials about the unit costs of these activities and the magnitude of the overall costs for all activities. These officials represented IRS’ Wage and Investment Division, IRS’ Chief of Operations, and IRS units that record, collect, or abate assessments, such as those that process or examine tax returns. We attempted to develop more reliable cost estimates, but IRS was unable to provide sufficient data in time for inclusion in this report.

To describe the types of taxpayer costs, we recorded the types of activities associated with each sampled abatement case. We discussed these activities with IRS officials to understand the potential impacts on costs. On the basis of this work, we summarized the range of activities and of time (in calendar days) that taxpayers faced to get the tax assessments abated. We did not have enough information to compute taxpayers’ costs.

After our analysis, we sought feedback from IRS officials at the National Office on our sample results and the costs. We sought IRS documentation and views on options for reducing the number of assessments that involved exemptions and that were abated due to errors. We also sought views on these options from officials of four professional associations—the American Institute of Certified Public Accountants, National Association of Enrolled Agents, National Association of Tax Practitioners, and National Society of Accountants.

Because we selected the probability sample following random selection procedures, each estimate is surrounded by a 95-percent confidence interval. For example, the estimate that 86 percent of the abatements are due to taxpayer errors is surrounded by a 95-percent confidence interval of +/- 3 percentage points. This shows that we are 95-percent confident that the percentage of taxpayer errors in the actual population is between 82 and 89 percent. All percentage estimates have sampling errors of +/- 6 percentage points or less, unless otherwise noted. Estimates on numbers of abatements have sampling errors of +/- 6 percent or less of their values unless otherwise noted.

We did work at IRS offices in Washington, D.C., and New Carrollton, MD, as well as IRS’ Kansas City Service Center because of our staff’s proximity. We did our work from November 1999 through December 2000 in accordance with generally accepted government auditing standards. We discussed our draft report with representatives of the IRS Commissioner on March 23, 2001. IRS officials agreed that our recommendations had merit and said they would review ways to implement them. Their written comments arrived too late to be reprinted in this report.

We traced most abated income tax assessments in fiscal year 1998 to individual taxpayer errors. These errors usually involved claims for tax exemptions for the taxpayers, spouses, or dependents. Taxpayers usually erred by not providing any information or by providing inaccurate information. With exemption claims, these errors usually involved SSNs for dependent exemptions. We estimated that 86 percent of the 2.3 million abated assessments arose from taxpayer errors, and 6 percent arose from IRS errors.
Sources for the remaining errors were third parties or could not be determined due to insufficient data in IRS’ abatement case files. We further analyzed how the errors were made. For taxpayer errors, about 74 percent of the abated tax assessments were associated with taxpayers not correctly reporting an item on the tax return; and about 22 percent were associated with taxpayers omitting the item from the return. For IRS errors associated with abated tax assessments, most of the errors occurred when the IRS unit that processes tax returns did not accept valid information from these returns. Third-party errors generally arose due to errors in the Social Security Administration database used by IRS to validate names and SSNs related to the exemption claims.

For fiscal year 1998, an estimated 50 percent of errors that led to the 2.3 million abated tax assessments involved exemptions claimed on income tax returns for taxpayers, spouses, or dependents. The remaining abated assessments involved errors with many other types of claims at much lower percentages. The next two most frequent errors—Schedule A deductions (e.g., real estate taxes) and the other income line on the tax return—each accounted for around 10 percent. Of the exemption errors, we estimated that about 96 percent involved dependent exemptions; and the rest involved exemptions for the spouse of the primary taxpayer filing the tax return. For dependent exemptions, about 32 percent of the errors were missing SSNs; and about 50 percent were incorrect SSNs. The remaining errors involved the names of the dependents or could not be determined due to insufficient data. These errors in names and SSNs occurred in many ways. For example, in one instance, an exemption for a dependent was disallowed because the taxpayer used the spouse’s SSN instead of the dependent’s. Other dependent exemption errors occurred because no SSN was reported, SSNs were transposed, or an SSN was reported as a progression of numbers (e.g., 123-45-6789). Examples also included a taxpayer who used the same SSN for two different dependents and a taxpayer who used the last four digits of an SSN for two dependents. Other errors related to the dependent’s last name not matching SSA records and a spousal exemption disallowed because the spouse’s last name did not match SSA data.

The exemption errors were not limited to a type of taxpayer. An estimated 256,000 or more taxpayers with incomes below $25,000 made such errors, as did about 107,000 taxpayers with incomes over $100,000. Most exemption errors came from individuals who had no business income, but about 47,000 taxpayers who had business income made exemption errors.

IRS did not track its costs to record, collect, and then abate the 2.3 million tax assessments that were made due to errors by taxpayers, IRS, or third parties. IRS agreed that these overall costs could be substantial, totaling at least tens of millions of dollars annually. IRS was unable to provide accurate cost data for developing more reliable estimates in time for this report. However, IRS agreed that having such accurate cost information is important and plans to develop it. We identified three broad activities—recording, collecting, and abating the tax assessments—associated with IRS’ costs. Table 1 shows the frequencies of these broad activities. IRS incurred additional costs to record an estimated 1.3 million of the 2.3 million tax assessments abated due to errors. In such cases, IRS did more work to make additional tax assessments.
IRS’ usual costs include those to record the tax assessment originally reported on the tax returns. The additional costs would have been avoided if the errors and additional assessments had not been made. The other 1 million tax assessments did not incur additional recording costs. Various IRS units can be involved in recording increases or decreases to the original tax assessment reported on the tax return. IRS’ processing units can increase the original assessment because of more obvious errors, such as invalid SSNs. IRS post-processing units also can make additional tax assessments that will be recorded. For example, when the taxpayer files an amended tax return or third-party reports indicate that a taxpayer did not report all income, IRS’ adjustment units can increase the assessment. IRS’ examination units can increase assessments during audits when taxpayers do not provide documents to support their tax return. The cost to record tax assessments is higher when a post-processing unit does the work to create an assessment. The greatest cost is associated with examination units because their auditors have higher pay grades, audit more complex cases, and need more time to work cases compared to nonaudit staff.

IRS attempted to collect an estimated 609,000 of the 2.3 million abated tax assessments. IRS could have avoided the associated collection costs if the tax assessments, which were abated because of errors, had not been made. IRS did not take collection action on the abated portion of the other 1.7 million tax assessments. The process for collecting any unpaid assessment has three steps. The process starts with a series of computer-generated notices demanding payment or information to otherwise resolve the unpaid assessment. If the unpaid assessment is not resolved, IRS might try to call the taxpayer through its Automated Collection System. If still unresolved, IRS might assign a field collector to visit the taxpayer. The costs of the collection activity increase substantially with each step. For example, each notice costs a fraction of what a field collector visit costs.

IRS also incurred additional costs to abate the 2.3 million tax assessments. Abatements involve three types of costs, as described below. First, IRS incurs costs to process abatement requests. These costs are relatively low compared to later steps. Abatement requests include formal claims (Form 1040X or Form 843) and informal requests when taxpayers call or write IRS requesting abatements. Compared with informal requests, formal claims are more costly because of the extra costs to record them on IRS’ masterfile of taxpayer accounts to show the pending formal request. Second, IRS incurs costs to make and record the abatement decision. Some abatement decisions are made in IRS examination units while the bulk are done in nonexamination units. The unit cost of abatements made in examination units is higher because, among other reasons, they use higher-graded (or paid) staff. Third, IRS incurs costs for some abatements when issuing a refund check that otherwise would have been unnecessary. This check is for tax amounts overpaid by individuals because of the assessment made in error.

Taxpayers also incurred costs when tax assessments were made and then abated. However, the amount is currently unknown. IRS’ abatement files lacked sufficient information on taxpayers’ activities, efforts to contact IRS, and time spent in order to estimate taxpayer costs.
Further, taxpayers usually do not record and maintain such information. As part of its multiyear effort to estimate taxpayer compliance burdens, IRS is designing a methodology intended to estimate taxpayer costs for abatements and other post-filing activities. Although we could not measure the costs to taxpayers, we were able to estimate the number of taxpayers involved in two actions that imposed some level of burden. For both types of actions, taxpayers incurred costs in time and money, especially if they used a paid preparer. Of the 2.3 million abated assessments, an estimated 609,000 involved IRS contacts with taxpayers to collect an assessment before it was abated, of which the vast majority were IRS notices; and an estimated 575,000 involved amended tax returns that taxpayers filed to correct errors and request the tax abatements. The amount of variation in taxpayer costs also cannot be quantified currently. Taxpayer costs can vary, depending on the actions and time required to correct the error and abate the tax assessment. Taxpayer actions could include finding the error, gathering documents to support the abatement, communicating with IRS, providing any requested documentation, and responding to IRS notices. Contacts with IRS could be as inexpensive as a toll-free telephone call or as costly as paying the fees of tax professionals to work with IRS.

In terms of calendar time required, most abatements we reviewed took 3 months or less to be approved, but some took much longer. At one extreme in our sample, IRS notified a taxpayer through a collection notice that it had assessed additional taxes after disallowing a dependent exemption while processing the tax return. The taxpayer called IRS to provide the dependent’s correct SSN. After verifying the SSN, IRS made the abatement 21 days after the date of the collection notice. At the other extreme in our sample, IRS disallowed two dependent exemptions, assessed additional tax, and sent a collection notice. The taxpayer eventually provided sufficient documents to support the exemptions and justify the abatement. However, the abatement took 493 days from the date of the first collection notice. During this time, IRS sent multiple pieces of correspondence and collection notices to the taxpayer about the unpaid assessment and abatement request.

IRS files also showed variation in the amount of taxpayer documentation and number of IRS contacts required for abatements. In another case from our sample, multiple exchanges between IRS and a taxpayer took 245 days from the first collection notice to the abatement. IRS had notified the taxpayer that an additional tax form was needed, and the taxpayer’s representative returned the completed form to IRS showing no additional tax due. About 1 month later, IRS sent its version of the form showing additional tax due, followed by four subsequent bills. The taxpayer hired a second representative who wrote IRS, sent documents, and faxed a form showing no additional tax due. IRS assigned the case to an IRS office that resolves difficult cases. The second representative faxed another copy of the form to this office to get the tax assessment abated.

IRS has taken steps to avoid exemption errors or correct certain types of exemption errors earlier during returns processing rather than through the abatement process. However, IRS has not taken other steps that could potentially correct more errors earlier.
If the errors were not made or were corrected earlier, IRS would not create tax assessments that need to be abated, which could reduce taxpayer and IRS costs. We focused on reducing exemption errors because, as discussed earlier, they accounted for half of the 2.3 million assessments that were abated in fiscal year 1998.

After reviewing our data on the number of exemption errors that lead to abatements, IRS took a step intended to prevent some of the errors. For tax year 2000 returns, IRS revised tax return instructions to state that (1) the name and SSN entered on the tax return should agree with the Social Security card to avoid losing the exemption as well as tax benefits, such as the Earned Income Credit, and (2) taxpayers should call the Social Security Administration to resolve any discrepancy. IRS made the revisions because a test of expanded SSN matching for spouse exemptions claimed on joint tax returns revealed errors with the names and SSNs of the spouses. IRS officials believe that these revisions will help reduce such errors, but the actual effects will not be known until after tax year 2000 returns are processed.

IRS recently decided to revise its procedures during returns processing in an effort to correct some of the remaining exemption errors earlier. However, the largest category of exemption errors, errors with the dependent exemption, will not be corrected by the revisions. These revised procedures involve IRS’ math-error program. About one million of the exemption-related tax assessments were created during returns processing through IRS’ math-error program. In this program, IRS uses computers to find arithmetic errors on tax returns as well as errors in reporting SSNs, exemptions, and certain other items. When it finds math errors on paper returns, IRS processes the return, assesses the tax, and contacts the taxpayer to disclose the reason for the additional tax assessment and to request payment. Afterward, IRS uses its abatement process to correct any errors and eliminate the tax assessments.

Further changes to the math-error program could help to correct errors earlier and avoid assessing taxes that will be abated. Specifically, after IRS’ computerized processing detects exemption errors, computerized checks of previous tax returns could help to correct simple errors, such as transposed SSNs or misspelled or changed names. IRS now does these checks after returns processing when abatements are requested. To notify taxpayers of the corrections and the need to avoid such errors, IRS could send so-called “soft notices”—notices that do not ask taxpayers to provide information or pay additional taxes. If the name or SSN were changed or missing, IRS could suspend processing and contact taxpayers. In effect, this would treat paper returns the same as electronic returns, since IRS does not accept electronic returns that contain math errors. If the contacts do not resolve the errors or if more effort is required, IRS could continue its practice of disallowing the exemptions and assessing additional taxes.

After we shared the results of our work with IRS, IRS decided to change its procedures, effective January 28, 2001, to correct name and SSN errors with claims for spousal exemptions earlier. IRS officials decided to do the checks for correcting these errors during returns processing.
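To make the kind of computerized check described above concrete, the sketch below tries to resolve an invalid SSN from a return by undoing a single adjacent-digit transposition against prior-year data. The function and its matching rule are illustrative assumptions on our part, not IRS's actual procedures:

```python
def match_prior_year_ssn(claimed: str, prior_year_ssns: set[str]) -> str | None:
    """Try to resolve an invalid claimed SSN by checking whether it, or a
    version with one pair of adjacent digits swapped back, appears among
    the SSNs from the taxpayer's prior-year return."""
    digits = claimed.replace("-", "")
    if digits in prior_year_ssns:
        return digits
    # Undo a single adjacent-digit transposition and recheck.
    for i in range(len(digits) - 1):
        swapped = digits[:i] + digits[i + 1] + digits[i] + digits[i + 2:]
        if swapped in prior_year_ssns:
            return swapped
    return None

# A transposed dependent SSN corrected from last year's return:
print(match_prior_year_ssn("123-54-6789", {"123456789"}))  # "123456789"
```

A return that passed such a check could then receive a soft notice rather than an additional assessment that would later have to be abated.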
Before disallowing exemptions and assessing additional taxes, IRS staff are to check spouses’ names and SSNs on tax returns against IRS and Social Security Administration computer data in an attempt to correct the errors. Recently, an IRS official told us that IRS is considering taking two other steps to correct exemption errors during returns processing. The first step would be doing earlier checks for erroneous dependent exemptions. The second step would be to contact taxpayers for any type of exemption error that had not been corrected by the checks. This same official said that IRS had concerns about the time and costs to contact taxpayers during returns processing in order to correct exemption errors. However, IRS did not provide any documentation or details on how or when decisions about these earlier checks and contacts would be made. Considering steps to correct exemption errors earlier is worthwhile because of the potential benefits to a large number of taxpayers. While neither the benefits to taxpayers nor the costs to IRS can be quantified, some information is known. First, IRS already abates almost all tax assessments created through the math-error program for exemption errors in missing or inaccurate names or SSNs. According to our analysis, IRS abated at least 87 percent of the additional tax assessments for name or SSN errors in exemptions claimed on tax year 1997 returns. In deciding whether to approve abatements requested by taxpayers, IRS checks previous tax returns and contacts taxpayers, as needed. Since abatements are requested for almost all of these assessments, IRS is already doing the checks and contacts. Second, although taxpayers’ costs have not been quantified, correcting the exemption errors earlier could reduce the costs that taxpayers incur to correct errors through abatements. To the extent that the earlier checks correct the errors, fewer taxpayers would be contacted. Even if contacted, taxpayers would likely have an easier time finding or compiling records or working with tax representatives because the errors would be found sooner. Some taxpayers (about 26 percent of the cases we reviewed) would no longer face IRS collection actions. The extent to which taxpayer costs would be reduced could be influenced by, among other factors, how many exemption errors continue to be made after IRS clarified its instructions for claiming exemptions. In discussing taxpayer burdens, representatives we interviewed from four groups of tax professionals generally favored the idea of doing the checks and contacts earlier to avoid tax assessments that have to be abated. They said that taking care of the errors earlier would reduce taxpayer burden, particularly when the errors lead to a series of written and telephone contacts to get the taxes abated. Third, IRS could reduce annual operating costs by correcting exemption errors earlier rather than later through abatements. We could not estimate the amount of operating cost savings because available data did not allow us to quantify the costs associated with exemption errors separately from the costs for other types of errors. Even so, by correcting errors earlier, IRS would no longer incur costs to record and collect the tax assessments that would be abated. Nor would IRS incur some of the costs of making abatements, including the costs of processing abatement requests or issuing refunds after abatements are granted. 
Other cost savings could occur from using lower-paid staff, rather than audit staff, to make the checks and contacts earlier. Whether IRS would have overall cost savings depends on the costs to implement the earlier checks and contacts. These one-time costs would offset IRS’ operating cost savings to some extent. Implementation costs could include, if needed, new equipment, computer programming, moving equipment or staff, and training. IRS did not have data on the magnitude of these one-time costs.

Finding ways to reduce the remaining 1.1 million assessments that are abated due to nonexemption errors will be challenging. As discussed earlier, these remaining abated assessments involve a variety of errors that occur infrequently. Since little is known about these errors or their causes, a promising first step towards reducing the errors would be to do research on their causes and on ways to avoid the errors. Doing research on the nonexemption errors has the potential to benefit a large number of taxpayers. However, research also incurs costs, which we did not attempt to estimate. Such costs would depend on the design, scope, and depth of the studies.

Approximately one million taxpayers per year, as well as IRS, incur costs to abate tax assessments created due to exemption errors. Avoiding the errors, or correcting them earlier, could reduce the burden on taxpayers of complying with tax laws. IRS has taken one step intended to help taxpayers avoid these errors—revising instructions for claiming exemptions. In another step, aimed at correcting some exemption errors that continue to be made, IRS decided to do checks of name and SSN errors for spousal exemption claims during returns processing. In addition, IRS is considering implementing earlier—during returns processing—checks for dependent exemption errors and taxpayer contacts if the checks do not correct exemption errors. Considering doing such checks and contacts earlier is worthwhile. While the cost savings to IRS are not known, a large number of taxpayers could benefit. However, IRS did not provide us with any details or documentation about how or when decisions would be made. Little is known about how to reduce nonexemption errors that lead to assessments being abated. Because over one million taxpayers were burdened by such assessments, research to reduce the errors is worth considering.

Regarding name and SSN errors, we recommend that the Commissioner of Internal Revenue make a determination on whether the costs and benefits justify implementing earlier—during returns processing—(1) checks for dependent exemption errors and (2) taxpayer contacts, as needed, for the remaining errors in any type of exemption claim. Regarding the nonexemption errors that lead to assessments that are later abated, we recommend that the Commissioner of Internal Revenue determine whether research to identify causes and solutions is justified.

We discussed our draft report on March 23, 2001, with IRS officials from Wage and Investment who were representing the IRS Commissioner. They agreed to implement both of our recommendations. First, they said that IRS would review the costs and benefits of doing checks and contacts during returns processing for dependent exemption errors. Second, they said that IRS would review available data on nonexemption errors to determine the merits of researching their causes and solutions. IRS was unable to provide written comments in time for inclusion in this report.
We are sending copies of this report to Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means; Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; and Senator Charles E. Grassley, Chairman, and Senator Max S. Baucus, Ranking Member, Senate Committee on Finance. We also are sending copies to the Honorable Paul H. O’Neill, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. Copies of this report will be made available to others upon request. If you have any questions concerning this report, please contact Tom Short or me at (202) 512-9110. Key contributors to this work are listed in appendix I. In addition, Royce Baker, Larry Dandridge, Thomas Venezia, James Fields, Anne Rhodes-Kline, Sam Scrutchins, Thomas Bloom, and Rodney Hobbs contributed to this report.
About one million taxpayers per year, as well as the Internal Revenue Service (IRS), incur costs to abate tax assessments created due to tax exemption errors. Avoiding the errors, or correcting them earlier, could reduce the burden on taxpayers of complying with tax laws. IRS has taken one step intended to help taxpayers avoid these errors--revising instructions for claiming exemptions. In another step, aimed at correcting some exemption errors that continue to be made, IRS decided to check name and social security number errors for spousal exemption claims during returns processing. In addition, IRS is considering implementing earlier--during returns processing--checks for dependent exemption errors and taxpayer contacts if the checks do not correct exemption errors. Doing such checks and contacts earlier is worthwhile. Although the cost savings to IRS are unknown, many taxpayers would benefit. However, IRS did not provide GAO with any details or documentation about how or when decisions would be made. Little is known about how to reduce nonexemption errors that lead to assessments being abated. Because more than one million taxpayers were burdened by such assessments, research to reduce the errors is worth considering.
Defined benefit pension plans are intended to pay retirement benefits that are generally based on an employee’s years of service and other factors. The financial condition of these plans—and hence their ability to pay retirement benefits when due—depends on adequate contributions from employers and sometimes employees, and prudent investments that yield an adequate rate of return over time. Poor investment choices can have serious implications for both the plan sponsor and, potentially, plan beneficiaries. Poor investment results may necessitate greater contributions by the plan sponsor, which could result in lower profits in the case of a private plan sponsor, or higher taxes in the case of a public plan. In some cases, the plan sponsor could opt to require greater participant contributions or reduce future retiree benefits.

Plan sponsors generally try to maximize returns for an acceptable level of risk and, in doing so, may invest in various categories of asset classes, which for many years have consisted mainly of stocks and bonds. Plan sponsors may also invest in other asset classes or trading strategies, sometimes referred to as alternative investments—which can include a wide range of assets such as hedge funds, private equity, real estate, and commodities. Plans may make such investments in an effort to diversify their portfolios, achieve higher returns, or for other reasons. In recent years, hedge funds and private equity have been two of the most common alternative assets held by institutional investors such as public and private pension plans.

Although there is no universally accepted definition of hedge funds, the term is commonly used to describe pooled investment vehicles that are privately organized and administered by professional managers who often engage in active trading of various types of securities, commodity futures, options contracts, and other investment vehicles. Hedge funds can also hold relatively illiquid and hard-to-value investments such as real estate or shares in private equity funds. Although hedge funds have a reputation of being risky investments that seek exceptional returns, this was not their original purpose, and is not true of all hedge funds today. Established in the 1940s, one of the first hedge funds invested in equities and used leverage and short selling to protect, or “hedge,” the portfolio from its exposure to the stock market. Over time, hedge funds diversified their investment portfolios and engaged in a wider variety of investment strategies. As GAO reported in 2008, defined benefit pension plans have invested in hedge funds for a number of reasons, including the desire for investment returns that exceed the returns available in the stock market or obtaining steadier, less volatile returns.

Likewise, there is no commonly accepted definition of private equity funds, but such funds are generally privately managed pools of capital that invest in companies, many of which are not listed on a stock exchange. Unlike many hedge funds, private equity funds typically make longer-term investments in private companies. Private equity funds also seek to obtain financial returns through long-term appreciation based on active management. Strategies of private equity funds vary, but most funds target either venture capital or buy-out opportunities. Venture capital funds invest in young companies that often are developing a new product or technology.
Private equity fund managers may provide expertise to a fledgling company to help it become suitable for an initial public offering. Buy-out funds generally invest in larger established companies in order to add value, in part, by increasing efficiencies and, in some cases, consolidating resources by merging complementary businesses or technologies. For both venture capital and buy-out strategies, investors hope to profit when the company is eventually sold, either when offered to the public or when sold to another investor or company. Unlike stocks and bonds, which are traded and priced in public markets, plans have limited information on the value of private equity investments until the underlying holdings are sold.

Traditionally, hedge funds and private equity funds and their managers have been exempt from certain registration, disclosure and other requirements under various federal securities laws. The presumption is that investors in such vehicles have the sophistication to understand the risks involved in investing in them and the resources to absorb any losses they may suffer. However, as a result of the Dodd-Frank Act, the managers of such investment vehicles will be regulated in ways that they have not been previously. For example, hedge fund and private equity managers will generally now be required to register with the SEC, establish a specific regulatory compliance program, and comply with various record-keeping requirements. While these fund managers must now register with the SEC, the funds they manage will remain unregistered. Unlike other investment funds—such as mutual funds—that register with the SEC, hedge funds and private equity funds are thus not subject to certain requirements, such as limitations on leverage and minimum requirements relating to corporate governance.

Private sector pension plan investment decisions must comply with provisions of the Employee Retirement Income Security Act (ERISA), which set forth fiduciary standards based on the principle of a prudent standard of care. Under ERISA, plan sponsors and other fiduciaries must (1) act solely in the interest of the plan participants and beneficiaries and in accordance with plan documents; (2) invest with the care, skill, and diligence of a prudent person familiar with such matters; and (3) diversify plan investments to minimize the risk of large losses. Under ERISA, the prudence of any individual investment is considered in the context of the total plan portfolio, rather than in isolation. Public sector plans, such as those at the state, county, and municipal levels, are not subject to funding, vesting, and most other requirements applicable to private sector defined benefit pension plans under ERISA, but must follow requirements established for them under applicable state law. Many states have enacted standards comparable to those of ERISA. ERISA’s “prudent man” standard with respect to investment duties is treated under 29 C.F.R. § 2550.404a-1(b).
In general, it provides that the prudent man standard is satisfied if the fiduciary has given appropriate consideration, among other facts and circumstances, to the following factors: (1) the composition of the plan portfolio with regard to diversification of risk; (2) the volatility of the plan investment portfolio with regard to general movements of investment prices; (3) the liquidity of the plan investment portfolio relative to the funding objectives of the plan; (4) the projected return of the plan investment portfolio relative to the funding objectives of the plan; and (5) the prevailing and projected economic conditions of the entities in which the plan has invested and proposes to invest.

In 2008, we reported on plan investments in hedge funds and private equity, including a discussion of the benefits that plan fiduciaries seek and challenges they face in doing so. We concluded that, because these investments require a degree of fiduciary effort well beyond that required by more traditional investments, doing so can be a difficult challenge, especially for smaller plans. Such plans may not have the expertise or financial resources to be fully aware of these challenges, or have the ability to address them through negotiation, due diligence, and monitoring. Further, we noted that, while plans are responsible for making prudent choices when investing in any asset, the Department of Labor (Labor) also has a role in helping to ensure that pension plan sponsors fulfill their fiduciary duties in managing pension plans that are subject to ERISA. This can include educating employers and service providers about their fiduciary responsibilities under ERISA. In light of these duties, and the risks and challenges of investing in hedge funds and private equity, we recommended that the Secretary of Labor issue guidance specifically designed for qualified plans under ERISA. We specifically called for guidance that would (1) outline the unique challenges of investing in hedge funds and private equity; (2) describe steps that plans should take to address these challenges and help meet ERISA requirements; and (3) explain the implications of these challenges and steps for smaller plans. To date, Labor has not implemented this recommendation. In responding to GAO’s 2008 recommendation, Labor noted that while it would consider the recommendation, the lack of uniformity among hedge funds and private equity funds could make development of comprehensive and useful guidance difficult.

Hedge fund and private equity indexes show that these investments were significantly affected by the financial market turbulence of recent years, and plans and experts we contacted indicated that pension plan investments were not insulated from losses. According to a composite hedge fund index, in the midst of the financial crisis, hedge funds produced quarterly losses as great as 16 percent in the last quarter of 2008. Similarly, a private equity index measured losses throughout most of 2008, with losses of a little more than 15 percent in the last quarter. In comparison, the stock market, as measured by the Standard and Poor’s 500 index, declined in value by close to 40 percent in 2008 (see table 1 for a comparison of recent data from various indexes). Our in-depth discussions with plan representatives were largely consistent with these national trends.
Although not all plan sponsor representatives we interviewed reported specific performance data, a number of plan representatives disclosed peak annual hedge fund losses in 2008 or 2009 ranging from about 12 percent to about 25 percent. Pension plan representatives we interviewed generally reported more favorable performance for private equity. A few plan representatives reported private equity returns that were somewhat lower than in previous years, and one plan reported a close to 20 percent loss for its private equity portfolio in 2009. Despite experiencing some significant losses during the financial crisis, representatives of selected plan sponsors we contacted generally told us that both their hedge fund and private equity investments met their expectations over the last 5 years given their reasons for investing. Most of the 22 pension plan representatives we contacted indicated that hedge fund investments met their expectations given their reasons for investing. In 2008, we reported that many plans had invested in hedge funds in response to prior significant stock market losses, and because they were seeking specific benefits such as achieving (1) lower volatility; (2) a more diversified portfolio by investing in a vehicle that would not be correlated with other asset classes in the portfolio; and (3) returns greater than those expected in the stock market. Given these reasons for investing in hedge funds, most of the 22 plan representatives we interviewed for this report said that these investments met plan expectations (see table 2 for an overview of the responses). Representatives of several plans stressed the moderating impact of hedge fund investments by noting their ability to provide less price volatility than other investments. One plan representative observed that, even with hedge fund fees, their losses of 14 percent were still preferable to stock market losses of 40 percent. Representatives from another plan explained that, although hedge fund performance more closely paralleled the stock market during the period than desired, there was generally no safe haven and their hedge fund investments have generally performed well. A few plan representatives noted that hedge funds delivered lower volatility than other investments. Representatives from one plan were particularly satisfied with how the plan's hedge fund investments helped limit overall portfolio risks, noting that although returns were below benchmarks, the hedge funds provided much less volatility than the plan's publicly traded stock holdings. Similarly, representatives from another plan noted that, since 2002, hedge funds have provided adequate returns, but with much less volatility than publicly traded stocks. Additionally, representatives from one plan, who had not invested in hedge funds when we interviewed them for our 2008 report, have recently begun implementing a relatively small hedge fund allocation that they believe will complement the rest of their portfolio and provide greater diversification benefits, including reducing overall portfolio volatility. Some plan sponsor representatives stressed the positive long-term performance of their hedge fund investments, despite intervals of poor performance. While these plan representatives would have preferred better hedge fund performance during the 2008-2009 financial crisis, hedge funds have nonetheless filled an important long-term role in these plans' portfolios.
Representatives from one plan noted hedge fund losses of about 12 percent during 2009 but indicated that, overall, these investments have performed well since 2004. Representatives from one plan told us that while they were disappointed by the size of hedge fund losses in 2008-2009, these investments have generally beaten long-term benchmarks and have recovered since the crisis. Moreover, they noted that, compounded over the last 15 years, the plan's hedge fund investment returns are about twice those of the stock market. These plan representatives also emphasized the importance of hedge funds, as well as other alternative investments, to long-term investment returns by noting that investing solely in fixed income investments would not have sustained the plan's funding needs, particularly given that the plan sponsor had not made plan contributions in over 20 years. In contrast, a number of plan sponsor representatives and experts noted that hedge funds did not perform as expected. Representatives from one plan explained that they expected these investments to provide an absolute return—a positive return regardless of the conditions in the stock market—in exchange for muted returns in robust markets. Another plan representative noted that while he understood these "absolute return" funds may not always generate positive returns in all market environments, he expected their hedge funds to perform better than the more than 20 percent losses they experienced from 2008-2009. Similarly, a representative from one plan expected hedge fund investments to perform more independently of stock market trends and was surprised and disappointed by the magnitude of the negative returns. This representative told us that for every dollar of loss in the 2008-2009 stock market, their hedge fund investments lost two-thirds of a dollar. A few experts noted that pension plan hedge fund investments were more correlated than expected with the public markets during the financial crisis, resulting in what one expert referred to as exacerbated losses. For example, one expert noted that some plan representatives may have overpaid for mediocre returns when they paid hedge fund performance and management fees to obtain returns similar to the stock market. Further, one specific hedge fund strategy performed especially poorly. Several plans singled out the so-called "portable alpha" strategy, which typically employs hedge funds in order to generate returns that exceed common market benchmarks. A representative of one plan told us that the plan's portable alpha program was hugely disappointing and consequently was being dismantled. Specifically, in 2008-2009, a portion of the investment lost considerable value when the stock market fell by more than 30 percent. Some plan representatives and one surveyed expert also pointed to the impact of fees on net performance. One expert cited the extra layer of fees charged by funds of funds managers, asserting that these fees substantially lowered plans' net returns. Similarly, a plan representative we spoke with found hedge fund fees at the individual fund level to be eroding investment returns. This representative noted that while the plan's hedge fund gross return has been outperforming the rest of the portfolio, the investment has underperformed after fees have been deducted for the last few years. For this reason, the plan is consciously lowering its allocation to hedge funds.
A representative from another plan noted dissatisfaction with the plan's hedge fund of funds investment as one of the reasons that the plan had chosen not to reinvest and was considering firing the fund manager. The experience of plans with private equity investments should be considered in the context of the long-term nature of these investments, which require lengthy financial commitments and delayed financial returns (see fig. 1). Given the long-term nature of private equity investments, nearly all of the 22 pension plan representatives we interviewed were generally satisfied with their private equity investments over the last 5 years. As we found in our 2008 report, plans we interviewed generally invested in private equity to attain higher returns than the stock market offered, in exchange for greater risk. Given these reasons for investing in private equity, 20 of 22 plan representatives reported that the plan's private equity investments met plan expectations. Further, nearly half of plan representatives indicated that their plans' private equity investments outperformed public equities over the last 5 years. For at least one plan, private equity was the highest performing asset class. In particular, several plan representatives and surveyed experts noted that opportunistic investments (those that take advantage of underperformance during market cycles, such as distressed debt) performed relatively well during the last 5 years. Representatives from at least one plan said they were disappointed to have had insufficient capital available to invest more heavily in some of these opportunities. Like many of the plan representatives we interviewed, experts we surveyed largely found private equity investment performance for the period to be positive. Although plan representatives we interviewed almost unanimously reported favorable results for private equity, this has not necessarily been true of all plans over the last 5 years. As we reported in 2008, performance varied more widely among private equity funds than among other asset classes. For this reason, plan representatives emphasized the importance of investing in the top funds, with some noting that they would not invest in private equity unless they could invest in funds considered to be in the top quartile. Three of the experts we surveyed in 2011 also noted varying performance among private equity funds. One expert noted a wide dispersion in the performance of private equity funds and observed that this dispersion likely reflects the broad experiences of pension plans over time. Similarly, two other experts cited evidence that, over the long term, broad private equity fund returns did not outperform the stock market, and one of these experts reported that the lower performance may be attributable to the typically riskier equities held in these investments. A representative from one plan, for example, remarked that the plan's venture capital investments did not perform well. In this particular case, involving the biotech industry, the representative noted that this was less a direct result of the financial crisis and more a function of the decline in that industry as a whole. Representatives from one large plan told us that venture capital investment performance had been problematic for them in the last 10 years. Similarly, we found that a number of plans we interviewed had lowered or eliminated their venture capital investments in recent years.
Pension plan representatives we contacted experienced some challenges in hedge fund and private equity investing beyond those of more traditional investing, including limited liquidity and transparency, and the negative impact of the actions of other investors in the fund—sometimes referred to as co-investors. A number of plan representatives we interviewed experienced challenges with investment liquidity—a plan's limited ability to redeem investment shares on demand—in order to meet plan obligations. Although hedge funds typically have limitations on the timing and magnitude of investor redemptions, a few plan representatives we contacted were surprised and financially harmed by "discretionary gates"—limitations on redemptions imposed at a fund manager's discretion. For example, a representative from one large plan told us that some hedge fund managers imposed discretionary gates based on what was best for the fund's business model and not what was in the best interests of the investors. This representative was concerned that hedge fund managers lacked incentives to seek returns and were instead focused on gathering assets, locking them up, and collecting the fees. Public documents from this plan noted the possibility that a hedge fund manager can earn tens of millions of dollars in performance fees in 1 year and then experience sizable losses in another, resulting in only a minimal capital gain or even a net loss for the investor, but sizable profits for the fund manager at the end of the partnership. Also, because plan representatives from at least one plan intended to use hedge fund redemptions to pay for plan obligations, unexpected discretionary gates forced them to instead sell public equities at a significant loss. Specifically, representatives from one plan told us that when the market was down more than 30 percent, they were unable to access their hedge fund investments due to gates imposed by the fund manager after other co-investors began liquidating their holdings. Representatives from this plan told us they were then compelled to sell public equities at a price well below their assessment of the equities' intrinsic value, in order to meet plan obligations, including benefit payments to plan participants. Some plans also faced challenges meeting requests for committed capital—money they have committed to the fund manager for investment—from private equity fund managers. A few plan representatives relied on a "self funding" private equity program in which private equity investment proceeds are sufficient to pay for a portion or all of the program's committed capital. However, in some cases, the severe market decline during this period limited investment proceeds. Consequently, a few plans had to look for liquidity elsewhere in their portfolios in order to fund capital commitments. While the plan representatives we spoke with were able to meet these financial commitments, a number of plans said they limited new private equity investment during this period. A small number of plans we interviewed noted challenges with hedge fund transparency during this period. One plan representative we interviewed invested in a fund of hedge funds with very limited transparency, but one that promised access to certain high-quality hedge funds. As transparency improved after the 2008-2009 financial crisis, the plan sponsor learned that the various funds of funds had considerable overlapping investments, which likely amplified the funds of funds' negative performance.
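Several of the liquidity problems described above turned on redemption gate mechanics. The following is a minimal sketch, in Python, of how a fund-level gate can prorate and defer redemption requests. The 10 percent quarterly gate, the dollar figures, and the pro rata rule are hypothetical assumptions of ours; discretionary gates of the kind plan representatives encountered are imposed at the manager's judgment and need not follow any fixed formula.

```python
# A minimal sketch of fund-level redemption gate mechanics, under our own
# simplifying assumptions: a fixed 10 percent quarterly gate with pro rata
# fills. Discretionary gates, as described above, are imposed at the fund
# manager's discretion and need not follow a formula like this one.

GATE = 0.10                      # max share of fund assets redeemable per quarter
fund_assets = 1_000.0            # hypothetical fund net asset value ($ millions)

# Hypothetical redemption requests from co-investors ($ millions)
requests = {"plan_a": 60.0, "plan_b": 120.0, "plan_c": 90.0}

capacity = GATE * fund_assets            # 100.0 available for redemptions
total_requested = sum(requests.values()) # 270.0 requested

# When requests exceed the gate, each investor is filled pro rata and the
# unfilled balance is carried over to later redemption windows.
fill_ratio = min(1.0, capacity / total_requested)
for investor, amount in requests.items():
    paid = amount * fill_ratio
    print(f"{investor}: requested {amount:.0f}, redeemed {paid:.1f}, "
          f"deferred {amount - paid:.1f}")
```

The sketch illustrates why a plan counting on redemptions to pay benefits can suddenly find most of a request deferred: once aggregate requests exceed the gate, every investor's redemption shrinks, regardless of that investor's individual need for cash.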
A few plan representatives were unpleasantly surprised by the extent to which their plans' hedge funds were invested in "side pockets"—separate side accounts holding illiquid investments, such as private equity or real estate. For example, representatives from one plan told us they were not fully aware of the way some of their funds were invested in these side pockets and consequently were surprised by the illiquidity of the investment. A representative from another plan was similarly surprised by how embedded some of their hedge fund investments were with side pockets, which proved problematic when the plan looked to these hedge fund investments for liquidity during the financial crisis and it was not available. Representatives of another plan expressed an aversion to such side pocket investments and preferred to invest in private equity directly rather than doing so unknowingly through a hedge fund manager. A few of the plan representatives noted challenges related to co-investors' actions. Under commingled investment arrangements, each investor owns a certain number of shares in a fund. During the recent financial crisis, these arrangements became particularly challenging for a few plan representatives. For example, representatives from one plan reported that while they were able to meet all of their private equity capital calls—a request from the fund manager for the investors to provide a portion of the money they have committed to investing—they were concerned about the ability of other co-investors to do so. In response to these concerns, the representatives felt compelled to take the time to call each of their fund managers to confirm the ability of all the investors to meet their financial commitments. Representatives of another plan noted that the actions of co-investors can affect an investment strategy, which may ultimately affect returns. For example, representatives of this plan said they had invested with a private equity fund manager who was implementing a strategy involving investments in 10 companies valued at $1 billion each. However, because not all investors could meet their financial commitments, the fund manager had to restructure the investment strategy. The plan representatives were troubled by the strategy changes—involving investments in different companies—that the fund manager had to make as a result. At least one plan representative also indicated that an onslaught of hedge fund redemptions by other co-investors damaged their investments. For example, representatives of one plan told us that many of their co-investors, alarmed by large losses during the financial crisis, moved quickly to cash out investments. Because co-investor redemptions led to further fund losses, plan representatives felt it was necessary to cash out as well. However, they were unable to do so, because the fund manager had imposed a discretionary gate to prevent further losses. Available data reveal that plan investments in hedge funds and private equity have continued to increase, and our contacts with 22 public and private defined benefit (DB) plan sponsors also reveal a continued commitment to these investment vehicles. Nonetheless, some plans have reduced their allocations or made significant changes to their strategic approach as a result of experiences in recent years.
In addition, plan representatives we contacted took significant steps to improve the terms of their investments, including negotiating lower fees or more advantageous fee terms, and obtaining greater liquidity or transparency. Not all plans may be able to make such improvements, however. Available data and discussions with plan representatives indicate that DB plans have continued to invest in hedge funds and private equity in recent years. The percentage of large plans investing in both hedge funds and private equity has increased since the onset of the 2008 financial crisis. According to a Pensions & Investments survey, the percentage of large plans (as measured by total plan assets) investing in hedge funds grew from 47 percent in 2007 to 60 percent in 2010 (see fig. 2). Over the same time period, the percentage of large plans that invested in private equity also grew—from 80 percent to 92 percent. For both hedge funds and private equity, as figure 2 shows, these trends are a continuation of a decade-long upward trend. Data from the same survey reveal that investments in hedge funds and private equity typically constitute a small share of plan assets. The average allocation of portfolio assets to hedge funds among plans with such investments was a little over 5 percent in 2010. Similarly, among plans with investments in private equity, the average allocation of portfolio assets was a little over 9 percent. We reported in 2008 that available survey data showed larger plans were more likely to invest in hedge funds and private equity than midsize plans and, according to a survey by Greenwich Associates, that seemed to be the case in 2010 as well. The survey found that 22 percent of midsize plans—those with $250 million to $500 million in total assets—were invested in hedge funds, compared with 40 percent of the largest plans—those with over $5 billion in total assets (see fig. 3). Survey data on plans with less than $200 million in assets are unavailable, so the extent to which these smaller plans invest in hedge funds and private equity is unclear. Comments made to us by representatives of selected plan sponsors generally paralleled these national data. Of the 18 plans participating in our review that had invested in hedge funds, 17 told us they had either maintained or increased their allocations since our original contact in 2007 or 2008. For example, one public plan that had already invested a substantial percentage of its assets in hedge funds increased its investments by about another 10 percent of the total portfolio. Representatives of this plan explained that hedge fund investments, while not immune to stock market declines, had nonetheless performed much better than stocks during the financial crisis. Similarly, of the 22 plans participating in our review that had invested in private equity at the time of our original contact in 2007-2008, 19 told us that they had either maintained or increased their target allocation. Each of the 10 plans that had increased their allocations also cited positive returns. For example, one plan representative explained that the allocation to private equity had increased even though the overall allocation to publicly traded stocks had decreased. The representative explained that the plan was lowering its allocation to stocks as part of a broad risk reduction strategy, and that the additional return expected from private equity would therefore be essential.
As the representative explained, this change was made with the belief that the increase in private equity will produce relatively high risk-adjusted returns and will therefore compensate for the lower expected yield resulting from the shift out of publicly traded stocks to bonds. Experiences of recent years have led most plans we contacted to make significant changes to their hedge fund or private equity strategies, and in three cases, reductions in the overall allocation to hedge funds or private equity. For example, representatives of the one plan participating in our review that had reduced its overall allocations to hedge funds said that the plan’s poor experience with hedge funds was tied to illiquidity. These representatives explained that they had expected that their hedge fund investments would not be difficult to cash in when they needed to pay obligations, but they were prevented from doing so by discretionary gates imposed by the fund manager. As a result, the plan was forced to sell stocks during the crisis when values were depressed, resulting in significant losses. Several plans also discontinued or reduced the use of certain hedge fund strategies. For example, representatives of three plans told us that they had discontinued so-called “portable alpha” strategies, which commonly use hedge funds to help achieve returns that exceed those of the public equities market. According to industry press, this technique largely fell out of favor as a result of substantial investment losses during the 2008-2009 financial crisis. However, plan representatives indicated that disenchantment with the portable alpha technique did not necessarily mean abandonment of hedge funds generally. For example, after one of these three plans discontinued the portable alpha strategy, it opted to retain the hedge fund portion of the portable alpha investment. Several other plans indicated that they invested in less aggressive hedge fund strategies. For example, a representative of one plan explained that the plan had shifted from hedge funds designed to deliver investment returns that exceed the overall stock market to strategies that will deliver returns comparable to the stock market but with less risk. In contrast to the general trend toward greater investments in hedge funds, some plans eliminated or substantially reduced their use of funds of hedge funds. Representatives of one plan explained that this step was part of a planned evolution—the plan had invested in funds of funds as a first step, and planned on using its relationships with funds of funds managers to develop the expertise to make direct hedge fund investments. By 2011, this plan had accomplished that objective, and 80 percent of its hedge fund investments were direct hedge fund investments. Another plan, however, discontinued funds of funds investments, concluding that funds of funds added an unnecessary layer of fees, offered the plan little opportunity to influence fees of underlying hedge funds, limited the plan’s ability to conduct manager due diligence, and led to some overlapping investments in underlying individual hedge funds. A representative of this plan told us that one of the funds of funds had emphasized its unique access to top tier hedge funds, and the plan sponsor later learned that some of its other funds of funds were invested in the same vehicle, diminishing the diversification benefits of the fund of funds. 
However, funds of funds may still be necessary for smaller pension plans, or for plans that lack well-developed internal investment and risk management functions, that wish to invest in alternatives such as hedge funds and private equity. Several plans indicated that they have adjusted their private equity strategies in recent years. For example, representatives of several plans noted that, as a result of their experiences during the financial crisis, they preferred investing in private equity buyout funds that rely more on the implementation of operational improvements in portfolio companies, rather than funds that rely on so-called financial engineering—using leveraging techniques to enhance the value of the stock. One plan representative explained that many private equity firms using financial engineering techniques had suffered severe losses during the financial crisis. As a result, this representative said the plan now prefers private equity funds that add value to portfolio companies through means such as better control of costs, improved marketing, and a more efficient distribution chain. Also, because of the diminished returns of venture capital funds in recent years, representatives of several plans said they have reduced investments in such funds. Finally, several of the plans we contacted had made relatively short-term, opportunistic investments in distressed debt as a result of the financial crisis. One plan representative explained that the financial crisis gave rise to this opportunity because distressed debt oriented funds tend to perform well in bad economic times, as the universe of troubled companies grows and other investors become more risk-averse. Steps plan sponsors have taken to obtain more advantageous terms when investing in hedge funds and private equity include lower fees, greater control and transparency, and changed liquidity terms. More advantageous fee terms. A little more than half of the plans included in our review have taken steps to obtain more advantageous fee terms for both hedge fund and private equity investments. For example, as part of a broad policy change regarding its relationship with hedge fund managers, one large public plan has determined that it will seek to avoid investing in hedge funds that insist on the traditional "2 and 20" fee structure, under which investors pay an annual management fee of 2 percent of assets under management and a performance fee of 20 percent of profits. Instead, the plan will seek to limit both management and performance fees and to ensure that performance fees are paid not on an annual basis, but for more sustained, long-term performance. Representatives of another plan explained that they had obtained lower fees in exchange for trade-offs related to other aspects of the investment terms. Specifically, for some hedge fund investments, this plan pays a flat fee of 1.5 percent of assets under management, instead of the formerly standard 2 percent fee. In exchange, the plan opted to sacrifice liquidity by agreeing to a 2-year lockup of its investment, thus providing the fund manager with greater assurance that its capital and investment strategies would not be disrupted. While illiquidity by itself may be perceived as a disadvantage to an investor, this plan believed less liquidity was a worthwhile trade-off for lower fees.
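To make concrete how such fee terms affect what investors actually keep, the following is a minimal sketch in Python. The gross-return scenarios and the simplifying assumptions noted in the comments are ours; in particular, the description of the negotiated 1.5 percent "flat fee" leaves open whether a performance fee still applied, so the sketch shows both readings.

```python
# A minimal sketch of how the fee structures discussed above affect net
# returns. Simplifications (ours): fees are assessed once per year on
# beginning-of-year assets, the performance fee applies only to gains net
# of the management fee, and no high-water mark or hurdle is modeled.

def net_return(gross, mgmt_fee, perf_fee=0.0):
    """Investor's return after management and performance fees."""
    after_mgmt = gross - mgmt_fee              # management fee on assets
    gain = max(after_mgmt, 0.0)                # performance fee only on gains
    return after_mgmt - perf_fee * gain

# Hypothetical fee structures: the traditional "2 and 20", the same with a
# negotiated 1.5 percent management fee, and a 1.5 percent flat fee alone.
structures = {
    "2 and 20":      dict(mgmt_fee=0.020, perf_fee=0.20),
    "1.5 and 20":    dict(mgmt_fee=0.015, perf_fee=0.20),
    "1.5 flat only": dict(mgmt_fee=0.015),
}

for gross in (0.15, 0.05, -0.10):              # hypothetical gross returns
    nets = ", ".join(f"{name}: {net_return(gross, **fees):+.2%}"
                     for name, fees in structures.items())
    print(f"gross {gross:+.0%} -> {nets}")
```

Even small differences in the management fee compound over time, and the performance fee claims a fifth of any gain. This is consistent with the observation by plan representatives, discussed earlier, that gross outperformance can become net underperformance once fees are deducted.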
Principles developed by private equity investors. Large pension plans and other institutional investors in private equity have, through the Institutional Limited Partners Association (ILPA), taken significant steps to promote more advantageous terms of investment, including lower fees and better fee terms. The ILPA Private Equity Principles address in some detail how fees should be aligned with the interests of investors. For example, ILPA principles advocate a fee arrangement that would help ensure that investors get back all invested capital, plus a specified return on investment, as soon as these returns are available. Sometimes referred to as a "European waterfall," this arrangement dictates that investors recover their full initial investment plus a specified return on investment—such as an annualized 8 percent—before the fund manager obtains any share of the profits. This arrangement contrasts with an "American waterfall," under which the fund manager may collect profits corresponding to the sale of individual portfolio companies on a "deal by deal" basis, regardless of whether investors have obtained any return on their total investment in the fund. The overall advantage of the European waterfall for investors is that they can recapture their initial invested capital plus a specified return, as soon as that return exists, taking into account any losses. Further, because the fund manager does not obtain a share of the profits until after the investors have received the specified return, the need for reclamations of disbursements that have been made to the fund manager is minimized. Such reclamations—commonly referred to as "clawbacks"—may be necessary if profits paid to the fund manager based on the sale of portfolio companies early in the life of a fund are negated by subsequent losses. The ILPA Principles also address other issues, including notification of management changes and the fund management's financial stake in the fund.
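The difference between the European and American waterfalls described above can be made concrete with a small numerical sketch. The deal outcomes, the treatment of the 8 percent hurdle, and the 20 percent carried interest rate below are illustrative assumptions of ours, not terms drawn from the ILPA Principles.

```python
# An illustrative comparison of the two distribution waterfalls described
# above. The deal outcomes, 8 percent hurdle, and 20 percent carried
# interest are hypothetical assumptions; for simplicity the preferred
# return is applied once to total capital rather than compounded
# annually, and management fees are ignored.

CARRY = 0.20   # fund manager's share of profits ("carried interest")
PREF = 0.08    # specified return owed to investors before any carry

# Hypothetical portfolio: (capital invested, sale proceeds) per company
deals = [(100.0, 180.0), (100.0, 150.0), (100.0, 40.0)]

invested = sum(cost for cost, _ in deals)      # 300.0
proceeds = sum(sale for _, sale in deals)      # 370.0

# American ("deal by deal") waterfall: carry accrues on each profitable
# sale as it occurs, regardless of losses elsewhere in the fund.
american_carry = sum(CARRY * max(sale - cost, 0.0) for cost, sale in deals)

# European (whole-of-fund) waterfall: investors first recover all invested
# capital plus the preferred return; carry applies only to the excess.
hurdle = invested * (1 + PREF)                 # 324.0
european_carry = CARRY * max(proceeds - hurdle, 0.0)

print(f"American waterfall carry: {american_carry:.1f}")   # 26.0
print(f"European waterfall carry: {european_carry:.1f}")   # 9.2
```

In this example the fund earns a modest overall profit, yet the American waterfall pays the manager carry on the two profitable deals even though the third deal erased much of the gain; absent a clawback, investors bear that difference. Under the European waterfall, the manager shares only in profits above the investors' full capital plus the specified return.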
Enhanced transparency, control, and liquidity through separate accounts. Many of the plans we contacted told us that some of the challenges of hedge fund investing could be addressed through the use of separate accounts in place of commingled funds. Under a commingled hedge fund arrangement, the investor owns a certain number of shares in the fund, but the hedge fund manager determines what assets to invest in, and the partnership collectively owns the underlying assets (see fig. 4). In contrast, under a separate account, the hedge fund manager essentially serves as a consultant who manages the assets in a way that generally parallels the hedge fund itself, but the investor may specify investment guidelines that result in differences between the commingled hedge fund and the separate account. Plan representatives and financial industry experts cited multiple benefits of separate accounts, including (1) precise knowledge of the nature of the underlying assets, (2) the ability to exclude from the account certain assets held in the commingled hedge fund, and (3) much greater liquidity, because plan sponsors own and can sell the underlying assets at will. Separate account arrangements are, however, more costly than commingled funds, and hedge fund managers generally will not offer such arrangements unless the size of an investment exceeds a certain threshold. Other steps. Plan sponsor representatives also mentioned other steps they took to address difficulties of the last several years. Some plans now seek specific contractual terms that affect liquidity or other aspects of the investment. For example, representatives of one plan explained that they now seek investor-level gates, under which cash-out limitations would be triggered once an investor has liquidated more than a specified amount of its investment. Other co-investors would not be affected and could still cash out under the normal terms of the hedge fund. Other plans have established certain criteria for selecting hedge fund or private equity funds. A representative of one plan, for example, said that the plan avoids hedge funds that have so-called side pockets—illiquid investments held separate from the primary fund—such as a hedge fund that has an investment in a private equity fund. A representative of one plan, which had been surprised by the existence of such side-pocket illiquid investments, noted that such investments can exacerbate illiquidity during stressful times. A representative of another plan noted that the plan prefers to select its own private equity investments and avoid locking into an investment of a hedge fund manager's choosing. Finally, a few plans made changes to overall portfolio management practices as a result of their experiences with hedge funds and private equity. For example, one plan established a larger cash reserve, and representatives of two plans described steps to enhance or monitor liquidity. A few plan representatives and experts described other improvements to their selection or monitoring processes for hedge funds or private equity investments. For example, two plan sponsors said they are much more focused on how fund managers establish the value of invested shares. One plan representative noted that, in the past, the plan took valuations provided by the fund manager at face value, but it now examines valuations much more closely. Representatives of other plans said that, as a result of massive hedge fund cash-outs by co-investors, they now consider the nature of a fund's other investors before investing. One plan representative explained that he prefers co-investors who will ride out market turbulence and not flee the fund during episodes of volatility. Several surveyed experts cited diligence improvements, including better operational due diligence. Some public plans have also taken significant steps to improve and oversee the process of selecting hedge funds, private equity, and other investments. For example, a special review undertaken by one large public plan we contacted found significant problems involving the role of placement agents and accompanying malfeasance by public officials, which significantly compromised the plan's selection of private equity funds and other investment vehicles. Among other things, the report raised the possibility that some private equity investments had been based on a relationship with a placement agent, rather than on the quality of the investment. Consequently, potentially superior investments may have been bypassed in favor of those with better connections, and the fund ultimately paid excessive fees that bore little or no relationship to the services rendered by the placement agent. The report's conclusions emphasized that plan officials must increase vigilance over those portions of the plan—such as hedge funds and private equity—that have not traditionally been subject to as great a degree of public scrutiny as other types of investments. The review also offered numerous recommendations designed to prevent a recurrence of these events, and the plan has taken some actions.
For example, the plan has advocated, and the California state legislature has enacted, a state law that imposes on placement agents the same disclosure and registration requirements that apply to lobbyists, and the plan has obtained over $200 million in fee reductions and an agreement from elite money management firms to avoid using placement agents for new plan investments. Further, partly as a result of the review, the plan developed a comprehensive new policy designed to ensure that it had more advantageous terms of investment with its hedge fund managers. According to a representative of the National Association of State Retirement Administrators, other public plans have experienced similar problems and have made comparable reforms. Although some plans have taken significant steps to improve the terms of hedge fund and private equity investments in recent years, not all plans may be able to take such steps, and it is not clear how extensive such changes have been. For example, separate accounts may not be a practicable option for all plan sponsors. Separate accounts impose additional duties on hedge fund managers and, therefore, the fees associated with them are often somewhat higher. In addition, they impose additional burdens on the investor, such as ensuring that the management of the separate account matches that of the commingled fund. Further, according to plan sponsors and experts, hedge fund managers will establish and operate separate accounts only for investments of a certain magnitude; hedge fund managers may not establish separate accounts for investments of less than approximately $100 million. As a result, separate accounts would not be an option for plans unable to make an investment of this magnitude. Although our survey of experts identified some of the same actions that plan representatives described, the narrative responses revealed no clear pattern or consensus regarding these actions. Further, plan representatives and some experts indicated that not all plans would be able to take the steps described above. For example, plans' ability to obtain better fee terms is not universal. One plan representative noted that his plan is not large enough to have much negotiating power with fund managers, and the plan generally accepts the manager's standard fee structure. Another plan representative noted that the top fund managers have not had to adjust fees. Also, with regard to due diligence steps, some surveyed experts indicated that difficulties are likely to be concentrated among smaller plans or plans with fewer resources. For example, one respondent stated that while the use of best practices is becoming more widespread, failures to observe them occur among smaller funds that lack resources or plans that are influenced by a salesperson. Finally, it is not clear whether some of the changes in recent years will permanently change the landscape. One leading plan consultant noted that, since the financial crisis, plans have gained significant bargaining power with hedge fund managers who desire plan investments. However, representatives of two plans also indicated that this development may be cyclical, an outgrowth of the troubled financial markets in recent years. These representatives also speculated that, when financial markets heat up again, the environment may change to a "seller's" market, and fund managers may be able to reassert fee structures and other investment terms that are less advantageous to investors.
Various entities have developed guidance applicable to plan investments in hedge funds and private equity, ranging from broadly applicable guidance issued by Labor to detailed guidance issued by federal advisory and industry bodies. While Labor has not developed guidance specifically addressing hedge funds or private equity, departmental officials cited a 1996 information letter from Labor to the Comptroller of the Currency that discusses the application of ERISA principles regarding the use of alternative investments. The letter does not refer to hedge funds or private equity, but departmental officials said that its basic principles could be applied to these types of investments. The letter addresses pension plans' use of derivatives in their investment portfolios and states that investments in derivatives are subject to ERISA fiduciary responsibility rules, just as any other investment. In light of this, the letter emphasizes several key considerations, including the following. Sophistication. Such investments may require more sophistication and a deeper understanding on the part of fiduciaries than other investments. Adequate information. Fiduciaries are responsible for obtaining sufficient information to understand such investments and, if the investment is in a pooled fund managed by another entity, the fiduciary should obtain sufficient information to determine the nature of the pooled fund's uses of derivatives. Understanding of investment risk. The market risks of these investments should be understood and evaluated in terms of, among other considerations, the effect they have on the portfolio's overall risk. Understanding of operational and legal risk. The fiduciary must determine whether it has adequate information and risk management systems in place given the nature, size, and complexity of the investment, and must ensure proper documentation of a derivative transaction. While Labor has issued this general guidance applying to investments in derivatives, other organizations have published guidance specifically encompassing or targeted at hedge funds and private equity. In December 2011, the Organisation for Economic Co-operation and Development (OECD) and the International Organisation of Pension Supervisors (IOPS) published a set of good practices for pension plans' use of alternative investments, including hedge funds and private equity. Based on a survey of OECD and IOPS members, this document offers recommended good practices on issues such as investment policy, risk management, and contractual terms, as well as best practices for pension fund regulators. In 2009, the President's Working Group on Financial Markets issued a report detailing important considerations and best practices for hedge fund investors, including specific guidance for fiduciaries. This document provides basic background information about hedge funds, distinguishes them from more traditional investments, and outlines some of the basic considerations a fiduciary should make in the earliest stages of considering a hedge fund investment. The document also provides extensive guidance and suggestions for best practices related to due diligence steps, risk management, and various challenges involved in hedge fund investing, including valuation, fees and expenses, and legal and regulatory considerations, among other issues.
Similarly, the Greenwich Roundtable, a nonprofit research and educational organization for investors in alternative assets, has issued a document that outlines due diligence best practices for alternative investments, including hedge fund and private equity investments. This document describes basic considerations in the process of evaluating any alternative investment, and it separately provides in-depth guidance on specific steps that should be taken in making hedge fund, private equity, and other illiquid investments. In addition to these guidance documents, other organizations have published briefer guidance documents. In 2008, the Government Finance Officers Association published a brief advisory on the use of alternative assets by public employee retirement systems. This three-page document presents a condensed explanation of the risks inherent in investing in hedge funds, private equity, and other alternative assets. It also highlights key due diligence considerations and recommends that state and local governments use extreme prudence in making such investments. More recently, the ILPA published a set of principles aimed at defined benefit pension plans and other institutional investors in private equity. This document details important aspects of the terms of investment between fund managers and investors, and best practices that fund managers and investors should observe during the course of the investment relationship. The Advisory Council on Employee Welfare and Pension Benefit Plans, commonly referred to as the ERISA Advisory Council, was created by ERISA to provide advice to the Secretary of Labor (29 U.S.C. § 1142). The council has also examined hedge fund investments and matters for consideration in their adoption for use by qualified plans. While the council concluded that hedge funds may be an acceptable form of investment, its report noted that certain aspects of hedge fund investments should be brought to the forefront in educating plan fiduciaries and others. Among these are investment styles, liquidity issues, and potential conflicts of interest. In 2008, the council reviewed hard-to-value assets, which can include hedge funds, private equity, and other alternative assets. As a result of related hearings and deliberations, the council recommended that Labor issue guidance addressing the complex nature and distinct characteristics of such assets. The council further specified that the guidance should define hard-to-value assets and describe ERISA obligations when selecting, valuing, accounting for, monitoring, and reporting on these assets. To date, Labor has implemented neither our recommendations nor the council's recommendations. In responding to our 2008 recommendation, Labor noted that while it would consider the recommendation, the lack of uniformity among hedge funds and private equity funds could make the development of comprehensive and useful guidance difficult. In 2011, the ERISA Advisory Council specifically revisited the issue of pension plans' investments in hedge funds and private equity. The council's 2011 hearings prominently considered the potential role of hedge fund and private equity investments in retirement plans. The council's report has not yet been published, but according to a Labor official, publication is expected in early 2012. Plans and their hedge fund and private equity investments have not been immune to the effects of the financial market turbulence in recent years.
Despite significant losses, however, DB plan sponsors and experts we contacted generally indicated that these alternative assets had met expectations and still had a significant role to play in the plans' investment portfolios. Data from surveys of public and private plans clearly indicate that the appetite for such investments is continuing to grow. Nonetheless, the events of the last 4 years have reinforced our 2008 observation that hedge funds and private equity also pose risks and challenges beyond those posed by more traditional investments. Representatives of some of the plans that we contacted indicated that hedge fund investments were less resilient than expected. As a result of poor performance or other issues related to hedge funds and private equity, some plans have taken significant steps to adjust the nature or terms of such investments. These steps will likely benefit the plans and, therefore, the plan participants and beneficiaries, in coming years. Although some plans have taken significant actions, it is not clear how extensive such changes have been and whether such changes would be practical for those DB plans that lack both the resources and the negotiating power available to other plans. Our selection of 22 DB plans included some of the largest retirement plans in the nation, some of which manage tens of billions of dollars. Yet despite their size and expertise, some of these plans encountered significant difficulties with their alternative investments in recent years, resulting in substantial adjustments to plan investment practices. If such large, sophisticated institutions can have difficulties that result in significant changes in the nature or terms of their investments in these alternative asset classes, it is worth asking how much more difficult it might be for medium-sized and smaller plans. In 2008, we recommended that the Secretary of Labor issue guidance designed for qualified plans under ERISA concerning alternative investment practices. We specifically called for guidance that would (1) outline the unique challenges of investing in hedge funds and private equity; (2) describe steps that plans should take to address these challenges and help meet ERISA requirements; and (3) explain the implications of these challenges and steps for smaller plans. We still believe that providing such guidance would be beneficial. In fact, in light of the guidance documents issued by other national and international organizations in the intervening years, this task might now prove easier for Labor than it would have been 4 years ago. Such guidance still has the potential to help plan sponsors, and our work suggests a continued need for such assistance. We provided a draft of this report to the Department of Labor, the Department of the Treasury, the Pension Benefit Guaranty Corporation, and the Securities and Exchange Commission (SEC) for review and comment. Labor, the Department of the Treasury, and SEC provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the appropriate congressional committees, the Secretary of Labor, the Secretary of the Treasury, the Director of the Pension Benefit Guaranty Corporation, the Chairman of the SEC, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions regarding this report, please contact Charles Jeszeck at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix II. Our objectives were to answer the following research questions: What is known about the experiences of defined benefit pension plans with investments in hedge funds and private equity, including recent lessons learned? How have plan sponsors responded to lessons learned from recent experiences with such alternative investments? What steps have federal agencies and other entities taken to help plan sponsors make and manage investments in such alternative assets, and what additional steps might be warranted? To answer all of the research questions, we conducted in-depth interviews with plan representatives of the private and public sector pension plans that were selected for our 2008 report examining the extent to which pension plans invest in hedge funds and private equity. While 26 plans were interviewed for the 2008 report, 22 plans participated in follow-up interviews for this report (see table 3 for a list of plan officials we interviewed). We conducted interviews with representatives from June 2011 to September 2011, and we obtained and reviewed available supporting documentation. These interviews were conducted using a semistructured interview format, which included open-ended questions on the following topics, asked separately about each plan's hedge fund or private equity investments: history of investment in hedge funds or private equity; experiences with these investments to date; lessons learned with these investments; changes made to address these lessons, including due diligence and ongoing monitoring; and actions federal agencies, such as Labor, should take to ensure that pension plan fiduciaries better make and manage their hedge fund and private equity investments. Four of the plans, which did not invest in hedge funds when we interviewed them for our 2008 report, were included in our in-depth interviews to determine whether they subsequently invested in hedge funds and, if so, what their experiences had been. The results of the plan sponsor interviews were limited by plan representatives' willingness to speak with us. The plans we interviewed were selected based on several criteria identified in our 2008 report. Specifically, when these plans were selected for our prior report, we attempted to select plans that varied in the size of allocations to hedge funds and private equity as a share of total plan assets. We also attempted to select plans with a range of total plan assets, as outlined in table 4. We identified these plans using data from the 2006 Pensions & Investments survey of the largest 200 pension plans and through our interviews with industry experts. While we selected plans representing a range of total plan assets and varying sizes of allocations to hedge funds and private equity as a share of total plan assets, these plan representatives' responses do not represent a statistically generalizable sample of all pension plans. To further address the research questions, we surveyed a selected group of 20 experts in the areas of pension plan hedge fund and private equity investment.
We asked these experts five questions related to the performance and management of these funds during the past 5 years and also requested suggestions, if any, for regulatory improvements. Specifically, we asked (1) how pension plans' hedge fund and private equity investments have performed; (2) what lessons have been learned with respect to pension plan hedge fund and private equity investments; (3) what changes have been made to pension plan hedge fund and private equity investment practices; (4) the extent to which pension plans observe best practices in hedge fund and private equity due diligence; and (5) what actions federal agencies, such as Labor, should take to ensure that pension plan fiduciaries better make and manage their hedge fund and private equity investments. We used a Web-based form to collect responses. This group of experts was selected from a number of sources, including experts from our 2010 GAO Retirement Security Advisory Panel, referrals from interviews and other experts, and recommendations from GAO subject matter experts. To ensure we had a range of views, we invited participants from several different backgrounds, including academics, representatives of public and private plan sponsors, representatives of plan participants, pension consultant groups, and other key national organizations and subject matter experts. Of the 20 experts who agreed to participate in the survey, 19 completed the questionnaire within the requested time frame. The survey was conducted in August 2011. To quantitatively address national hedge fund and private equity investment performance for the first question, we obtained and reviewed broad industry performance data from two private organizations, Cambridge Associates LLC and Hedge Fund Research, Inc. Data from these organizations captured historical hedge fund and private equity investment performance, including performance at the peak of the financial crisis. We used these data to determine broad hedge fund and private equity performance over the last 5 years. While the data from each of these organizations are limited in some ways, we conducted data reliability assessments for each data source and determined that the data were sufficiently reliable for purposes of this study. Data from these organizations are not specific to pension plan hedge fund and private equity investments, which may have different investment performance due to specific investment terms and industry access. Moreover, because these data were from broad investment indexes, they neither illustrated differences in performance for various investment strategies within hedge fund and private equity investments nor distinguished the performance of fund of funds investments. While the most informative way to assess how well investments have performed is to analyze actual portfolio investment data, we were unable to quantitatively analyze specifically how pension plans' investments in hedge funds and private equity have performed over the past 5 years. We attempted to obtain detailed investment performance data from selected custodian banks and investment consulting firms. These two groups have data on the largest pension plan investments in the country. However, because of the proprietary nature of these data and the considerable cost, both in resources and expense, of obtaining and analyzing them, we were not able to conduct this analysis.
To address the second question, we obtained and analyzed survey data of private and public sector defined benefit plans on the extent of plan investments in hedge funds and private equity from two private organizations, Greenwich Associates and Pensions & Investments. We identified these two surveys from prior work and obtained updated 2010 data. As seen in table 5, the surveys varied in the number and size of plans surveyed. Using available survey data, we determined the percentage of plans surveyed that reported investments in hedge funds or private equity. Using data from Greenwich Associates, we also determined the percentage of surveyed plans that invested in hedge funds or private equity by category of plan size, measured by total plan assets. We further examined data from each survey on the size of allocations to hedge funds or private equity as a share of total plan assets. Using the Pensions & Investments data, we analyzed allocations to these investments for individual plans and calculated the average allocation for hedge funds and private equity, separately, among all plans surveyed that reported these investments. The Greenwich Associates data reported the size of allocations to hedge funds or private equity as an average for all plans surveyed. While the information collected by each of the surveys is limited in some ways, we conducted a data reliability assessment of each survey and determined that the data were sufficiently reliable for purposes of this study. These surveys did not specifically define the terms hedge fund and private equity; rather, respondents reported allocations based on their own classifications. Data from both surveys are reflective only of the plans surveyed and cannot be generalized to all plans. To address the third question, we first reviewed relevant literature and spoke with federal officials from relevant agencies, including Labor, the SEC, and the Pension Benefit Guaranty Corporation, to understand federal agency actions to date. In addition, we interviewed key national organizations and pension industry experts to understand the perspective of plan officials and their participants regarding federal actions to date, as well as the need for additional federal action. Key national organizations included representatives from organizations that represent plan participants, such as AARP, and an organization that represents plan officials, the American Benefits Council. In addition, we interviewed academic and national experts in the pension and alternative investment area and pension plan consultants. We also attended and participated in Labor's ERISA Advisory Council 2011 hearings on pension plan investments in private equity and hedge funds, including the use of these investments in defined contribution plans. We reviewed and analyzed the detailed information collected through the literature review, discussions, and hearings to determine actions taken to date by federal agencies and other entities to help plan sponsors make and manage hedge fund and private equity investments. We conducted this performance audit from February 2011 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
David Lehrer, Assistant Director, and Michael Hartnett, Analyst-in-Charge, managed this review. Amber Yancey-Carroll also led portions of the research and made significant contributions to all portions of this report. Kathleen van Gelder helped develop the structure of the report, and Luann Moy provided methodological assistance. Sheila McCoy and Roger Thomas provided legal assistance. Ashley McCall assisted in identifying relevant literature. James Bennett developed the report's graphics. Caitlin Croake and Lauren Gilbertson verified our findings.
Millions of Americans rely on defined benefit pension plans for their financial well-being in retirement. Plan sponsors are increasingly investing in a wide range of assets, including hedge funds and private equity funds. In recent years, GAO has noted that plans may face significant challenges and risks when investing in these alternative assets. These challenges and ongoing market volatility have raised concerns about how these investments have performed since 2008. As requested, to better understand plan sponsors' experiences with these investments, GAO examined (1) the recent experiences of pension plans with investments in hedge funds and private equity, including lessons learned; (2) how plans have responded to these lessons; and (3) steps federal agencies and other entities have taken to help plan sponsors make and manage these alternative investments. To answer these questions, GAO analyzed available data; interviewed relevant federal agencies and industry experts; conducted follow-up interviews with 22 public and private pension plan sponsors selected among the top 200 plans and contacted in the course of GAO's prior related work; and surveyed 20 plan consultants, academic experts, and other industry experts. This report reemphasizes a 2008 recommendation that the Secretary of Labor provide guidance to help plans investing in hedge funds and private equity. While plan representatives GAO contacted generally stated that their hedge fund and private equity investments met expectations in recent years, a number of plans experienced losses and other challenges, such as limited liquidity and transparency. National data indicated that hedge fund and private equity investments were significantly affected by the 2008-2009 financial crisis, and plans and experts GAO contacted indicated that pension plan investments were not insulated from losses. Most of the 22 plan representatives GAO interviewed said that their hedge fund investments met expectations overall, despite, in some cases, significant losses during the financial crisis. A few plan representatives, however, expected hedge fund investments to be much more resilient in turbulent markets and found the losses disappointing. Given the long-term nature of private equity investments, almost all of the representatives were generally satisfied with these investments over the last 5 years. Some plan representatives described significant difficulties in hedge fund and private equity investing related to limited liquidity and transparency, and the negative impact of the actions of other investors in the fund—sometimes referred to as co-investors. For example, representatives from one plan reported they were unable to cash out of their hedge fund investments due to discretionary withdrawal restrictions imposed by the fund manager, requiring them to sell some of their stock holdings at a severe loss in order to pay plan benefits. Most plans included in our review have taken actions to address challenges related to their hedge fund and private equity investments, including allocation reductions, modifications of investment terms, and improvements to the fund selection and monitoring process. National data reveal that plans have continued to invest in hedge funds and private equity—for example, one survey revealed that the percentage of large plans investing in hedge funds grew from 47 percent in 2007 to 60 percent in 2010—and most plans GAO contacted have also maintained or increased their allocations to these investments.
However, most plans have adjusted investment strategies as a result of recent years' experiences. For example, three plans have reduced their allocations to hedge funds or private equity. Other plan representatives took steps to improve investment terms, including more favorable fee structures and enhanced liquidity. Some plan representatives and experts indicated, however, that smaller plans would likely not be able to take some of these steps. The Department of Labor has provided some guidance to plans regarding investing in derivatives but has not taken any steps specifically related to hedge fund and private equity investments. In recent years, however, other entities have addressed this issue. For example, in 2009, the President's Working Group on Financial Markets issued best practices for hedge fund investors. Further, both GAO and a Department of Labor advisory body have recommended that the department publish guidance for plans that invest in such alternative assets. To date, it has not done so, in part because of a concern that the lack of uniformity among such investments could make development of useful guidance difficult. In 2011, the Department of Labor advisory body specifically revisited the issue of pension plans' investments in hedge funds and private equity, and a report is expected in early 2012.
The Competition in Contracting Act of 1984 requires agencies to obtain full and open competition through the use of competitive procedures in their procurement activities unless otherwise authorized by law. Using competitive procedures to award contracts means that all responsible contractors are permitted to submit offers. The FAR generally requires agencies to perform acquisition planning and conduct market research to promote full and open competition. Generally, noncompetitive awards must be supported by written justifications that address the specific exception to full and open competition that is being used in the procurement. In addition, federal agencies can establish IDIQ contracts with one or more contractors and may issue orders under these contracts. For multiple award IDIQ contracts, agencies are generally required by the FAR to provide all contractors with an IDIQ contract a fair opportunity to be considered for each order above certain dollar thresholds; however, agencies can award noncompetitive orders under certain circumstances, which generally require a written justification. The General Services Administration administers a program that uses IDIQ contracts with vendors for commercially available goods and services, and federal agencies place orders under the contracts. When agencies place such orders noncompetitively, the FAR requires them to justify the need to restrict the number of vendors considered. Finally, agencies can also competitively award contracts after limiting the pool of available contractors—a process called full and open competition after exclusion of sources. For example, agencies are required by the FAR to set aside procurements for small businesses if there is a reasonable expectation that two or more responsible small businesses will compete for the work and will offer fair market prices. Justifications generally are to provide sufficient facts and rationale to explain the use of the specific exception to competition. For example, under FAR part 6, justifications must include, at a minimum, 12 elements. Examples of these required elements include a description of the supplies or services required to meet the agency's needs and their estimated value; identification of the statutory authority permitting other than full and open competition; a determination by the contracting officer that the anticipated cost to the government will be fair and reasonable; a description of market research conducted, if any; and a statement of the actions, if any, the agency may take to remove or overcome any barriers to competition before any subsequent acquisitions for the supplies or services required. Examples of allowable exceptions to full and open competition for DOD include circumstances when only one or a limited number of contractors are capable of performing the requirement or when an agency's need is of such unusual and compelling urgency that the government would be seriously injured unless the agency is permitted to limit the number of sources. The FAR generally requires that justifications be published on the Federal Business Opportunities (FedBizOpps.gov) website and be approved at various levels within the contracting organization; these levels vary according to the dollar value of the procurement. Our prior work indicates that a long-standing factor affecting DOD's competition rate is its reliance on original equipment manufacturers.
Open systems architecture promotes competition by allowing components to be added, removed, modified, replaced, or maintained by multiple suppliers, not just the manufacturer that developed the system. An open system is designed with modular components, each having its own functions. This design makes the system easier to develop, maintain, and modify because components can be changed without significantly impacting the remainder of the system. Likewise, our prior work states that incorporating open systems architecture and the acquisition of appropriate data rights, such as design drawings, specifications, and standards, during program development can result in greater competition and reduce costs during production. Further, incorporation of open systems architecture and management of data rights can lead to greater competition and reduced upgrade and repair costs over a program's life cycle. But introducing this approach later in a program's life cycle, such as for a planned modification or upgrade, is more difficult, complex, and costly, as it may require significant modifications to an already-developed system. Defense systems can have a life span of 40 years; figure 1 shows that the greater part of a weapon system's total ownership cost consists of its operating and support costs. Early decisions made during design dictate operating and support costs over the entire life cycle. DOD's Better Buying Power (BBP) initiative outlines a series of actions, guidance, and directives to achieve greater efficiencies, in part through the promotion of competition, such as the following:
- Each program must present a competitive strategy at each major decision point.
- Before starting system development, programs must have a business case analysis that outlines an approach for using open systems architecture and acquiring data rights to ensure sustained consideration of competition in the acquisition of weapon systems.
- Each DOD component is to develop a plan to improve the overall rate of competition by at least 2 percent per year, and the rate of effective competition—when more than one offer is received under competitive procedures—by at least 10 percent per year.
- Justification and approval documents for noncompetitive contracts should include a discussion of how the program will take advantage of business practices to break away from reliance on a single vendor and improve competition in future acquisitions.
- DOD has issued updated guidance and directives for open systems architecture and the acquisition and management of data rights.
- DOD is developing new training and updated course curricula on open systems architecture and the acquisition and management of data rights.
In addition, DOD has termed procurements for which only one offer was received under full and open competition as "ineffective competition." The Office of Federal Procurement Policy (OFPP) noted that competitions that yield only one offer in response to a solicitation deprive agencies of the ability to consider alternative solutions in a reasoned and structured manner. In November 2010, DOD introduced a policy containing new requirements concerning one-offer awards and codified it with changes in the DFARS in June 2012. See figure 2. Last year, we found that the one-offer requirements will likely have a limited impact on unnecessarily restrictive solicitation requirements because many solicitations provide initial response times of more than 30 days, so many awards are not subject to the program office consultation rule.
We also found that the impact of recent guidance on the number and dollar value of one-offer awards is not quantifiable because of unreliable data. As a result, DOD is not in a position to accurately measure the impact of the one-offer requirements since they were implemented. We recommended that DOD develop an action plan for DOD components to collect reliable data on competitive procurements for which only one offer is received, so that the department can determine the effect of its requirements on one-offer awards. DOD agreed with our recommendation. In response, the Air Force established mandatory training for personnel responsible for entering these data in the system. According to agency officials, DOD is in the process of updating guidance on entering data for one-offer awards. DOD's competition rate for all contract obligations had been declining since 2009; however, the competition rate has remained flat for the past 2 years. Among the DOD components in our study, the Army had the highest competition rate in fiscal year 2013, while MDA had the lowest. Based on FPDS-NG data, we found that noncompetitive awards cited several exceptions from competitive procedures. We continue to observe, as we previously found in 2012 and 2013, that there are a number of factors that affect DOD's competition rate. For example, the government has historically relied on the original equipment manufacturers of weapon systems for future procurements of the system, including sustainment. Between fiscal years 2009 and 2013, DOD's competition rate—based on all contract obligations—declined by 5 percentage points, from 62 percent to 57 percent, with an average competition rate of 59 percent for the 5-year period (see figure 3). However, the competition rate did not change from fiscal years 2012 to 2013, remaining at 57 percent. DOD's total dollars obligated decreased by almost $53 billion, from $360.4 billion in fiscal year 2012 to $307.5 billion in fiscal year 2013. Competed obligations decreased by over $31 billion, from $205.6 billion in fiscal year 2012 to $174.2 billion in fiscal year 2013. We also found that the competition rate for all contract obligations varied by DOD component. Of the 4 organizations we reviewed—Air Force, Army, Navy, and MDA—in fiscal year 2013, the Army had the highest competition rate, 66 percent, whereas MDA had the lowest rate of competition, 29 percent, representing a significant decrease from the prior year. Figure 4 outlines competition rates by component for fiscal years 2009 through 2013. In fiscal year 2013, the Air Force's competition rate improved to 41 percent. However, the Air Force reported that it operates in an environment where it obligates the majority of its dollars on long-standing sole-source weapon system contracts, noncompetitive foreign military sales, and a reduced number of new programs, which affects its ability to compete. The Navy's competition rate in fiscal year 2013 declined due to continued investments in the F-35 Joint Strike Fighter, P-8A Poseidon long-range maritime patrol aircraft, and carrier construction. The decline in MDA's fiscal year 2013 competition rate is principally the result of a noncompetitive $2.7 billion foreign military sale. Last year, we found that DOD could gain greater insight into the competition rates if it considered the impact of foreign military sales when calculating the rates. The sketch following this paragraph illustrates both calculations.
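The competition rate arithmetic reduces to dividing competed obligations by total obligations for a fiscal year, and the foreign military sales (FMS) adjustment removes FMS obligations from both the numerator and the denominator. The minimal sketch below, in Python, illustrates this: the DOD-wide figures are those cited in this report (in billions of dollars), while the component-level figures used for the FMS adjustment are hypothetical placeholders rather than actual FPDS-NG values.

```python
# Sketch of the competition-rate calculation described in this
# report. DOD-wide figures are from the report (in billions of
# dollars); the component-level FMS figures are hypothetical
# placeholders, not actual FPDS-NG values.

def competition_rate(competed, total):
    """Competed obligations as a share of all obligations."""
    return competed / total

# DOD-wide rates for fiscal years 2012 and 2013 (report figures).
print(f"FY2012: {competition_rate(205.6, 360.4):.0%}")  # ~57%
print(f"FY2013: {competition_rate(174.2, 307.5):.0%}")  # ~57%

def rate_excluding_fms(competed, total, fms_competed, fms_total):
    """Recompute the rate with foreign military sales obligations
    removed from both the numerator and the denominator."""
    return (competed - fms_competed) / (total - fms_total)

# Hypothetical component example: a large noncompetitive FMS award
# depresses the unadjusted rate; excluding FMS raises it.
print(f"Unadjusted:    {competition_rate(2.6, 9.0):.0%}")
print(f"Excluding FMS: {rate_excluding_fms(2.6, 9.0, 0.0, 3.7):.0%}")
```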
When we calculated MDA's competition rate without including foreign military sales, we found that the competition rate was 49 percent in both fiscal years 2012 and 2013. Slightly more than half of all DOD's obligations in fiscal year 2013 were to purchase services ($160.3 billion, or 52.1 percent), which were competed at a substantially higher rate than products. Specifically, the competition rate for services was 73 percent compared to 39 percent for products. This trend was generally consistent over the 5-year period from fiscal years 2009 through 2013. As shown in figure 5, services have historically been competed at a higher rate than products. In addition, in fiscal year 2013, non-research and development (R&D) services were competed at a higher rate than R&D services, 75 percent compared to 65 percent. The competition rate for non-R&D services at DOD declined from 81 percent in fiscal year 2009 to 75 percent in fiscal year 2013. Among the major components, the Air Force had the most significant decline, dropping from 66 percent to 47 percent. MDA increased its non-R&D services competition rate from 69 percent to 89 percent. The 10 largest product and service categories, as reported in FPDS-NG, cumulatively accounted for 31 percent of non-competed obligations in fiscal years 2009 through 2013. In fiscal year 2013, these 9 product and 1 service categories accounted for 38 percent of all non-competed obligations and comprised 16 percent of all DOD obligations. In fiscal year 2013, 10 percent of obligations for fixed wing aircraft procurements were made competitively (see table 1). Examples of fixed wing procurements include airframes or components for the F-35, C-5, F-22, and C-40B aircraft. Similarly, 4 percent of obligations for rotary wing aircraft and 3 percent of obligations for guided missiles were competed. For aircraft carriers, the Navy competition advocate explained that when a contract for a very large procurement like an aircraft carrier is awarded, the organization's competition rate declines for that year because these types of procurements are made noncompetitively. Once DOD selects the contractor for a weapon system, such as an aircraft, truck, or missile, the government has historically relied on the original equipment manufacturers for future procurements of the system, including sustainment. The additional systems or sustainment are often procured through contract modifications or the exercise of contract options. This situation is partly attributable to the unique relationship that DOD has with the defense industry, which differs from the commercial marketplace. The combination of a single buyer (DOD), few very large prime contractors in each segment of the industry, and a limited number of weapon programs constitutes a structure for doing business that is altogether different from a classic free market. For instance, there is less competition, and once a contract is awarded, the contractor often remains the sole vendor capable of providing additional systems and sustainment. These long-term contractual relationships with weapon system contractors limit opportunities for competition. During the past 5 fiscal years, DOD used the "only one responsible source" exception for about 64 percent of all awards for new noncompetitive contracts and task orders on single award IDIQ contracts.
The percentage obligated on new noncompetitive contracts and task orders on single award contracts reported in FPDS-NG under the "only one responsible source" exception has increased, from 66 percent in fiscal year 2009 to 72 percent in fiscal year 2013. The second largest amount (11 percent awarded in fiscal year 2013) cited the "authorized or required by statute" exception (see table 2). The individual components used the "only one responsible source" exception to varying extents—69 percent for the Air Force, 68 percent for the Army, 80 percent for the Navy, and 71 percent for other DOD agencies in fiscal year 2013. However, MDA used this exception for 96 percent, or $137.4 million, of the new noncompetitive contracts and task orders it awarded in fiscal year 2013. In fiscal year 2013, the majority of new noncompetitive task orders issued under multiple award IDIQ contracts and subject to the fair opportunity process cited two exceptions to the fair opportunity process. Specifically, "only one source" was cited for 44 percent of obligations ($1.2 billion), and "follow-on actions," that is, orders for the same good or service placed with the original vendor, was cited for 39 percent ($1.1 billion). In general, the documentation for our selected contracts contained the required elements in accordance with regulations. Specifically, 11 of the 14 justifications in our sample contained all the required elements. However, our sample also included three justifications that were not prepared correctly. Further, four justifications were never made publicly available as required, thus missing an opportunity to add transparency to the contracting process. Half of the justifications in our sample explained that the lack of necessary data rights was a barrier to competition. In some cases, the justifications provided insight into how lacking the right level of data rights resulted in complete reliance on a single vendor over time. As required by the FAR, DOD contracting officials prepared written justifications for all 14 noncompetitive contract awards in our sample. We determined that 11 of the 14 justifications contained all required elements and were prepared in accordance with the FAR. For additional details about the noncompetitive awards in our sample, see appendix II. Further, we found that the justifications generally provided clear explanations of the reasons that the procurement could not be competed. Documenting this information provides insight into why acquisitions were not competitive and enables agencies to use that knowledge to help remove obstacles to competition in future acquisitions. For example, three justifications we reviewed described steps DOD was taking to improve competition in the future. Three justifications were not prepared in full accordance with regulations. One Navy justification prepared for an $882,000 contract for helicopter support services did not have the signature of the competition advocate. In addition, the justification was missing the required information about market research conducted and a list of sources that expressed an interest in the acquisition. However, the Navy provided a separate market research memorandum which explained that the government of the country where the helicopter services were needed directed which company to use.
A Navy justification for a $7.2 million award for software engineering services did not include the contracting officer's signature certifying that the justification was complete and accurate because the signature block was erroneously removed from the document. The third justification did not use the correct legal citation for the exception to competition and instead referenced an exception that was not supported by the facts provided in the justification. The FAR requires that justifications be made publicly available, generally within 14 days of contract award, which increases transparency into the contracting process by providing the opportunity for public review of justifications for noncompetitive contracts. In our sample, five justifications were made publicly available within the time frame required by the FAR, and another justification was exempt from the requirements due to national security concerns. However, four justifications were never made publicly available, and four justifications were not made available until after the required time frame. DOD acknowledged these as oversights. All 14 noncompetitive contracts and task orders within our sample were justified under the "only one responsible source" or "only one source capable" exceptions to competition. For half of these awards, the basis for the exception was the agency's lack of data rights. All 7 of these justifications or supporting documents described situations, ranging from 3 to 30 years in duration, in which DOD was unable to conduct a competition because data rights were not purchased with the initial award. Within these 7 selected awards, justification content varied from addressing steps the agency would take to increase competition in the future to stating that the agency was taking no action to increase competition. For example:
- The justification for a $7 million Navy award for situational awareness and communication software explained that the agency and the contractor disagreed about the level of government data rights. The justification stated that the agency was negotiating with the contractor to obtain adequate data rights to develop a data package that will support competition for future acquisitions of software releases.
- The justification for a $3 million MDA award stated that the original equipment manufacturer for a cost and requirements management software system owned all of the data rights, necessitating a noncompetitive award for the system's maintenance. However, the justification explained that the agency planned to end the noncompetitive award and transition to a different system by 2017, ensuring that necessary data rights are acquired at that time.
- A justification for an almost $6 million Navy contract for spare helicopter windshields addressed how the agency would increase future competition. The justification explained that the agency planned to compete this acquisition in the future by encouraging other vendors to submit a complete data package but did not address plans to purchase the necessary data. The justification stated that these articles are highly specialized and the data required for another vendor to manufacture them are not available. These parts have been continually acquired from the original equipment manufacturer for the past 25 years. Including the current 5-year contract, the government will have purchased these parts noncompetitively for a total of 30 years.
- A justification for a $9.5 million Army contract for M1A1 situational awareness tanks stated that the contractor had refused to sell the data rights and that the government would take no action to increase competition at this time because it would suffer unacceptable delays. The contractor has refused to sell the process sheets and associated data needed for the remanufacturing process, which would be required to compete this acquisition. The justification explained that the government will post an announcement for this requirement online and that any bids or proposals will be considered. However, no other vendors have ever expressed interest in this acquisition.
The focus on open systems architecture and acquiring effective types of data rights is changing the way DOD acquires goods and services. Programs are moving away from dependency upon single suppliers for parts, maintenance, or upgrades and are moving toward open systems, which are designed to allow components to be added, removed, modified, replaced, or maintained by multiple suppliers. The programs we sampled illustrate that leveraging open systems architecture and data rights to help promote competition involves early consideration and extensive analysis of how each system can best use these approaches to maintain a competitive environment throughout a program's life cycle. Likewise, BBP fosters behaviors intended to promote competition. For example, according to program officials, the BBP's emphasis on open systems architecture and effective management of data rights resulted in increased competition for the Air Force's Military Global Positioning System User Equipment and KC-46 Tanker Modernization programs. DOD officials told us that training is an effective way to change patterns of behavior and that, to promote competition, the agency needs an acquisition workforce that is educated on the various types of data rights. Programs report using open systems architecture and acquiring the necessary technical data rights to enable competition during development and throughout the acquisition life cycle. Based on questionnaire responses, programs are moving away from proprietary systems and toward systems that are designed to allow for future competition. As shown in table 3, 24 of the 31 weapons programs that responded to a 2012 GAO questionnaire reported that they were planning to use or had already used open systems architecture, and 14 of 31 had acquired or planned to acquire a complete technical data package. The following examples from the 10 programs in our sample illustrate how programs have leveraged or plan to leverage open systems architecture and acquisition of data rights to promote competition during development and throughout the life cycle. The Air Force has planned for sustained competition for its Three-Dimensional Expeditionary Long-Range Radar program. Program officials said that they used open systems architecture to maximize competition between multiple vendors and that they plan to acquire data rights to address long-term sustainment and competition for future upgrades to the system. Further, the program is expected to require the contractor to clearly define and describe all component and system interfaces and ensure that this information is both accurate and available to other potential vendors.
Specifically, all documentation that defines a component's form, fit, function, and integration is to be delivered to the program with unlimited rights at a level of detail that will provide a developer, with comparable levels of expertise, the ability to further develop the system component. The Army Integrated Air and Missile Defense program began development in 2006. Open systems architecture and the acquisition of appropriate data rights have been key components since the program's inception. Program officials we spoke with stressed that open systems architecture is a key tenet for the evolution of the air and missile defense sensors and that decisions to incorporate open systems architecture and acquire data rights need to be made very early in program development. Early incorporation will enable the Army Integrated Air and Missile Defense program to compete future production both at the system level and at the subsystem level. For example, the program is designed so that the most current technology can be inserted into just one component of the system through a competitive acquisition without having to make any changes to any other parts of the system. This design will allow the program to compete either the entire system or subcomponents when the system goes into production. Program officials at MDA's Ground Based Midcourse Defense program conducted extensive data rights analysis and subsequently acquired all technical data required for successful competition of the development and sustainment contract. Specifically, the program released thousands of documents into a technical data library to be used by vendors that plan to bid on program contracts. Additionally, according to the program office, the contract includes language to ensure that future data are not limited or restricted to the government without prior written authorization from the procuring contracting officer. The Ship to Shore Connector program is the first naval acquisition program in more than 15 years to be designed in-house by the Navy instead of by private industry. Officials from the program told us that because the program office is responsible for the entire life of the craft, all aspects of acquisition, including open systems architecture and the acquisition of technical data for life-cycle support, were accounted for during the design process. Because of this life-cycle responsibility, they said, it is important that critical data not become obsolete, so a modular approach using standard interfaces was implemented, where feasible, to enable maintenance and support and to prevent obsolescence issues. The program is procuring a technical data package in support of the program's long-term technical data requirements for design, manufacture, and sustainment. The data package is to support re-competition for production, sustainment, and upgrades and will allow the future craft builder to contract with vendors to build components for which the original contractor was also the manufacturer. DOD's BBP initiative is intended to improve DOD's use of open systems architecture and its management of technical data rights. This is important given the relatively lower rate of competition for products (39 percent) compared to services (73 percent) in fiscal year 2013. We found that BBP has affected the decision making of some weapons programs in the use of open systems architecture and acquisition of technical data rights that enable competition throughout a program's life cycle.
Specifically, we identified two instances when major weapon system program offices were influenced by BBP to make changes that would promote competition: According to officials at the Air Force's Military Global Positioning System User Equipment program, BBP led them to consider how open systems architecture and data rights could be used to obtain greater competition throughout the program's sustainment. Program officials told us that, because of BBP, the program revised its Technology Development Strategy document to include the program requirement for contractors to implement open systems architecture principles and provide unlimited rights to technical and manufacturing data and government purpose rights to remaining non-commercial technical data licenses. We found evidence of these changes in the program's Technology Development Strategy. This document was required at the decision point prior to the technology development phase of the defense acquisition process, and it includes a summary of how the program anticipates meeting the product life-cycle data rights requirements and supporting the overall competition strategy. The BBP's emphasis on effective management of technical data rights resulted in improvements in the KC-46 tanker modernization program's efforts to increase competition and reduce costs over the program's life cycle. In particular, the Air Force conducted an analysis of the FAR, the DFARS, and applicable intellectual property laws to ensure that the program acquired sufficient data and licensing rights, including data for operations, maintenance, installation, and training. The program obtained the operations, maintenance, installation, and training data for a fixed price, and these data rights should allow the agency to maintain the system and compete both the development of the training systems and the reprocurement package for another system component. The Air Force was able to obtain the rights for these data because the program required offerors to price data and include open systems architecture and standard interfaces to the maximum extent practical for a commercial derivative military aircraft. DOD officials also told us that training is a highly instrumental way to change patterns of behavior and that, to promote competition, the agency needs an acquisition workforce that is educated on types of data rights. In 2013, as required by the BBP, the Defense Acquisition University released a series of seven continuous learning modules focused on data management to provide the fundamental knowledge acquisition professionals require to create better data management plans and obtain necessary types of data rights in defense systems. This training builds upon the continuous learning module released in 2012 to introduce open systems architecture principles to acquisition professionals. To advance the agency's knowledge of types of data rights, DOD has issued two updated guidance documents as required by the BBP and is developing further guidance that emphasizes the importance of creating and maintaining a competitive environment in order to improve DOD's competitive posture: The Data Rights Brochure explains differences in types of data rights categories and the importance of anticipating the need for data and data rights. It also provides guidance to assist in identifying and resolving data rights issues prior to contract award.
The Open Systems Architecture Contract Guidebook for Program Managers is to be used by the acquisition community to incorporate principles and practices of open systems architecture in the acquisition of systems or services. For example, this guidebook provides contract language for capturing open architecture and an open business model to increase opportunities for competition, as well as recommendations for writing a contract data requirements list and a statement of work based on open systems architecture. It also contains instructions for obtaining effective levels of data rights to support full life-cycle competition. DOD officials emphasized that while the guidebook can assist program managers with incorporating appropriate language into contracts, without the proper technical expertise, language unsuitable for the program could be chosen from the guidebook and inserted into a contract. We previously concluded that incorporating open systems architecture into a program requires a highly knowledgeable workforce; further, we recommended that DOD assess service-level and program office capabilities relating to an open systems approach and develop short-term and long-term strategies to address any capability gaps identified. Strategies could include the Navy's cross-cutting approach, in which a team of a few technical experts within the Naval Air Systems Command could be available to work with program offices, as necessary, to help develop open systems plans. In 2010, DOD introduced new requirements for when full and open competition results in only one offer; however, these rules, as implemented in the DFARS, are focused late in the acquisition process, and DOD officials have limited insight into the reasons only one offer was received. The one-offer awards we reviewed complied with DOD's rules, which require contracting officers to ensure solicitation periods allow at least 30 days for receipt of proposals and to conduct cost or price analysis. But these steps occur too late to impact competition, and actions can be taken much earlier in the acquisition planning process to encourage multiple offers. DOD contracting officials and vendors told us that engagement well before the 30-day solicitation period is key to ensuring vendors have adequate time to review draft requests for proposals, plan resources, provide feedback on potentially restrictive requirements, and determine through internal management processes whether it is worthwhile to prepare proposals. Limited information is available about the reasons only one offer is received because contracting teams seldom collect information from vendors, which could limit DOD's ability to adjust acquisition strategies appropriately and plan for future acquisitions. The contracts and task orders we reviewed that were competed but received only one offer complied with DOD's rules; nevertheless, DOD continues to obligate significant amounts on one-offer awards. Specifically, in fiscal year 2013, DOD obligated a total of $22.6 billion on one-offer awards, or 13 percent of all competed fiscal year 2013 obligations. The Army and the Navy had the highest one-offer rates (21.1 percent and 17.5 percent of competed obligations, respectively). MDA had the lowest one-offer rate (1.3 percent). The Air Force's rate was 8.6 percent. In total, DOD awarded about 108,000 one-offer awards—about 1 percent of all new competed awards—and of these, almost half were awarded by the Defense Logistics Agency.
Across DOD, approximately 9,300 one-offer awards were valued above the simplified acquisition threshold—generally $150,000, below which the one-offer rules would not apply. The awards in our review followed the one-offer rules regarding solicitation periods and cost or price analysis, but none were subject to the program office consultation rule—that the contracting officer consult with the program office to determine whether requirements should be modified to promote more competition. For additional details on the competitive one-offer awards we reviewed, see appendix III. Contracting officials told us that they almost always keep solicitations open for at least 30 days and have done so since before the one-offer rules were established. They also noted that it was standard practice to grant extensions if a vendor requested one. Twelve of the 15 awards we reviewed were initially open for 30 days or more. Of the remaining 3 awards, two Army awards were not subject to the rules per an exemption for contingency, humanitarian, or peacekeeping operations, and an Air Force award for software development and support was only open for 29 days because the number of days was miscounted; it received a waiver from the resolicitation rule. None of the awards we reviewed were subject to the June 2012 rule requiring contracting officers to consult with program offices regarding whether requirements should be modified to promote more competition. For two of the awards we reviewed, however, teams reassessed requirements even though they were not required to do so. In one award that was exempt from the one-offer rules, the Army re-evaluated and changed its requirements to enhance competition and to address funding concerns. For another award, which was initially open for 30 days and therefore not subject to the program office consultation rule, the Navy reassessed requirements because a potential offeror questioned whether the requirements were overly restrictive. Contracting officials subsequently determined they were not. However, neither of these awards received more than one offer. All 15 awards we reviewed complied with the rule to conduct cost or price analysis when only one offer is received. In four awards, DOD was able to negotiate lower prices as part of this process, decreasing costs by between $1 million and $10 million, or 2 to 10 percent of total contract value. Even when only one offer is received, the government may still obtain some of the benefits of competition—particularly if the sole offeror is not aware that no other offers were received. In most cases, contracting officials said they thought the incumbents likely expected other vendors to offer proposals. In several cases, predecessor contracts had received multiple offers, and in other cases, solicitation time frames had been extended at the request of a different vendor. In another instance, the vendor accepted contract terms, including government purpose data rights, which it had not accepted under a previous noncompetitive award. In addition, the vendor took a greater share of the financial risk for cost overruns than in the previous sole-source environment. Contracting officials also said they felt that the offered prices reflected a competitive market. For example, in six cases, offered prices were 4 to 26 percent lower than the government estimates. The sketch following this paragraph condenses the one-offer decision logic described above.
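This is a minimal sketch, assuming a simplified rule set drawn from this report's description of the DFARS one-offer requirements; the threshold value, exemption handling, and step names are illustrative, not the regulation's text.

```python
# Simplified sketch of the one-offer rules as described in this
# report. Illustrative only; it is not a restatement of the DFARS.

SIMPLIFIED_ACQUISITION_THRESHOLD = 150_000  # generally applicable floor

def one_offer_steps(value, days_open, offers, exempt=False, waiver=False):
    """Return the follow-up steps, if any, when a competitive
    solicitation yields only one offer."""
    steps = []
    # Rules apply only to one-offer awards above the threshold that
    # are not otherwise exempt (e.g., contingency operations).
    if offers != 1 or exempt or value <= SIMPLIFIED_ACQUISITION_THRESHOLD:
        return steps
    # June 2012 rule: a solicitation initially open fewer than 30 days
    # triggers program office consultation and resolicitation, unless
    # a waiver is obtained.
    if days_open < 30 and not waiver:
        steps.append("consult program office on whether requirements "
                     "are overly restrictive")
        steps.append("resolicit for an additional period")
    # In all covered cases, cost or price analysis is required.
    steps.append("conduct cost or price analysis before award")
    return steps

# The Air Force case above: open 29 days, one offer, waiver granted.
print(one_offer_steps(5_000_000, days_open=29, offers=1, waiver=True))
# -> ['conduct cost or price analysis before award']
```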
DOD's requirements for competitions that result in only one offer do not focus on the acquisition planning phase, when vendors' initial engagement with the government and their internal business decision processes occur. Rather, the steps outlined in the one-offer rules all occur after the solicitation is published, which marks the end of the acquisition planning phase. Generally, the contracting officials for our cases did not feel that the length of time the solicitation was open was a reason only one offer was received, particularly because almost all of the solicitations were open for 30 days or more. In several cases, vendors requested more time and the contracting office extended the solicitation period beyond the initial 30 days, but the vendor still did not submit an offer. Vendors explained that they often have decided whether or not to bid before the final request for proposals is published and the 30-day solicitation period begins. For example, in one case, contracting officials told us that a vendor they had expected to compete called 2 weeks before the solicitation was posted to tell them they had decided not to submit an offer because they were reserving their resources to bid on another agency's contract. We spoke with some of the vendors identified in market research that did not submit offers for the awards we reviewed. Vendor representatives explained to us that the likelihood of their company choosing to make an offer is increased when they learn of a potential opportunity as early as possible and can engage with the government during the acquisition planning phase, before the solicitation is issued. For instance, when sufficient time is allowed, vendors can discuss draft requirements documents with the government to identify any language that might unnecessarily preclude their solution from being considered. Further, vendors need adequate time to conduct internal discussions and analysis about what they might offer that could compete successfully against an incumbent. There are also internal management reviews and decision points prior to approval to submit an offer. Contracting officials we spoke to identified a number of actions they generally take to try to increase competition, many of which come early in the acquisition planning phase. They also stressed that early communication with industry about planned procurements is critical to give industry enough time to plan resources and make business decisions about whether to prepare an offer. Additional actions identified include the following:
- Reviewing requirements internally during the presolicitation phase, including legal review, to ensure that they are not overly restrictive.
- Publishing draft requests for proposals and statements of work. For three awards we reviewed, officials published draft documents more than 6 months in advance of the solicitation. Contracting officers said that the questions received from industry in the draft phase helped ensure requirements were not written too restrictively.
- Holding industry days, which also allow subcontractors to find teaming partners.
- Allowing access to a "bidders library" of technical data and drawings to level the playing field with the incumbent contractor.
- Allowing for a long transition period to signal an ability and willingness to bring on a new contractor.
- Limiting the information requested from vendors to decrease the burden of preparing proposals.
Previously, we found that allowing enough time in the acquisition planning process—before a solicitation is published—is important to help ensure adequate competition. In 2010, we found that program officials play a significant role in the contracting process—particularly in the acquisition planning process while developing requirements, performing market research, and interfacing with contractors—which can influence competition. Contracting officials noted that program offices sometimes do not allow enough time to execute a sufficiently robust acquisition planning process that could increase opportunities for competition. They told us that program offices are insufficiently aware of the amount of time needed to properly define requirements or conduct adequate market research. In 2011, we found that none of the agencies we reviewed had measured or provided guidance on the time required to perform key steps during the presolicitation acquisition planning phase. We recommended that they collect the information needed to establish time frames for when program officials should begin acquisition planning. The agencies we reviewed had varied responses to this recommendation, and one agency has taken initial steps to establish these time frames. When a long-standing incumbent contractor has been performing well, contracting officials said, vendors do not perceive a good chance of winning, regardless of the government's desire for competition, and therefore do not bid. Contracting officials told us that the most common questions they get during the solicitation period are about who the incumbent is and whether its performance has been satisfactory. In addition, contracting officials said vendors are selective about making offers to keep proposal costs—which factor into their overhead rates—low in order to remain competitive on other awards. In making their business decisions about whether or not to submit offers, vendors told us that they look for signals about whether the government is willing to accept some risk by replacing the incumbent. For example, for one award we reviewed, the solicitation included a 6-month transition period in an attempt to signal to vendors that the program was willing to take the time to bring a non-incumbent vendor on board. For another contract we reviewed, a vendor told us they did not submit an offer because, based on interactions with the government in the acquisition planning phase, they believed the government was unwilling to take the risks that bringing in a new solution might introduce, and therefore the vendor's chances of unseating the incumbent were too low to justify the expense of putting together a proposal. In addition, other vendors told us they also consider various factors before submitting a proposal, such as the cost of developing proposals, their ability to provide the services, rapport with government personnel, and the potential financial gain from the procurement. Federal internal control standards call for managers to identify and analyze risks and to decide what actions should be taken to manage them. For competitive acquisitions, this would include the risk that only one offer might be received. For the awards we reviewed, however, contracting officers seldom collected information about the reasons only one offer was received, which could limit their ability to revise acquisition strategies appropriately or plan for future competitive acquisitions.
In most cases, contracting officials anticipated they would receive more than one offer and told us they were surprised that only one was received. However, for 11 of the 15 awards we reviewed, contracting officials did not have information from non-bidding vendors about why those vendors chose not to submit an offer. For instance, according to the program director for one award, MDA has very limited insight into the reasons vendors choose not to submit offers. Although officials responsible for this award said they had been very surprised that only one offer was received, they had not followed up with the other potential vendors identified during the almost 2 years they had spent preparing for the competition. There is no requirement to engage with the vendor community to learn why vendors chose not to submit offers. In October 2009, OFPP issued guidance to help federal acquisition leaders evaluate the effectiveness of their agencies' competition practices. The guidance included recommendations to engage the marketplace to determine how barriers to competition can be removed. This guidance recommended that agencies encourage their contract and program staff to speak to vendors, including leading competitors and others that expressed interest in the procurement but ultimately did not submit offers, to understand the basis for their decision not to participate. In 2010, we recommended that OFPP determine whether the FAR should be amended to require agencies to regularly review and critically evaluate the circumstances leading to only one offer being received and to identify additional steps that can be taken to increase the likelihood that multiple offers will be submitted. OFPP agreed with our recommendation but, to date, has not taken steps to implement it. In addition, DOD has not conducted a formal study of the reasons only one offer is received, and the one-offer rules do not reflect this type of evaluation. Understanding the reasons only one offer was received can inform whether to revise acquisition strategies going forward. In two cases, contracting teams collected information from vendors that did not bid to understand their reasons. For one award, the contracting office requested additional information from the eight vendors with "no bid" responses on a multiple award task order contract. Six vendors felt they did not have the experience necessary to meet the requirements, and two vendors stated that they were partnering with the sole offeror as subcontractors. In a separate case, contracting officials did not reach out to potential bidders but observed from the proposal that potential competitors had teamed with the sole offeror as subcontractors instead of choosing to compete. In the other case in which information was collected, contracting officials said they learned that some vendors were in a teaming relationship with the incumbent that they did not want to jeopardize. Officials said that another vendor explained that the release of this solicitation coincided with 24 other solicitations and that, if this solicitation had come out later, it would have been easier for them to submit an offer. Based in part on this information, the Navy changed its acquisition strategy to decrease the period of performance from 5 years to 2 years, allowing for another competition sooner than planned. Officials told us they may make other changes to the acquisition strategy for the next procurement as well, including breaking the requirement into pieces and using different contract vehicles.
In contrast, we reviewed another award that was initially planned to be a multiple award task order contract, under which competition would continue on future task orders. When only one offer was received, DOD went forward and awarded a single award task order contract, with 1 base year and 4 option years. Under this arrangement, the agency will not get the benefit of additional competition for task orders. In several instances, too much time had passed for vendors to provide us with information about the reasons they chose not to bid on our selected contracts, either because the individuals involved were no longer with the company or because they could not recall the specific cases. With workforce turnover in the government and industry, the best time to collect information about the reasons vendors do not submit offers is likely before or soon after award. This may be less important if the requirement is not anticipated to recur in the future. For instance, we reviewed two awards for which contracting officials told us they did not seek information about the reasons only one offer was received because they expected these to be the last contracts awarded for these requirements. DOD's goal is to increase competition annually and strengthen competition in its acquisition of products and services. Half of the justifications we reviewed stated that the lack of technical data rights was a barrier to competition. DOD's BBP initiative requires programs to outline an approach to manage their data rights needs and to use open systems architecture where feasible. This should help DOD obtain the appropriate data rights and use open systems architecture to increase competition throughout a program's life cycle, saving taxpayer dollars while providing the best available technology to the warfighter. DOD also has established a goal of increasing effective competition—where competitive procedures are used and more than one offer is received. However, the department will have difficulty accomplishing this goal without focusing its attention on the factors that affect vendor business decisions. DOD's current regulations help decrease some of the risks of one-offer awards but focus on steps that occur too late in the process to effectively engage industry in competition. Enhancing the department's acquisition planning guidance to ensure enough time and attention are provided for early vendor engagement could help encourage multiple offers. There will always be instances when the government cannot change vendors' business decisions. However, the department is less able to make an impact on future acquisitions—or to adjust current acquisition approaches—for specific procurements if it lacks information about the reasons vendors chose not to submit offers. The department could mitigate the risk of future limitations on competition by seeking more information in certain cases, such as for high dollar value procurements or when it is likely the agency will repeat the procurement in the future. We recommend that the Secretary of Defense take the following two actions to continue to enhance competition:
- Ensure that existing acquisition planning guidance promotes early vendor engagement and allows both the government and vendors adequate time to complete their respective processes to prepare for competition.
Establish guidance for when contracting officers should assess and document the reasons only one offer was received on competitive awards, including reviewing requirements to determine if they are overly restrictive and collecting feedback from potential vendors about the reasons they did not submit offers, taking into account dollar value and the likelihood the requirement is a recurring need. We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with our recommendations. DOD’s comments are reprinted in appendix IV. In responding to the first recommendation, DOD plans to issue guidance to acquisition planners to provide sufficient time for vendors to review requirements and interact with government officials. We agree that it is important for DOD to effectively engage industry early in the acquisition process to mitigate factors that may hamper competition. In concurring with our second recommendation, the department agreed to provide guidance to contracting officers on the need to obtain feedback from vendors who expressed interest during the market research phase of competitive solicitations but who did not submit a proposal. We believe that it is important that DOD assess and document the reasons only one offer was received on competitive awards, because doing so could help to promote future competition. DOD also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix V. The objectives for this review were to examine (1) the trends in DOD’s use of competitive awards, (2) the extent to which justifications for exceptions to competitive procedures were adequate and the reasons for the exceptions, (3) how DOD’s strategies aimed at promoting long-term competition are changing behavior, and (4) the extent to which DOD’s recent requirements address the reasons why only one offer was received for competitive solicitations. To address these objectives, we used data in the Federal Procurement Data System-Next Generation (FPDS-NG), which is the government’s procurement database. We assessed the reliability of FPDS-NG data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) comparing reported data to information from the contract files we sampled. We determined that the data were sufficiently reliable to examine the trends in DOD’s use of noncompetitive awards and the factors influencing DOD’s competition rate, including the number of awards, dollar amount obligated, and the percentage of contracts awarded competitively overall and by component. To further examine the trends in DOD’s use of noncompetitive awards, we used data from FPDS-NG to identify DOD obligations under competitive and noncompetitive contracts from fiscal years 2009 through 2013, the five most recent years for which complete data were available.
For the purposes of this report, we defined noncompetitive obligations to include obligations through contracts that were awarded using the exceptions to full and open competition listed in Federal Acquisition Regulation (FAR) Subpart 6.3 (Other than Full and Open Competition). We also included noncompetitive orders issued under multiple award indefinite delivery/indefinite quantity contracts or under the General Services Administration’s schedules program. Specifically, we identified contracts and task orders funded and contracted by DOD. For competitive contract actions, we included contracts and orders coded as “full and open competition,” “full and open after exclusion of sources,” and “competed under simplified acquisition procedures,” as well as orders coded as “subject to fair opportunity” and as “fair opportunity given” and “competitive set aside.” For noncompetitive contract actions, we included contracts and orders coded as “not competed,” “not available for competition,” and “not competed under simplified acquisition procedures,” as well as orders coded as an exception to “subject to fair opportunity,” including “urgency,” “only one source,” “minimum guarantee,” “follow-on action following competitive initial action,” “other statutory authority,” and “sole source.” We calculated competition rates as the dollars obligated annually on competitive contracts and orders as a percentage of dollars obligated on all contracts and orders. We examined the competition rate from fiscal years 2009 through 2013 at the DOD level and at four components: the Air Force, Army, Missile Defense Agency (MDA), and Navy. We also reviewed competition reports published by DOD and the military services. In addition, to obtain insight into what was being purchased noncompetitively, we analyzed product service code data for the products or services that had the highest noncompetitive obligations. These codes indicate what was bought for each contract action reported in FPDS-NG. For fiscal years 2009 through 2013, we analyzed the competition rate for products, non-research and development (R&D) services, and R&D services. We also used FPDS-NG data to determine the impact of foreign military sales (FMS) awards on DOD’s and the components’ competition rates. For FMS awards, we included contracts and orders coded as “foreign funds FMS” in FPDS-NG. We also assessed the exceptions cited in FPDS-NG for new noncompetitive DOD contracts and task orders in fiscal years 2009 through 2013. To review the extent to which justifications for exceptions were adequate and the reasons for the exceptions, we examined FAR Part 6 (Competition Requirements), Subpart 8.405 (Ordering Procedures for Federal Supply Schedules), and Subpart 16.505 (Indefinite-Delivery Contracts, Ordering). To determine if recent justification documents complied with these requirements, we randomly selected 15 contracts and orders coded as noncompetitive in FPDS-NG. Specifically, we identified the 15 two-digit product service code categories with the highest dollar obligations. We then randomly selected one contract or task order for each of these 15 product service codes from April 1, 2012, through March 31, 2013.
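To make the rate calculation concrete, the following is a minimal sketch of the competition-rate computation described above. The field names and code strings are hypothetical simplifications rather than FPDS-NG’s actual data element names, and the dollar figures are invented for illustration.

```python
# Minimal sketch of the competition-rate calculation described above.
# Field names and code strings are hypothetical simplifications; FPDS-NG's
# actual data elements and coded values differ in form.

COMPETITIVE_CODES = {
    "full and open competition",
    "full and open after exclusion of sources",
    "competed under simplified acquisition procedures",
    "fair opportunity given",
    "competitive set aside",
}

def competition_rates(actions):
    """actions: iterable of dicts with 'fiscal_year', 'extent_competed',
    and 'obligations' (dollars). Returns {fiscal_year: percent competed}."""
    totals, competed = {}, {}
    for action in actions:
        fy = action["fiscal_year"]
        totals[fy] = totals.get(fy, 0.0) + action["obligations"]
        if action["extent_competed"] in COMPETITIVE_CODES:
            competed[fy] = competed.get(fy, 0.0) + action["obligations"]
    return {fy: 100.0 * competed.get(fy, 0.0) / total
            for fy, total in totals.items() if total > 0}

# Invented example: $570 of $1,000 obligated competitively yields 57 percent.
actions = [
    {"fiscal_year": 2013, "extent_competed": "full and open competition",
     "obligations": 570.0},
    {"fiscal_year": 2013, "extent_competed": "not competed",
     "obligations": 430.0},
]
print(competition_rates(actions))  # {2013: 57.0}
```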
We ensured that our selection contained only contracts or orders with base and option values exceeding $650,000 and that it included at least one award from each of the Air Force, Army, MDA, and Navy. We also excluded from our review contracts or orders awarded under simplified acquisition procedures and noncompetitive orders that were not subject to multiple award fair opportunity, as well as exceptions that do not require a justification, such as international agreements. Our sample was reduced to 14 when we removed 1 contract because it was miscoded. Our final sample included 3 Air Force awards, 5 Army awards, 1 MDA award, and 5 Navy awards. See appendix II for more details on the selected noncompetitive awards. For the awards in our sample, we requested and reviewed the signed justification and approval documents and additional documentation in the contract files, including the first page of the signed contract, acquisition plan, price negotiation memorandum, documentation of market research, statement of work/performance work statement, and documentation, if any, that the justification was posted on the Federal Business Opportunities website, including the dates posted. We assessed justifications and additional documentation for the 14 selected contracts or task orders against elements in the FAR such as content, timing, approval, and public availability. As needed, we contacted contracting officials involved with awarding these contracts to obtain additional information so we could better understand the analysis conducted that resulted in the decision to award these contracts or orders noncompetitively. To study how DOD’s strategies aimed at promoting long-term competition are changing behavior, we selected a nongeneralizable sample of 10 major weapons systems programs. Our selection was based on 31 responses received from program offices on a questionnaire developed for GAO’s fiscal year 2013 weapons assessment. For that report, GAO sent a questionnaire to 65 defense acquisition programs and sub-elements of programs to determine the extent to which programs were implementing acquisition reforms, and we analyzed the survey responses received. We selected 10 programs that had responded that the program may use, will use, or had used open systems architecture and had also responded that the program may acquire, will acquire, or had acquired technical data. We did not select programs that had responded that use of open systems architecture or acquisition of technical data rights would not take place, or programs that did not respond to these questions. We selected five major defense acquisition programs and five future major defense acquisition programs. Further, we made certain that our sample contained programs from the Air Force, Army, Navy, and MDA. To learn how these programs are using open systems architecture and acquiring effective data rights to promote competition, and what informed the process that led to these decisions, we contacted officials from each program to request interviews and we reviewed program documents. We also interviewed officials from the Office of the Secretary of Defense and the Office of the Secretary of the Navy on the use of open systems architecture and the acquisition of effective data rights. In addition, we interviewed competition advocates at the Air Force, Army, MDA, and Navy to discuss recent initiatives to promote long-term competition. We did not evaluate the entire program or the outcome of actions described to increase future competition.
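As an illustration of the noncompetitive-award selection described above (identify the product service code categories with the highest obligations, then randomly draw one qualifying award per category), here is a minimal sketch. The record layout, the omission of the date filter, and the fixed seed are assumptions made for brevity, not part of GAO’s actual methodology.

```python
import random
from collections import defaultdict

# Hypothetical simplified records; a real selection would also filter by
# award date (April 1, 2012, through March 31, 2013) and component coverage.
def select_noncompetitive_sample(awards, n_categories=15, value_floor=650_000,
                                 seed=1):
    """awards: iterable of dicts with 'psc' (product service code string),
    'obligations', and 'value' (base plus option values, in dollars)."""
    random.seed(seed)
    obligations_by_category = defaultdict(float)
    eligible_by_category = defaultdict(list)
    for award in awards:
        category = award["psc"][:2]  # two-digit product service code category
        obligations_by_category[category] += award["obligations"]
        if award["value"] > value_floor:
            eligible_by_category[category].append(award)
    top_categories = sorted(obligations_by_category,
                            key=obligations_by_category.get,
                            reverse=True)[:n_categories]
    # One randomly chosen qualifying award per top category, where one exists.
    return [random.choice(eligible_by_category[category])
            for category in top_categories if eligible_by_category[category]]
```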
To examine the extent to which DOD’s requirements address the reasons why only one offer was received for competitive solicitations, we examined DOD policies, regulations, and other related documents. To determine whether recent awards complied with the requirements, we reviewed a nongeneralizable sample of 15 contracts and task orders. Only awards for which one offer was received in response to a solicitation issued using competitive procedures, as coded in FPDS-NG, were included in the sample. The sample included the largest dollar value award from each of the 15 largest product service categories, measured by obligations, made from April 1, 2012, through March 31, 2013. For one product service code, we selected the second largest award to ensure that we reviewed at least one award from the following components: Air Force, Army, MDA and Navy. The sample included 4 Air Force awards, 5 Army awards, 1 MDA award, and 5 Navy awards. See appendix III for more details on the selected one-offer awards. For each selected award, we obtained evidence of the solicitation issuance and proposal due date, documentation of cost or price analysis, and other key information. We interviewed contracting officials involved with each award to understand the competitive environment for each award and the reasons why one offer was received. We also e-mailed or interviewed several vendors who had expressed interest in some of these awards but chose not to submit offers. We assessed recent DOD implementing regulations to determine whether key reasons for one-offer awards were addressed. We conducted this performance audit from May 2013 to May 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Penny Berrier, Assistant Director; Alexandra Dew Silva; Lisa Fisher; Laura Greifner, Julia Kennon; Jean McSween; Kenneth Patton; Jose Ramos; Roxanna Sun; and Alyssa Weir also made key contributions to this report.
Competition is the cornerstone of a sound acquisition process. In fiscal year 2013, DOD obligated over $300 billion through contracts and orders, of which 57 percent was competed. DOD also obligates billions of dollars annually on contracts that are awarded using competitive procedures, but for which the government received only one offer. DOD implemented the Better Buying Power initiative in 2010, in part to increase competition. The conference report accompanying the National Defense Authorization Act for Fiscal Year 2012 mandated that GAO report on DOD's noncompetitive and one-offer awards. GAO examined (1) the trends in DOD's use of competitive awards, (2) the extent to which justifications for exceptions to competitive procedures were adequate and the reasons for exceptions, (3) how DOD's strategies aimed at promoting long-term competition are changing behavior, and (4) whether DOD's requirements address the reasons only one offer was received for competitive solicitations. GAO analyzed federal procurement data for fiscal years 2009 through 2013; reviewed DOD policy and competition reports; examined two nongeneralizable samples of 14 and 15 awards, selected in part based on dollar value; and interviewed DOD officials. The Department of Defense's (DOD) competition rate for all contract obligations declined over the past 5 fiscal years, from 62 percent in fiscal year 2009 to 57 percent in fiscal year 2013, but remained flat for the past 2 years. In fiscal year 2013, the Army had the highest competition rate, 66 percent, while the Missile Defense Agency had the lowest, 29 percent. The 14 justifications for noncompetitive awards that GAO reviewed generally included the elements required by the Federal Acquisition Regulation, such as the authority permitting other than full and open competition. The majority of DOD's noncompetitive contracts and task orders (including all in GAO's sample) were coded under the “only one responsible source” exception to competition requirements. Seven of the 14 justifications explained that the awards could not be competed due to a lack of technical data. In these cases, DOD did not purchase the necessary data rights with the initial award. In some cases the justifications provided insight into how a lack of data rights resulted in reliance on a single vendor over time. DOD's focus on using open systems architecture and acquiring sufficient data rights—which DOD's Better Buying Power memo encourages—is influencing the way DOD acquires goods and services. Programs are trying to move away from dependency upon single suppliers for parts, maintenance, or upgrades and are moving toward open systems architecture, which allows components to be modified, replaced, or maintained by multiple suppliers. Some DOD programs have shown that using open systems architecture and obtaining data rights involves early consideration and extensive analysis of how each system can best use these approaches to maintain a competitive environment throughout a program's lifecycle. For example, an emphasis on open systems architecture and effective management of data rights resulted in increased competition for the Air Force's user equipment for the Global Positioning System and KC-46 Tanker Modernization programs. In 2010, DOD introduced requirements for competitive solicitations that result in only one offer; however, these rules are focused late in the acquisition process and DOD has limited insight into the reasons only one offer is received.
The 15 one-offer awards GAO reviewed generally satisfied DOD's rules, which require contracting officers to ensure adequate solicitation periods and conduct cost or price analysis. These rules were intended to help ensure more effective competition but may apply too late in the acquisition process. DOD contracting officials and vendors told GAO that engagement with vendors well before the 30-day solicitation period is key to ensuring vendors have adequate time to review draft requests for proposals, plan resources, provide feedback on potentially restrictive requirements, and determine whether to prepare proposals. Moreover, contracting officers for the contracts GAO reviewed seldom collected information about the reasons only one offer was received, which could limit their ability to revise acquisition strategies appropriately or plan for future competitive acquisitions. DOD's one-offer rules do not require contracting officials to engage with the vendor community to learn why vendors chose not to submit offers. However, contracting officials chose to do so in two sample cases, and in one case, based on this information, changed the acquisition strategy to allow for recompetition sooner than planned. DOD should ensure that existing acquisition planning guidance promotes early vendor engagement, and establish guidance for when contracting officers should assess the reasons only one offer was received on competitive awards. DOD concurred with these recommendations.
In 2004, the President issued Executive Order 13327 establishing the Federal Real Property Council (FRPC), composed of senior federal real property managers and representatives from the Office of Management and Budget (OMB) and the General Services Administration (GSA), among others. The executive order required FRPC to work with GSA to establish and maintain a single, comprehensive database describing the nature, use, and extent of all real property under the custody and control of executive branch agencies, except when otherwise required for reasons of national security. To meet this requirement, GSA, in coordination with FRPC, established the Federal Real Property Profile (FRPP) and provides guidance to agencies on how to annually report real property under the custody and control of executive branch agencies in three categories: land, buildings, and structures. Agencies are required to annually submit 23 separate data elements to FRPP for all of their structures. The data elements include basic inventory data (type, use, size, and location) and other elements (condition, replacement value, operating costs, congressional district, and historical status). Some of the FRPP data elements differ for structures as compared with the data for buildings or land. For example, while size is measured in standard square feet for buildings and acres for land, agencies should report a unit of measure based on the type of structure (such as linear feet for canals, or lane miles or square yards for roads and bridges). The FRPC also made some changes to the fiscal year 2013 FRPP guidance related to how agencies collect and report data for structures. For example, the FRPP will no longer contain information on mission dependency, will add a new field for repair needs, and will automatically calculate the condition index using the replacement value field and the newly created repair needs field. As we stated in the 2013 update to the High-Risk Series, although some progress has been made in obtaining data about federal real property, the government still lacks consistent, accurate, and useful data that could support strategic decision-making about federal real property. Internal control standards for federal executive branch agencies require that agencies have relevant, reliable, and timely information for decision-making and external-reporting purposes. OMB guidelines state that agencies should develop detailed guidance necessary for producing quality data. Among other things, OMB’s definition of quality requires that accurate, reliable, and unbiased information be presented in an accurate, clear, complete, and unbiased manner. These guidelines state that agencies should treat information quality as integral to every step in the life of that information, from creation and collection through maintenance and dissemination. We have also found that consistency means that the data are sufficiently clear and well-defined to yield similar results in similar situations. The Government Accountability and Transparency Board, established in 2011 to provide strategic direction for enhancing transparency of federal spending data, found that a lack of consistent data creates obstacles to transparency and accountability. In addition, it determined that consistent data promote more accurate and comparable data for improved reporting and decision-making.
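To illustrate the reporting scheme just described, here is a minimal sketch of a structure record with a type-dependent unit of measure. The field names, type labels, and class layout are illustrative assumptions only; they do not match FRPP’s actual data element names or its full set of 23 elements.

```python
from dataclasses import dataclass

# Illustrative only: field names and type labels are assumptions, not
# FRPP's actual data elements.
UNIT_BY_ASSET_TYPE = {
    "building": "square feet",
    "land": "acres",
    "canal": "linear feet",
    "road": "lane miles",  # square yards are also used for roads and bridges
}

@dataclass
class StructureRecord:
    asset_type: str            # e.g., "canal" or "road"
    use: str                   # basic inventory data
    size: float
    location: str
    condition_index: float     # auto-calculated starting in fiscal year 2013
    replacement_value: float
    annual_operating_cost: float
    repair_needs: float        # field added in the fiscal year 2013 guidance

    @property
    def unit_of_measure(self) -> str:
        return UNIT_BY_ASSET_TYPE.get(self.asset_type, "units")

canal = StructureRecord("canal", "irrigation", 52_800.0, "lat/long or address",
                        100.0, 1_200_000.0, 12_000.0, 0.0)
print(canal.size, canal.unit_of_measure)  # 52800.0 linear feet
```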
However, we also found in 2012 that collecting and analyzing data creates costs for federal agencies, as they must direct time and staff resources to this task, and we emphasized the importance of limiting the number of measures to the vital few considered essential for producing data for decision-making. In the fiscal year 2012 FRPP, federal agencies reported that they were responsible for over 480,000 structures. Of the nearly 176,000 structures that federal civilian agencies were responsible for, about 98 percent were owned by the federal government and about 2 percent were leased. The five agencies we selected for our review were responsible for 83 percent of civilian federal structures. The FRPP categorizes structures into 22 different types. The most commonly reported type was roads and bridges, followed by recreational structures, which include outdoor recreational structures such as athletic fields and courts, stadiums, golf courses, and ski slopes. Figure 1 shows the types of structures as reported by federal civilian agencies. Agencies take different approaches to defining and inventorying structures, making the aggregation of data in the FRPP’s database unreliable. Agencies we reviewed defined structures differently, leading to inconsistencies in what assets are included in the FRPP, including counting some building-like facilities as structures. We also found that the agencies we reviewed counted structures differently, provided inaccurate location information, and categorized their structures inconsistently, all of which limit the usefulness of their data on structures in the FRPP. Additionally, the agencies we reviewed submitted outdated or incorrect information for key data elements, such as the replacement value, annual operating costs, and condition. GSA officials that manage FRPP said that FRPC chose to provide flexibility in the reporting guidance to account for the wide diversity in federal structures, but the FRPP also aggregates the data as if they were comparable. Even if these data were useful, FRPC reports very little of what it collects from agencies, and officials at GSA told us that there is low interest in and demand for information on structures, a situation that creates few incentives to improve data reliability. In addition, OMB officials stated that their focus in recent years has been primarily on buildings relative to structures. In prior reports, we have stressed the importance of limiting the number of elements to the vital few that are considered essential for producing data for decision-making in light of the costs of collecting these data. Agencies we reviewed defined a structure differently when inventorying their assets. Differing definitions resulted in inconsistent data, as different types of assets were being labeled as structures across agencies (see table 2). Two of the agencies we reviewed—USDA and DOE—did not develop a standard definition for a structure. OMB guidelines state that agencies should develop detailed guidance necessary for producing reliable, consistent data. GSA real property officials responsible for the FRPP stated that they, in coordination with FRPC, chose to give agencies the flexibility to define structures themselves and that FRPP guidance, therefore, does not define a structure. For the agencies we reviewed, these different definitions led to the inconsistent identification of similar structures when aggregated across agencies, thereby reducing the reliability and accuracy of FRPP data on structures.
For example, VA has created an “other” category that is different from buildings, land, and structures. This category includes monuments, statues, and flagpoles—items other agencies report as structures in FRPP. While this approach may be legitimate for VA’s purposes as long as it is applied consistently, it will not create consistent information government-wide when aggregated in FRPP. Because USDA and DOE did not define what constituted a structure, the categorization of structures may vary by installation, resulting in inconsistent information within the agencies. For example, officials at DOE’s Lawrence Livermore National Laboratory (LLNL) and the Interior’s Bureau of Land Management (BLM) Fort Ord National Monument classified the landscaping at a location as a structure in FRPP. Conversely, officials at DOE’s Argonne National Laboratory did not classify landscaping, such as a fountain in front of a building, as a structure but considered it part of the building. This approach resulted in variation within DOE at the installation level, as well as being inconsistent with the FRPP guidance, which instructs agencies to report landscaping as a structure under “All Other.” Some facilities we visited were classified by some agencies as structures, even though they were similar to buildings (having features such as walls, roofs, doors, windows, and air-conditioning systems in some cases). Figure 2 shows some of these examples. GSA officials also said that different approaches to defining structures are legitimate under the guidance, but agreed that best practices require a consistent agency-wide approach. However, as a starting point to ensure that all agencies have a similar understanding of what constitutes a structure, the FRPC should update the FRPP guidance to include such a definition. Until this action occurs, there will be an increased likelihood that agencies will continue to define structures differently, thus negatively affecting the reliability of the data being collected. We found that officials across the agencies we reviewed also counted structures differently, undermining the accuracy of the number of structures when totaled nationwide. OMB guidelines state that agencies should develop detailed guidance necessary for producing reliable, consistent data. FRPP guidance does not instruct agencies on how to count structures, and GSA officials stated that agencies can use different approaches as long as they are consistently applied by those agencies. GSA officials that manage FRPP said that they recognize that flexibility in the guidance could result in differences in how agencies designate structures, thereby creating issues with how comparable the data are across agencies. We found inconsistencies in how officials at different agencies counted the same types of structures at the sites we visited. For example, officials at the DOE sites we visited aggregated primary roads into single entries in the FRPP, while officials at the DOT sites we visited generally listed each road as a separate entry. We also found that some officials at the agencies we visited separated features of a structure into multiple FRPP entries, while other officials included all features in a single structure entry. For example, at the Interior’s Bureau of Reclamation (BOR) sites we visited, officials who manage large structures, such as power plants, dams, and canals, generally grouped various portions of the supporting infrastructure related to a main asset into a single entry in the FRPP.
Conversely, officials at the FAA sites we visited generally disaggregated the components of structures into multiple entries in the FRPP because the agency officials track their expenditures in multiple systems, and renovations sometimes result in additional entries. See figure 3 for an example of how agencies count structures differently. We also found inconsistencies within agencies when counting structures. For example, Interior officials at the sites we visited would both aggregate and disaggregate roads. Officials at the BLM site we visited and at all four National Park Service (NPS) sites we visited listed each road separately, while officials at the BOR sites and at one of the Interior’s U.S. Fish and Wildlife Service (FWS) sites we visited sometimes included roads with related infrastructure, such as flood control dikes. Similarly, we found that officials at some of the sites we visited, such as the VA and DOE sites, combined all the parts of a utility system—such as sewage and electrical systems and their components—into one entry, while officials at other sites, such as NPS’s Prince William Forest Park in Virginia, separated certain parts of utility systems into multiple entries. Officials from these agencies told us that the decision to aggregate or disaggregate structures in the FRPP depends on how the assets are being managed. For example, Interior officials said that it makes sense to combine a road with an asset if the road is integral to and maintained along with that asset. However, this kind of variation undermines the reliability of both the aggregated agency data and the FRPP data. We also found that structures without operation and maintenance costs may not even be included in some real property databases. For example, officials at USDA’s Beltsville Agricultural Research Center (BARC) site said structures that existed before the creation of the FRPP do not appear in the database unless USDA has spent money to repair or replace them. As a result, USDA officials estimated that there were many more structures located at the BARC site, which is over 100 years old, than the 125 structures listed in the FRPP. However, FRPP guidance requires the inclusion of this information; the FRPP database is intended to be a comprehensive inventory of real property assets. Without guidance to ensure a common understanding, agencies will likely continue to count structures differently, thus negatively affecting the reliability of the information being collected. Although all of the agencies we reviewed provided location information for their structures, we found that the location data for several of these structures were inaccurate, thereby limiting the usefulness of FRPP data. FRPP guidance requires that location data be included in the database, but allows agencies flexibility in terms of the specificity used to identify the location of structures. For example, agencies may use the street address or longitude and latitude coordinates. However, the Standards for Internal Control in the Federal Government and OMB guidance state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results. In addition, GAO and OMB guidance state that one such result should be providing data that are consistent and reliable. At several sites we visited, local agency officials could not identify the structure using the data listed in the FRPP database.
For example, we found at VA sites that the local officials could not confirm that the pieces of infrastructure they showed us related to the sites’ utility systems, such as the power distribution or water systems, were part of the structures listed in the FRPP database. Even in instances where latitude and longitude coordinates were listed in the database, agencies’ officials had difficulties finding some structures. For example, local BOR officials at the San Luis Dam and Reservoir in California could not locate a recreational structure valued at about $96 million. Although there were specific coordinates listed in the FRPP for that structure, officials stated that there was nothing there that could be considered a recreational structure worth that amount. While local officials were unable to use the FRPP location information to identify this structure, headquarters agency officials explained that this structure includes a variety of recreation components, including roads and other amenities, located within that geographical area and that the latitude and longitude represented a specific point within that area, not a specific structure. At some of the sites we visited, officials told us they had difficulties determining the location of larger structures, such as canals, roads, airport runways, and irrigation systems, some of which can span several miles and even cross county and state lines. In these instances, agency officials told us they measure coordinates at one end or at the center of a structure, which does not accurately capture the structure’s location. For example, at one of the BOR sites we visited, officials said they measure the geographic coordinates at the beginning of canals that can span over 100 miles. Officials from two agencies we reviewed do not use geographic coordinates to identify structures’ locations; they said that measuring the coordinates is too challenging and that there is limited value in expending the resources needed to measure coordinates for structures that are already known to the local facility managers at their sites. Instead, officials use a central address for the structures they manage, which could be a long distance from the actual structure. These challenges may result in inaccurate location data on structures in the FRPP. We also found inconsistencies in how officials categorize structures that may limit the usefulness of the data. OMB guidelines state that agencies should develop detailed guidance necessary for producing reliable, consistent data. As stated above, FRPP guidance categorizes structures into 22 different types and includes brief descriptions of the types of structures for each category. The guidance allows similar structures to be categorized differently. For example, FRPP guidance describes three different categories into which dams could fit: power development and distribution, reclamation and irrigation, and flood control and navigation. Dams could thus be included in these FRPP categories along with other non-dam structures (such as power plants, canals, or docks). We found that five different dams in five locations, all of which served different purposes, were categorized differently in the FRPP database (see fig. 4 below). This makes it even more difficult to make decisions based upon the numbers of structures in the categories because the categories comprise such varied assets, with some similar assets, such as dams, reported in different categories.
Officials from sites we visited at three of the five agencies we reviewed told us that they have difficulty identifying the appropriate category for structures because the agencies’ structures vary and have unique characteristics. As a result, these agencies frequently use the FRPP catchall, “All Other,” category when their structures do not fit within the other 21 categories; civilian agencies used the “All Other” category 23,294 times in fiscal year 2012, elevating it to the third-largest category, accounting for 13.2 percent of structures listed. For example, during our site visits, officials showed us examples of structures that were categorized as “All Other” in the FRPP database, such as fences, sidewalks and paths, observation decks and platforms, lagoons, and signs. These structures are legitimately reported in the “All Other” category, but the FRPP database does not allow for further disaggregation, limiting the usefulness of the category for identifying the type of structure. GSA officials recognized that the 22 categories of structures in the FRPP do not capture the wide variety of structures agencies operate. However, the GSA officials said that because of a lack of detail as to what the agencies include in the “All Other” category, they are unsure of what category additions or changes they should make. These officials acknowledged that it would be difficult to develop a comprehensive list of categories. Wide use of the “All Other” category reduces the usefulness of FRPP for managing structures by limiting the amount of detail that the database can have. Key FRPP elements for structures—replacement value, annual operating costs, and condition—are not reliable because some of the data submitted by the agencies we reviewed are outdated or incorrect. GSA officials said that while they have taken steps to improve data quality, they ultimately rely on the agencies, which are required to certify that the data they transmit to FRPP are complete and reliable. Specifically, GSA officials said that agency submissions are not altered in any way once submitted to FRPP, meaning that any inaccuracies originated at the agency level. FRPP guidance states that the intent is for agencies to define their own guidance and regulations for implementing the replacement value formula found in the guidance. It also states that GSA and DOD have published cost guidance that can be used by other agencies. Officials from DOT and local officials from one USDA site we visited told us they use DOD’s Facilities Pricing Guide for estimating the replacement value of certain structures. As stated earlier, the Standards for Internal Control in the Federal Government and OMB guidance state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results, and GAO and OMB guidance state that one such result should be providing data that are consistent and reliable. Without this guidance, agencies cannot ensure that they are collecting and reporting consistent and reliable data on their structures internally, data that are then submitted to FRPP. For example, some officials told us that replacement values in the FRPP seemed too high or too low, and other officials told us the replacement value did not match or come close to matching the replacement value listed in their agencies’ own property-management databases (see fig. 5). However, because local agency officials were often not the ones entering data into the agencies’ real property databases or entering the data into the FRPP database, they could not explain why there were differences between FRPP and their agency’s own property databases for these structures.
To calculate the replacement value for all structures annually as required by FRPP guidance, the agencies we reviewed reported using a number of different methods, including adapting cost models designed for buildings to structures, escalating the original cost of constructing structures, and relying on estimates made by local officials and experts. While there are cost-estimating models available for calculating the replacement value of a building, there are no standard estimation methods available for all types of structures. Officials from Interior, VA, and DOE said that the lack of standard cost-estimating models for structures makes estimating replacement values more challenging and could introduce variation into the estimates as agencies develop their own models. Although the agencies we reviewed identified annual operating costs for structures as required by FRPP guidance, we found that these costs were not always accurate, thereby reducing the consistency and reliability of FRPP data. As stated earlier, the Standards for Internal Control in the Federal Government and OMB guidance state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results. In addition, GAO and OMB guidance state that one such result should be providing data that are consistent and reliable. Officials from one USDA site we visited said that none of the annual operating costs listed in the FRPP are accurate because the calculation is based on a standard formula of 1 percent of the replacement value, which does not reflect the structures’ true costs. USDA officials agreed that their approach to calculating the operating costs did not produce accurate results at the individual structure level, but they said that USDA does not have the capacity to collect operating cost data for individual structures. Although FRPP guidance does not address the issue, GSA officials recognize that agencies must estimate the operating costs if they do not have the capacity to track operating costs for individual structures. We also found instances where local officials at sites we visited told us the annual operating costs listed in the FRPP are inaccurate. For example, officials at several of the sites we visited could not explain why some structures had zero operating costs listed in the FRPP or why some of the costs listed in the FRPP were high compared to the amount of maintenance performed on the structure. However, headquarters officials with VA said that structures listed as having zero operating costs, such as the committal shelter at the Quantico (VA) National Cemetery shown in figure 6, are included in the operating costs under the associated building or land entries in FRPP. Additionally, we found that some agencies reported the costs of structures operated and maintained by other entities. For example, officials at the BOR sites we visited in Arizona and California reported annual operating costs of $1.9 million and $3.8 million attributed to the Mark Wilmer Pumping Plant and the Delta-Mendota Canal, which includes the Tracy C.W. “Bill” Jones Pumping Plant. However, these two assets are fully managed and operated by state and district entities and are fully funded from the revenues generated from the sale of water and electricity.
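To show why the shortcut described above misstates individual structures’ costs, here is a one-function sketch of the 1 percent formula. The function name and the dollar figure are illustrative assumptions, not USDA’s actual implementation.

```python
def estimated_annual_operating_cost(replacement_value: float) -> float:
    # The shortcut described above: 1 percent of replacement value. Two
    # structures with equal replacement values get equal estimates even if
    # one is heavily maintained and the other sits unused, which is why the
    # figures can be inaccurate at the individual structure level.
    return 0.01 * replacement_value

print(estimated_annual_operating_cost(5_000_000))  # 50000.0
```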
Interior officials told us these costs are to be reported because the management agreement is term-limited and future responsibility for maintenance remains a federal liability, and the FRPP guidance instructs agencies to report annual operating costs for structures managed by other entities. However, the actual costs paid by the federal government on an annual basis may be overestimated in the FRPP because, according to local Interior officials, a portion of those costs is paid from revenues collected by the state and district entities. We also found that USDA reported the annual operating costs for a bullet trap on a weapons range located at BARC, but the bullet trap is fully maintained and operated by a separate federal entity. USDA officials said that they were following FRPP guidance as they understood it. Officials from the agencies we reviewed also noted that it is often challenging to calculate annual operating costs for structures and that different approaches may be used for different structures within agencies. For example, agency officials at some sites we visited told us they use associated costs—such as labor, utilities, and maintenance—to report annual operating costs. However, officials from DOE and VA sites that we visited told us that these data can be difficult to calculate at the individual asset level. Instead, they apportion certain types of costs, such as for electricity, evenly across certain structures. Different approaches within agencies undermine the consistency and, consequently, the reliability of the operating-cost data for these agencies. Although the agencies we reviewed reported the condition of their structures as required by FRPP guidance, the FRPP data on the condition of these structures were not always accurate. According to FRPP guidance, the condition index is a general measure of a constructed asset’s condition and is calculated using the ratio of repair needs to replacement value. We found numerous examples at the sites we visited where the listed FRPP condition did not match the observed condition of the structure. For example, we found a parking lot and a road at one FAA site that had a condition index listed as zero in the FRPP (which, according to FRPP guidance, represents the worst possible condition for the asset), and we found cooling towers at DOE’s LLNL that had a condition index listed as 100. However, we did not find these structures to be in critically poor or excellent condition, respectively. Officials from Interior, VA, and DOE told us they will also sometimes submit zero-dollar repair needs to FRPP for structures that they no longer use and may ultimately dispose of, even though the structures may be in disrepair, because doing so allows the agencies to prioritize funds for other assets. However, this can result in inaccurate reporting of structures’ conditions in the FRPP because the condition index calculation relies on the amount of repair needs. For example, agency officials responsible for FRPP data at FWS’s Don Edwards San Francisco National Wildlife Refuge calculated the condition index for a historic cannery as 100, which would indicate the best condition reportable. However, as shown in figure 7, the condition index of the structure does not match its true condition.
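To make the calculation concrete, here is a minimal sketch of a condition index computed from repair needs and replacement value. The exact FRPP formula is not reproduced in this report, so the formulation below is an assumption chosen to be consistent with the behavior described above: zero reported repair needs yields 100, the best reportable condition, and higher repair needs drive the index toward zero, the worst.

```python
def condition_index(repair_needs: float, replacement_value: float) -> float:
    # Assumed formulation, consistent with the examples in this report:
    # zero repair needs -> 100 (best); repair needs approaching the
    # replacement value -> 0 (worst). The actual FRPP formula may differ.
    if replacement_value <= 0:
        raise ValueError("replacement value must be positive")
    return max(0.0, 100.0 * (1.0 - repair_needs / replacement_value))

# The historic battery example below: roughly $200 million in replacement
# value with about $450,000 in deferred maintenance reported as zero.
print(condition_index(0, 200_000_000))        # 100.0 -- as listed in FRPP
print(condition_index(450_000, 200_000_000))  # ~99.8 -- if repairs were reported
```

Note that under this formulation, assets with very high replacement values would show a near-perfect index even if their deferred maintenance were reported, which suggests the index alone understates condition problems for such assets.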
Agency officials told us the reason the condition is listed as 100 is that they entered zero dollars needed for repair to free funds for other structures, as this structure is not open to the public and does not currently serve a purpose on the FWS property. Also, officials told us that if there is no intent to repair or maintain a structure, there is no reason to spend limited resources to complete a comprehensive condition assessment. Officials from NPS’s Golden Gate National Recreation Area told us that a historic battery built for use during World War I and valued at almost $200 million had about $450,000 in deferred maintenance, but we found its condition listed as 100 in the FRPP. According to officials at USDA’s BARC site, all of the listed conditions for the structures we viewed at the site were inaccurate. These officials could not explain why the condition index, calculated by an independent contractor, was the same for all but one of the structures, and they agreed these metrics were not reflective of the structures’ true conditions. As stated above, the FRPC has made changes for fiscal year 2013 by having the FRPP automatically calculate the condition index. While the formula has not changed, the FRPC has also revised FRPP guidance to require that agencies report all repair needs, including repairs agencies do not plan to make. OMB staff told us they hope this will prevent agencies from submitting inflated condition data. Even if data on federal structures were reliable, FRPC reports very little of what it collects from agencies. Internal control standards for federal executive branch agencies require that agencies have relevant information for decision-making and external-reporting purposes. For fiscal year 2012, public access to FRPP data is limited to a 23-page, high-level summary report for all 361,318 federal buildings, 485,866 federal structures, and about 44 million acres of federal land. The high-level summary report includes aggregated data on 5 of the 23 elements that agencies are required to submit. This raises questions about the importance of aggregating structures data government-wide. As stated earlier, in previous reports we have stressed the importance of limiting the number of elements to the vital few that are considered essential for producing data for decision-making in light of the costs of collecting these data. This will help agencies focus their limited resources on ensuring that those vital elements are reliable. In addition, GSA and OMB officials said there is low interest in and demand for government-wide information specifically related to structures, resulting in little incentive to make improvements. They said the majority of requests for information from users of FRPP data—the administration, the Congress, and the public—are related to buildings, not structures, so they have focused their efforts on improving and more extensively evaluating FRPP data related to buildings. Based on conversations we had with OMB staff, FRPP building data may be of more interest than structure data because buildings that are occupied by federal workers and visited by the public have safety, security, and resale factors that do not generally exist for structures. Buildings also can have value to the private sector, making them targets for sale, while structures are less likely to have commercial value and (like some federally owned buildings) could also be located inside a large federal land area.
Figure 8 illustrates how some structures likely have no private-sector value. While OMB staff acknowledged that the FRPP’s data on structures have reliability issues and that there is lower demand for information on structures compared to buildings, they also said that structures represent investments of taxpayer money and that, as such, agencies should continue to track their structures because the data are valuable to agency officials. Agency officials consistently said that they would continue tracking their structures even if they did not submit the data annually to the FRPP. However, some agencies might not track the same elements. For example, FAA officials said that the agency only tracks the congressional district of each of its structures because it is a required element in the FRPP guidance. We found that agencies generally face similar challenges in managing structures as they do in managing buildings. All agency officials we spoke with stated that most challenges centered on prioritizing resources to maintain structures, ensuring the safety and security of structures, and disposing of excess structures. Officials from all the agencies we spoke with stated that prioritizing resources is their primary challenge in managing structures. For example, Interior officials stated that their major challenge is to maintain Interior’s mission-critical assets, as current funding levels are less than half of the minimum they consider necessary to sustain acceptable conditions. We have previously developed criteria for addressing real property maintenance backlogs based on National Academy of Sciences reports on maintenance and repair of federal facilities. Our criteria include, among other things, setting priorities among the outcomes to be achieved from maintenance activities, identifying critical assets to invest in, and analyzing tradeoffs and optimizing results from competing investments in maintenance. Interior officials told us they provide department-wide guidance for capital investment strategies. Following this guidance, for example, NPS has developed an investment matrix that combines mission criticality and historic importance with the amount of deferred maintenance to determine which structures to invest in first. Officials in one FAA region are using their own database to determine the maintenance costs for structures to prevent failure of mission-critical structures that support the national airspace. FAA is currently undertaking an agency-wide initiative to address its deferred maintenance needs for these structures. Agency officials at the sites we visited stated that some security and safety challenges were related to the location and condition of structures for which they were responsible. One of the security concerns mentioned by some NPS and BLM officials was that some structures were spread out over the park or other federal land area, making it challenging to ensure their security. For example, NPS’s Golden Gate National Recreation Area has over 500 structures spread out over 60 miles around the San Francisco Bay area, and BLM’s Fort Ord National Monument has 67 structures spread out over 7,200 acres. NPS officials also mentioned that there are challenges in securing nationally significant structures in national parks while encouraging the public to visit those structures and working to provide a favorable experience during their visits. Other structures present security and safety challenges different from those of buildings.
NPS officials stated that some structures may be in rugged terrain and have multiple points of approach, making security and safety more challenging than for buildings, where security and safety features can be built in and access controlled more easily. The BOR’s 117-mile-long Delta-Mendota Canal in California, which provides critical water resources to southern California, presents a security challenge due to its length through sparsely populated areas, as well as a safety challenge, as parts of the canal are open to the public (see fig. 9). To mitigate the safety risk presented by the canal’s swiftly flowing waters, safety lines have been installed to help people climb out of the canal if they fall in. Federal agencies we reviewed have structures that they are not utilizing. However, agencies struggle to dispose of excess structures, and in some cases they may simply leave the obsolete structures in place. For example, many structures at USDA’s Agricultural Research Service’s Beltsville research facility in Maryland are no longer used but remain in place on the 6,700-acre site. Figure 10 shows a water tower on the site that has been unused for years and that is slowly being recaptured by nature. Federal agencies are required to report 23 separate elements for every one of their structures to FRPP every year, but the data have two types of reliability problems. First, at the most basic level, some of the data agencies submit on their structures are incorrect, undermining agencies’ ability to manage their structures and the reliability of the data in FRPP. Agencies must improve their data quality in accordance with OMB’s guidelines in order to document performance and support decision-making. Second, even if agencies effectively apply the OMB guidance, the government-wide data will continue to face reliability problems because of the flexibility built into FRPP guidance on how agencies track key elements, such as defining and counting structures. FRPC chose not to establish a clear definition for structures, but a clear demarcation between buildings and structures would be useful for ensuring that FRPP’s data related to buildings are complete and that agencies do not use the flexibility they have in defining structures to include assets that are more appropriately considered buildings. Better defining structures alone, however, will not change the fact that reasonable differences in how agencies track their structures create inconsistencies when FRPP data are aggregated government-wide. For vital information, it would be worth the time, resources, and effort needed to harmonize agency approaches. However, while agencies need to track structures for their own purposes, it is unclear whether it is necessary to aggregate the information government-wide. GSA and OMB officials said that demand for structures information is low, and FRPC only summarizes selected elements of the data annually, most of which relate to buildings, not structures. To better ensure the quality of both the more detailed data that agencies collect on their structures and the summary information submitted in the FRPP, we recommend that the Director of OMB, in collaboration with FRPC, develop guidance to improve agencies’ internal controls to produce consistent, accurate, and reliable data on their structures.
To better ensure the quality of the data in FRPP and focus agency resources to consistently account for structures, we recommend that the Administrator of GSA, in collaboration with the FRPC, take the following two actions: Issue guidance to federal agencies clarifying the definition of structures. This clarification should ensure that building-like structures are identified as buildings. Assess the feasibility of limiting the data elements agencies are required to submit to the FRPP for structures. We sent a copy of this report to the Director of the Office of Management and Budget; the Administrator of the General Services Administration; and the Secretaries of Agriculture, Energy, the Interior, Transportation, and Veterans Affairs for their review and comment. OMB generally agreed with our findings and recommendation and made technical comments, which we incorporated as appropriate. GSA agreed with our recommendations and provided its action plan for addressing the recommendations. GSA’s response is reprinted in appendix II. USDA, DOE, Interior, and VA provided technical comments, which we incorporated as appropriate. DOT did not have any comments on the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Management and Budget; the Administrator of the General Services Administration; and the Secretaries of Agriculture, Energy, the Interior, Transportation, and Veterans Affairs. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to determine (1) the scale and scope of federally owned or leased structures, (2) how federal agencies track and categorize federal structures, and (3) the extent to which the challenges the federal agencies face in managing buildings also apply to structures. To address these objectives, we reviewed pertinent laws, regulations, policies, and other documents related to federal real property management. The primary source of government-wide federal real property information is the Federal Real Property Council’s (FRPC) Federal Real Property Profile (FRPP). We reviewed guidance from the FRPC regarding structures, including the Guidance for Real Property Inventory Reporting for the FRPP. We obtained FRPP summary data from fiscal years 2010 and 2012, the most recent data available, for structures owned and operated by the federal government. We recently reported that the FRPC has not followed data collection practices that would help it collect FRPP data in a way that is sufficiently consistent and accurate to support property management decisions. We recommended that GSA develop a plan to improve FRPP data. GSA agreed with the recommendation but has not yet finished implementing it. Nonetheless, we also found that the FRPP can be used in a general sense to track federal real property.
As such, for this report and a similar report using FRPP data, we determined that FRPP data were sufficiently reliable for limited purposes, such as identifying agencies within our scope, selecting site visit locations, summarizing agency-level statistics for structures, and comparing against agency source data on structures for our selected agencies.

We identified five civilian real property-holding agencies for our review: the Departments of Agriculture (USDA), Energy (DOE), the Interior (Interior), Transportation (DOT), and Veterans Affairs (VA). On the basis of the latest FRPP summary data for federal structures available, these five agencies reported being responsible for approximately 83 percent of all federal civilian structures. We selected these agencies using the following criteria, as reported to the FRPC for inclusion in the FRPP: number of structures, diversity in types of structures, and high replacement values and operations and maintenance costs. These agencies gave us a diverse array of structures to review; high reported replacement values and annual operating costs (a reported combined $5.9 billion in operating costs per year); and a mix of challenges, such as sensitive security concerns or critical systems used in operating a large hospital. We excluded the Department of Defense (DOD) agencies because GAO has completed other engagements focused exclusively on DOD real property. We excluded the Department of State because most of its real property holdings, including structures, are outside of the United States. We excluded the Department of Homeland Security, an agency that is responsible for more structures than DOE or VA, because some of its structures could be security sensitive and because we determined that our other selected agencies provided a good representation of the different kinds of structures in the FRPP.

To determine the scale and scope of federal structures, we obtained and analyzed FRPP data submissions and other real property data from the five selected agencies; interviewed real property officers at these agencies; visited sites where the agencies had structures; interviewed Office of Management and Budget and General Services Administration staff about the FRPP data for structures; and reviewed FRPC guidance and other documents related to the agencies' real property data and the FRPP database. We obtained the agencies' FRPP data submissions for structures for fiscal year 2012. As we have determined in prior reports on FRPP data, FRPP submissions can be changed only by the agency submitting the data; as a result, we believe that the FRPP submissions obtained from the agencies match the data contained in the FRPP database. In addition, for select data elements and for structures we saw during our site visits, we obtained real property data from the source databases that each agency uses to generate its annual FRPP submissions. We used these source system data to compare what was in the FRPP against the data in the agencies' own databases and to obtain information on the description, replacement value, operational costs, location, and condition of selected structures. We posed questions to senior real property officers at the five agencies about their processes for collecting and calculating data for structures.
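To illustrate the kind of element-by-element comparison just described, the sketch below checks hypothetical FRPP submission records against hypothetical agency source-system records for selected data elements. All field names, asset identifiers, and values are invented for illustration; this is not GAO's or any agency's actual tooling.

```python
# Hypothetical sketch: compare an agency's FRPP submission against its
# source-system records for selected data elements. Illustrative only.

SELECT_ELEMENTS = ["replacement_value", "annual_operating_costs", "location", "condition"]

def compare_records(frpp_records, source_records, elements=SELECT_ELEMENTS):
    """Return a list of discrepancies as (asset_id, element, frpp_value, source_value)."""
    discrepancies = []
    for asset_id, frpp in frpp_records.items():
        source = source_records.get(asset_id)
        if source is None:
            discrepancies.append((asset_id, "missing_in_source", None, None))
            continue
        for element in elements:
            if frpp.get(element) != source.get(element):
                discrepancies.append((asset_id, element, frpp.get(element), source.get(element)))
    return discrepancies

# Example with made-up data for two structures:
frpp = {"S-001": {"replacement_value": 250_000, "condition": "good"},
        "S-002": {"replacement_value": 40_000, "condition": "poor"}}
source = {"S-001": {"replacement_value": 250_000, "condition": "good"},
          "S-002": {"replacement_value": 55_000, "condition": "poor"}}

for asset_id, element, frpp_val, source_val in compare_records(frpp, source):
    print(f"{asset_id}: {element} differs (FRPP={frpp_val}, source={source_val})")
```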
To gather detailed examples of structures and to learn about the processes by which data on such properties are collected or calculated and then submitted to the FRPP database, we visited sites where the five agencies we selected had structures. We selected these sites using information from the agencies' FRPP submissions. Using the most recent FRPP submissions we had at the time for each agency, we selected a non-probability sample of sites. Because this is a non-probability sample, observations made at these site visits do not support generalizations about other properties described in the FRPP database or about the characteristics or limitations of other agencies' real property data for structures. Rather, the observations made during the site visits provided specific, detailed examples of issues that were described by agency officials regarding the way agencies collect and calculate data for structures. We focused on sites clustered around four cities: Washington, D.C.; Chicago, Illinois; Los Angeles, California; and San Francisco, California. This strategy afforded both geographic diversity and balance among our selected agencies while also accommodating time and resource constraints. In selecting sites and buildings in and around these four cities, we took into account the following factors (a simplified sketch of this prioritization follows this discussion):

We prioritized sites that had multiple selected agency sites where a high concentration of structures was present. This allowed us to see more properties in a limited amount of time.

We prioritized the selection of as many different types of structures (as defined in the FRPP) as possible.

We also selected sites with high replacement values, high operations and maintenance costs, exceptionally low replacement values or operations and maintenance costs, those reported to be in good and poor condition, and structures registered as historic.

We visited at least two sites for each selected agency across our four site-visit areas. In all, we selected 24 sites.

Whereas we selected sites based in large part on the numbers and kinds of structures present, the structures we saw at each site depended on additional factors. At some sites, there were too many structures to see them all, given our limited time at each site. In those circumstances, we prioritized structures that were close to one another to see as many structures as we could in the time we had, those structures with high or exceptionally low reported replacement values or high operations and maintenance costs, and different structure types as classified by the agency in FRPP. At several sites, local real property officials identified other structures that we toured and analyzed. Prior to each site visit, we analyzed the FRPP data submissions for the latest year available and developed questions about the data submissions for local property managers. We also contacted the local property managers to answer those questions and to confirm the exact structures we would see at each location. During our site visits, we interviewed local property managers, compared what we observed at each structure with the FRPP data for that structure, and took photographs of the structures. In addition to questions about individual properties, we asked the local officials about the kind of data they collect on the properties, how they collect those data, and how those data differed from the FRPP data we had for the structures at the location.
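As a simplified illustration of the prioritization described above, the sketch below scores invented candidate sites on factors similar to those we considered (concentration of structures, diversity of structure types, and the presence of extreme replacement values). The scoring weights and site data are hypothetical and do not reflect the actual selection process.

```python
# Hypothetical sketch: rank candidate sites on selection factors similar
# to those described above. Weights and site data are invented.

sites = [
    {"name": "Site A", "structure_count": 120, "distinct_types": 9, "has_extreme_values": True},
    {"name": "Site B", "structure_count": 15, "distinct_types": 3, "has_extreme_values": False},
    {"name": "Site C", "structure_count": 60, "distinct_types": 12, "has_extreme_values": True},
]

def site_score(site):
    # Favor dense sites, many structure types, and extreme replacement values.
    return (site["structure_count"] * 0.5
            + site["distinct_types"] * 5
            + (25 if site["has_extreme_values"] else 0))

for site in sorted(sites, key=site_score, reverse=True):
    print(site["name"], round(site_score(site), 1))
```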
To identify the challenges facing federal agencies in managing structures, we analyzed agency property management reports, strategic plans, and FRPP reports, along with statements from agency officials both at agency headquarters and at sites we visited about their challenges. We compared these challenges to those we had identified in our reports about federal real property management challenges for buildings to determine how similar the agencies' challenges in managing buildings were to those in managing structures.

We conducted this performance audit from January 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the named contact above, Keith Cunningham (Assistant Director), Melissa Bodeau, Anthony Costulas, Anne Doré, Kathleen Gilhooly, Greg Hanna, Robert Heilman, Sara Ann Moessbauer, Joshua Ormond, and Sandra Sokol made key contributions to this report.
The federal government's real property portfolio includes land, buildings, and structures. GAO has designated the management of federal real property as high-risk based largely on the management of federal buildings. However, over half of the assets are structures, such as roads, dams, and radio towers. GAO was asked to examine management issues related to structures. This report examines (1) the scale and scope of federally owned or leased structures, (2) how federal agencies track and categorize federal structures, and (3) the extent to which the challenges federal agencies face in managing buildings also apply to structures. GAO analyzed FRPP data on structures managed by federal civilian agencies against federal internal control standards for executive branch agencies and OMB guidelines; visited 24 sites selected to represent a variety of structure types from five civilian federal agencies with high numbers of structures; and interviewed officials from the five agencies, OMB, and GSA about FRPP data collection and how agencies manage their structures.

In 2012, federal agencies reported to the Federal Real Property Council (FRPC)--an organization composed of all real property-holding federal agencies--that they are responsible for operating over 480,000 federally owned structures. Information about these structures is recorded in the FRPC's Federal Real Property Profile (FRPP), the government's comprehensive database that describes the nature, use, and extent of federal real property. About 176,000 of those structures are operated by civilian federal agencies. The federal government manages a wide variety of structures. Some of these are common across agencies, such as roads and parking structures, while some are more specific to agencies' missions, such as historic structures or particle accelerators.

Agencies take different approaches to defining and inventorying structures, making the aggregation of data in the FRPP's database unreliable. Agencies we reviewed defined structures differently, leading to inconsistencies in what assets are included in the FRPP, including counting some building-like facilities as structures. We also found that these agencies counted structures differently, provided inaccurate structure location information, and categorized their structures inconsistently, all of which limit the usefulness of the data on structures in the FRPP. Additionally, the agencies we reviewed submitted incorrect information for key data elements, such as the replacement value, annual operating costs, and condition. General Services Administration (GSA) officials who manage the FRPP said that FRPC chose to provide flexibility in the reporting guidance for data on structures to account for the wide diversity in federal structures, but it also aggregates the data as if they were comparable. Even if these data were useful, FRPC reports very little information on structures, and officials at GSA told us that there is low interest in and demand for this information, creating few incentives to improve data reliability. In prior reports, we have stressed the importance of limiting the number of elements to the vital few that are considered essential for producing data for decision making, in light of the costs of collecting these data. Agencies generally face similar challenges in managing structures as they do in managing buildings.
Officials from all of the selected agencies stated that most challenges centered on prioritizing resources to maintain structures, disposing of excess structures, and ensuring their safety and security.

GAO recommends that OMB, in coordination with the FRPC, develop guidance to improve agencies' internal controls to produce consistent, accurate, and reliable information on their structures. GSA, in coordination with the FRPC, should clarify the definition of structures and assess the feasibility of limiting the data collected on structures submitted to the FRPP. OMB and GSA agreed with the recommendations, and GSA provided an action plan to implement GAO's recommendations.
Congress and the Administration have advocated the use of covert or red team testing in all modes of transportation. Following the terrorist attacks of September 11, 2001, the President signed ATSA into law on November 19, 2001, with the primary goal of strengthening the security of the nation's commercial aviation system. ATSA created TSA within the Department of Transportation (DOT) as the agency responsible for securing all modes of transportation. Among other things, ATSA mandated that TSA assume responsibility for screening passengers and their property, which includes the hiring, training, and testing of the screening workforce. ATSA also mandated that TSA conduct annual proficiency reviews and provide for the operational testing of screening personnel, and that TSA provide remedial training to any screener who fails such tests. In 2002, the President issued The National Strategy for Homeland Security, which supports developing red team tactics to identify vulnerabilities in security measures at the nation's critical infrastructure sectors, including the transportation sector. In 2007, TSA issued the TS-SSP, which outlines its strategy and associated security programs to secure the transportation sector. While the TS-SSP does not address covert testing in aviation, it does state that mass transit and passenger rail operators should develop covert testing exercises. Moreover, the Implementing Recommendations of the 9/11 Commission Act of 2007 requires DHS to develop and implement the National Strategy for Railroad Transportation Security, which is to include prioritized goals, actions, objectives, policies, mechanisms, and schedules for assessing the usefulness of covert testing of railroad security systems. Furthermore, the explanatory statement accompanying Division E of the Consolidated Appropriations Act, 2008 (the DHS Appropriations Act, 2008), directs TSA to be more proactive in red teaming for all modes of transportation. Specifically, the statement directs approximately $6 million of TSA's appropriated funds for red team activities to identify potential vulnerabilities and weaknesses in airports and air cargo facilities, as well as in transit, rail, and ferry systems.

Prior to the creation of TSA, DOT's Federal Aviation Administration (FAA) monitored the performance of airport screeners. FAA created the "red team," as it came to be known, to assess the commercial aviation industry's compliance with FAA security requirements and to test whether U.S. aviation passenger and checked baggage screening systems were able to detect explosives and other threat items. TSA began its covert testing program in September 2002. TSA's covert testing program consists of a nationwide commercial aviation testing program conducted by OI and a local commercial airport testing program implemented by OSO and FSDs at each airport. OI conducts national covert tests of three aspects of aviation security at a commercial airport: (1) passenger checkpoint; (2) checked baggage; and (3) access controls to secure areas and airport perimeters. OI conducts covert tests by having undercover inspectors attempt to pass threat objects, such as guns, knives, and simulated improvised explosive devices (IED), through passenger screening checkpoints and in checked baggage, and attempt to access secure areas of the airport undetected.
OI officials stated that they derived their covert testing protocols and test scenarios from prior FAA red team protocols but updated the threat items used and increased the difficulty of the tests. According to OI officials, they also began conducting tests at airports on a more frequent basis than FAA did. Initially, OI conducted tests at all of the estimated 450 commercial airports nationwide on a 3-year schedule, with the largest and busiest airports being tested each year. TSA also began using threat information to make tests more closely replicate tactics that may be used by terrorists. The number of covert tests that OI conducts during testing at a specific airport varies by the size of the airport. The size of the OI testing teams also varies depending upon the size of the airport being tested, the number of tests that OI plans to conduct, and the number of passenger checkpoints and access points to secure areas at a particular airport. OI testing teams consist of a team leader, who observes the tests and leads post-test reviews with TSOs, and inspectors, who transport threat items through passenger checkpoints and secure airport areas and record test results. Team leaders usually have previous federal law enforcement experience, while inspectors often include program analysts, administrative personnel, and other TSA personnel. Prior to testing, each team leader briefs the team to ensure that everyone understands their role, the type of test to be conducted, and the threat item they will be using. For tests at passenger checkpoints and in checked baggage, OI uses different IED configurations and places these IEDs in various areas of each inspector's body and checked baggage to create different test scenarios. Figure 1 provides an overview of TSA's passenger checkpoint and checked baggage screening operations and equipment.

According to OI officials, on the day of testing, OI typically notifies the airport police about one-half hour, and the local FSD about 5 minutes, before testing begins and instructs them not to notify the TSOs that testing is being conducted. OI officials stated that they provide this notification for security and safety reasons. During passenger checkpoint testing, each team of inspectors carries threat items through the passenger checkpoint. If the TSO identifies the threat item during screening, the inspector identifies himself or herself to the TSO and the test is considered a pass. If the TSO does not identify the threat item, the inspector proceeds to the sterile area of the airport and the test is considered a failure. For each test, inspectors record the steps taken by the TSO during the screening process and the test results, and the team leader assigns any requirements for remedial training as a consequence of a failed test. The specific types of covert tests conducted by TSA at the passenger checkpoint are sensitive security information and cannot be described in this report.

Covert tests of checked baggage are designed to measure the effectiveness of the TSOs' ability to utilize existing checked baggage screening equipment, not to test the effectiveness of the screening equipment. In covert tests of checked baggage screening, an inspector poses as a passenger and checks his or her baggage containing a simulated threat item at the airline ticket counter. The bag is then screened by TSOs using one of two checked baggage screening methods. At airports that have explosive detection systems (EDS), the TSO uses these machines to screen each bag.
At airports that do not have EDS and at airports where certain screening stations do not have EDS, such as curbside check-in stations, the TSOs use an Explosive Trace Detection (ETD) machine to screen checked baggage. During the ETD screening process for both carry-on and checked baggage, TSOs attempt to detect explosives on passengers' baggage by swabbing the target area and submitting the swab to the ETD machine for chemical analysis. If the machine detects an explosive substance, it alarms and produces a readout indicating the specific type of explosive detected. The TSO is then required to resolve the alarm by performing additional screening steps, such as conducting a physical search of the bag or conducting further ETD testing on, and X-raying of, footwear. When testing EDS and ETD screening procedures, OI uses fully assembled objects such as laptop computers, books, or packages. Whether using EDS or ETD, if the TSO fails to identify the threat item, the inspectors immediately identify themselves to stop the checked baggage from being sent for loading onto the aircraft, and the test is considered a failure. If the TSO identifies the threat item, the inspectors also identify themselves and the test is considered a pass. If the OI inspector determines that the test failure was due to the screening equipment not working correctly, the test is considered invalid. OI conducts two types of checked baggage covert tests:

Opaque object: This test is designed to determine if a TSO will identify opaque objects on the X-ray screen and conduct a physical search of the checked bag. During these tests, OI inspectors conceal a threat item that cannot be penetrated by the X-ray and appears on the EDS screen as an opaque object among normal travel objects within checked baggage.

IED in bag: This test is designed to determine if a TSO will identify an IED during a search of the bag and use proper ETD procedures to identify it as a threat. During these tests, OI inspectors conceal a simulated IED within checked baggage. In addition, the IED may be contained within other objects inside of the bag.

OI inspectors also conduct covert tests to determine if they can infiltrate secure areas of the airport, such as jet ways or boarding doors to aircraft. Each U.S. commercial airport is divided into different areas with varying levels of security. Secure areas, security identification display areas (SIDA), and air operations areas (AOA) are not to be accessed by passengers, and typically encompass areas near terminal buildings, baggage loading areas, and other areas that are close to parked aircraft and airport facilities, including air traffic control towers and runways used for landing, taking off, or surface maneuvering. Figure 2 is a diagram of the security areas at a typical commercial airport. If inspectors are able to access secure areas of the airport or are not challenged by airport or airline employees, the test is considered a failure. OI conducts four types of covert tests for airport access controls:

Access to SIDA: During these tests, OI inspectors who are not wearing appropriate identification attempt to penetrate the SIDA through access points, such as boarding gates, employee doors, and other entrances leading to secure areas, to determine if they are challenged by airport or airline personnel.
Access to AOA: During these tests, OI inspectors who are not wearing appropriate identification attempt to penetrate access points leading from public areas to secured areas of the AOA, including vehicle and pedestrian gates through the perimeter fence, cargo areas, and general aviation facilities that provide a direct path to passenger aircraft in secure areas, to determine if they are challenged by airport or airline personnel.

Access to Aircraft: During these tests, OI inspectors who are not wearing appropriate identification or who do not have a valid boarding pass attempt to penetrate access points past the passenger screening checkpoint that lead directly to aircraft, including boarding gates, employee doors, and jet ways, to determine if they are challenged by airport or airline personnel.

SIDA Challenges: During these tests, OI inspectors attempt to walk through secure areas of the airport, such as the tarmac and baggage loading areas, without appropriate identification to determine if they are challenged by airport personnel. If not challenged, the test is considered a failure.

After testing at the airport is complete, team leaders conduct post-test reviews with the TSOs, supervisors, and screening managers involved in the testing. These post-test reviews include a hands-on demonstration of the threat items used during each test and provide an opportunity for TSOs to ask questions about the test. According to OI officials, the purpose of these post-test reviews is to serve as a training tool for TSOs. Following the post-test review, OI officials meet with the airport FSD to discuss the test results and any vulnerabilities identified at the airport. OI also provides the FSD with the names of each TSO required to undergo remedial training. OI usually completes all aspects of its covert tests at an airport within several days. After completing tests at each airport, OI staff document test results on standardized data collection instruments and meet to discuss the results and identify the actions that they will recommend to TSA management to address the vulnerabilities identified by the tests. The airport testing data collected are then entered into a database by OI headquarters staff, who develop reports that summarize the test results and the vulnerabilities identified. These reports are then presented to TSA management, such as the Administrator. OI staff also regularly brief TSA's Administrator and management, such as the Assistant Administrator of OSO, on the results of covert tests. Since 2003, when OI completed its first covert testing report, most of OI's reports have contained specific recommendations aimed at addressing the vulnerabilities identified during covert testing.

In February 2004, OSO authorized FSDs to conduct their own testing of local passenger and checked baggage screening operations at their airports to serve as a training tool for the TSOs and to measure their performance. In these tests, referred to as Screener Training Exercises and Assessments (STEA), FSDs used federal employees, such as TSOs from other local airports and other federal law enforcement officers, and were given discretion to determine the number of tests conducted at their airports, the manner in which the tests were conducted, and the type of tests conducted. OSO considered STEA a tool for training TSOs in detecting threat items and issued modular bomb kits (MBS II kits) containing simulated IEDs to be used during local testing.
During STEA tests, staff placed simulated IEDs in passenger and checked baggage to determine if they would be detected by TSOs. Unlike OI's national covert tests, STEA tests did not include tests of airport access controls. TSOs that failed STEA tests were required to undergo remedial training. In May 2005, we reported that TSA officials stated that they had not yet begun to use data from STEA testing to identify training and performance needs for TSOs because of difficulties in ensuring that local covert testing was implemented consistently nationwide. For example, because FSDs had discretion regarding the number of tests conducted, some airports conducted STEA tests regularly, while others rarely conducted tests. In addition, we previously reported that FSDs had difficulty in finding enough staff to help conduct STEA tests on a consistent basis.

OSO officials recognized the limitations of the STEA program and, as a result, began to restructure the program in September 2006. This local covert testing program was renamed the Aviation Screening Assessment Program (ASAP). ASAP is designed to test the performance of passenger and checked baggage screening systems and identify security vulnerabilities at each airport. In April 2007, OSO began its initial 6-month cycle of ASAP, in which 1,600 tests were conducted in each grouping of airports—Category X (27 airports), Category I (55 airports), and Category II through IV (369 airports). OSO compliance inspectors at each airport conduct the tests. Specific test requirements are distributed to FSDs before the start of each 6-month cycle. These test requirements stipulate the percentage of tests to conduct during peak and non-peak passenger screening periods; the percentage of basic, intermediate, or advanced tests to be conducted; and the specific types of threat items that should be used during each type of test, such as IEDs or weapons. Following each test, inspectors are to brief the TSOs, supervisors, and screening managers involved in the tests on the results and notify the FSD of the results. With the first cycle of tests initiated in April 2007, TSA officials plan for any recommendations resulting from ASAP tests to be submitted to OSO management and other offices within TSA that need to know the test results. Although the testing requirements, including the frequency and types of tests, will not change during the initial 6-month cycle in order to preserve the validity of the test results, TSA officials plan to analyze the results of the tests and evaluate the need to revise the structure of the tests or the type of threat items used after testing is complete. According to OSO officials, the first cycle of ASAP tests is complete, but the results are still being analyzed by TSA to determine the overall findings from the tests.

TSA's national and local aviation covert testing programs contribute to TSA's broader risk management approach for securing the transportation sector by applying principles of risk assessment to identify vulnerabilities in commercial aviation. Risk management is a systematic and analytical process to consider the likelihood that a threat will endanger an asset, individual, or function, and to identify actions to reduce the risk and mitigate the consequences of an attack. Risk management, as applied in the homeland security context, can help federal decision makers determine where and how to invest limited resources within and among the various modes of transportation.
In recent years, the President, through Homeland Security Presidential Directives (HSPD), and laws such as the Intelligence Reform and Terrorism Prevention Act of 2004 have provided that federal agencies with homeland security responsibilities should apply risk-based principles to inform their decision making regarding allocating limited resources and prioritizing security activities. The 9/11 Commission recommended that the U.S. government identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective ways of doing so, and then develop a plan, budget, and funding to implement the effort. In 2002, the President issued The National Strategy for Homeland Security, which instructs the federal government to allocate resources in a balanced way to manage risk in our border and transportation security systems while ensuring the expedient flow of goods, services, and people. Further, the Secretary of DHS has made risk-based decision making a cornerstone of departmental policy. In May 2007, TSA issued the TS-SSP and supporting plans for each mode of transportation that establish a system-based risk management approach for securing the transportation sector. We have previously reported that a risk management approach can help to prioritize and focus the programs designed to combat terrorism.

A risk assessment, one component of a risk management approach, consists of three primary elements: a vulnerability assessment, a threat assessment, and a criticality assessment. A vulnerability assessment is a process that identifies weaknesses in physical structures, personnel protection systems, processes, or other areas that may be exploited by terrorists, and may suggest options to eliminate or mitigate those weaknesses. TSA uses both national and local aviation covert testing as a method to identify and mitigate security vulnerabilities in the aviation sector. A threat assessment identifies and evaluates threats based on various factors, including capability and intentions as well as the lethality of an attack. A criticality assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes require higher or special protection from attack.

TSA has designed and implemented risk-based national and local covert testing programs to achieve its goals of identifying vulnerabilities in and measuring the performance of passenger checkpoint and checked baggage screening systems and airport perimeters and access controls, and has begun to determine the extent to which covert testing will be used to identify vulnerabilities and measure the effectiveness of security practices related to non-aviation modes of transportation. OI used information on terrorist threats to design and implement its national covert tests and determined at which airports to conduct tests based on analyses of risks. However, OI inspectors did not systematically record specific causes for test failures related to TSOs, procedures, or screening equipment that did not work properly. OI also did not systematically collect and analyze information on effective screening practices that may contribute to TSOs' ability to detect threat items.
Without systematically recording reasons for test failures, such as failures caused by screening equipment not working properly, as well as reasons for test passes, TSA is limited in its ability to mitigate identified vulnerabilities. TSA recently redesigned its local covert testing program to address limitations in its previous program. The new program, ASAP, should provide TSA with a measure of the performance of passenger and checked baggage screening systems and help to identify security vulnerabilities. Furthermore, TSA has begun to determine the extent to which covert testing will be used to identify vulnerabilities and measure the effectiveness of security practices in non-aviation modes of transportation. While TSA coordinates with domestic and foreign organizations regarding transportation security efforts, it does not have a systematic process in place to coordinate with these organizations regarding covert testing in non-aviation settings, and opportunities exist for TSA to learn from these organizations' covert testing efforts.

OI uses threat assessments and intelligence information to design and implement national covert tests that meet its goals of identifying vulnerabilities in passenger checkpoint and checked baggage screening systems, and airport perimeters and access controls. While OI currently focuses its covert tests on these three areas of aviation security, it has recently begun to establish procedures for the testing of air cargo facilities. According to OI officials, as of March 2008, OI had not yet conducted any tests of air cargo. In designing its covert tests, OI works with DHS's Transportation Security Laboratory to create threat items to be used during covert tests. OI also uses threat information to replicate tactics that may be used by terrorists. The tactics that OI uses are all designed to test the capabilities of passenger checkpoint and checked baggage screening systems to identify where vulnerabilities exist. The process OI uses to select which airports to test has evolved since covert testing began in September 2002 to focus more on those airports determined to be at greater risk of a terrorist attack. Initially, OI's goals were to conduct covert tests at all commercial airports, with tests being conducted more frequently at the airports with the largest number of passenger boardings than at smaller airports with fewer flights. In August 2005, when TSA began focusing on the most catastrophic threats, OI changed its testing strategy to utilize a risk-based approach to mitigate those threats.

OI inspectors record information on the results of national covert tests on data collection instruments after each test is conducted, including the extent to which TSOs properly followed TSA screening procedures and whether the test was passed or failed. After airport testing is complete, OI headquarters analysts input the covert test results into a centralized database. While analysts input whether the test was a pass or a fail and inspectors' observations regarding some tests, they do not systematically capture OI's assessment of the cause of the test failure or include that information in the database. Test failures could be caused by (1) TSOs not properly following existing TSA screening procedures, (2) screening procedures that are not clear to TSOs, (3) screening procedures that lack sufficient guidance to enable TSOs to identify threat items, and (4) screening equipment that does not work properly.
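The four causes enumerated above lend themselves to structured recording. The sketch below shows one hypothetical way a covert-test record could capture a pass/fail result together with a cause code, which is the kind of systematic capture the report finds missing; the field names, categories, and example values are invented for illustration and do not represent OI's actual database.

```python
# Hypothetical sketch of a covert-test record that captures the failure
# cause alongside the result. Illustrative only; not OI's actual schema.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FailureCause(Enum):
    TSO_DID_NOT_FOLLOW_PROCEDURES = 1  # (1) TSO did not follow existing procedures
    PROCEDURES_UNCLEAR = 2             # (2) procedures not clear to TSOs
    PROCEDURES_INSUFFICIENT = 3        # (3) procedures lack sufficient guidance
    EQUIPMENT_MALFUNCTION = 4          # (4) screening equipment did not work properly

@dataclass
class CovertTestRecord:
    airport: str
    test_type: str
    passed: bool
    failure_cause: Optional[FailureCause] = None  # recorded only for failures
    effective_practice: Optional[str] = None      # e.g., TSO-supervisor communication

# Example: an invented failed checkpoint test attributed to unclear procedures.
record = CovertTestRecord(
    airport="XYZ",
    test_type="passenger checkpoint / simulated IED",
    passed=False,
    failure_cause=FailureCause.PROCEDURES_UNCLEAR,
)
print(record)
```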
Moreover, when inspectors determine the cause of a covert test failure to be screening equipment, such as the walk-through metal detector, the hand-held metal detector, or the ETD machine not alarming in response to a threat item, OI considers these tests to be invalid. While OI officials stated that they report instances when equipment may not be working properly to the airport FSD and officials from the Transportation Security Laboratory, they do not record in the covert testing database that equipment caused the failure. TSA management may find this information useful in identifying vulnerabilities in the aviation system that relate to screening equipment not working properly. OI officials stated that they do not record information on equipment failures because there is always a possibility that the simulated threat item was not designed properly and therefore should not have set off the alarm. Further, they stated that DHS's Transportation Security Laboratory is responsible for ensuring that screening equipment is working properly. However, the Laboratory does not test screening equipment at airports in an operational environment. Furthermore, according to OI officials, identifying a single cause for a test failure may be difficult since covert testing failures can be caused by multiple factors. However, in our discussions with OI officials about selected individual test results, inspectors were able, in most of these cases, to identify the cause they believed contributed most to the test failure.

According to the Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others in a form and within a time frame that enables them to carry out their internal control and other responsibilities. The Standards further call for pertinent information to be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. By not systematically inputting the specific causes for test failures into its database, including failures due to equipment, OI may be limiting its ability to identify trends that affect screening performance across the aviation security systems tested.

In addition to not identifying the reasons that inspectors believed caused the test failures, OI officials do not systematically record information on screening practices that may contribute to covert test passes. However, OI inspectors occasionally captured information on effective practices used by TSOs to detect threat items during covert tests in the data collection instruments used during these tests. Further, during covert tests that we observed, OI inspectors routinely discussed with us those practices used during certain tests that they viewed as effective, such as effective communication between TSOs and supervisors in identifying threat items. In 2006, OSO officials requested a TSA internal review of differences in checkpoint screening operations at three airports to identify whether the airports employed certain practices that contributed to their ability to detect threat items during covert tests, among other things. Between June and October 2006, OI's Internal Reviews Division (IRD) reviewed passenger checkpoint covert test results for each airport, observed airport operations, interviewed TSA personnel, and reviewed documents and information relevant to checkpoint operations. IRD's review identified a number of key factors that may contribute to an airport's ability to detect threat items.
While IRD conducted this one-time review of effective screening practices that may have led to higher test pass rates, OI does not systematically collect information on those practices that may lead to test passes. As discussed earlier in this report, the Standards for Internal Control in the Federal Government state the need for pertinent information to be identified and captured to permit managers to perform their duties efficiently. Without collecting information on effective screening practices that, in the inspectors' views, may lead to test passes, TSA managers are limited in their ability to identify measures that could help to improve screening performance across the aviation security system.

In April 2007, TSA initiated its local covert testing program, the Aviation Screening Assessment Program (ASAP). TSA is planning to use the results of ASAP as a statistical measure of the performance of passenger checkpoint and checked baggage screening systems, in addition to using them as a tool to identify security vulnerabilities. TSA's ASAP guidance applies a standardized methodology for the types and frequency of covert tests to be conducted in order to provide a national statistical sample. If implemented as planned, ASAP should provide TSA with a measure of the performance of passenger and checked baggage screening systems and help identify security vulnerabilities. According to OSO officials, the first cycle of ASAP tests was completed, but the results are still being internally analyzed by TSA to determine the overall findings from the tests. As a result, it is too soon to determine whether ASAP will meet its goals of measuring the performance of passenger and checked baggage screening systems and identifying vulnerabilities.

Similar to OI's national covert testing program, OSO applies elements of risk in designing and implementing ASAP tests. Unlike the national covert tests, the ASAP program does not use elements of a risk-based approach to determine the location and frequency of the tests because, according to TSA officials, in order to establish a national baseline against which TSA can measure performance, all airports must be tested consistently and with the same types of tests. OSO officials plan to analyze the results of the tests and evaluate the need to revise the tests or the type of threat items used after the first and second testing cycles and annually thereafter. Furthermore, OSO officials stated that they plan to assess the data, including the types of vulnerabilities identified and the performance of the TSOs in detecting threat items, and develop recommendations for mitigating vulnerabilities and improving screening performance. Officials stated that OSO also plans to conduct follow-up testing to determine whether vulnerabilities that were previously identified have been addressed and whether recommendations made were effective. According to TSA's ASAP guidance, individuals conducting the ASAP tests will be required to identify specific causes for all test failures. In addition to identifying test failures attributed to TSOs, such as a TSO not being attentive to duties or not following TSA screening procedures, individuals conducting ASAP tests are also required to identify and record causes for failures related to screening procedures that TSOs said were not clear or lacked sufficient detail to enable them to detect threat items, and to screening equipment.
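Because ASAP is intended to yield a national statistical measure of screening performance, a pass rate with an associated margin of error is one natural summary. The sketch below computes such an estimate for invented test counts, assuming a simple random sample; ASAP's actual sampling design and any real pass rates may differ, so both the numbers and the simple-random-sample assumption are purely illustrative.

```python
# Hypothetical sketch: estimate a national pass rate and 95 percent
# margin of error from covert-test counts, assuming a simple random
# sample. All numbers are invented for illustration.

import math

def pass_rate_estimate(passes: int, tests: int, z: float = 1.96):
    """Return (estimated pass rate, margin of error)."""
    p = passes / tests
    margin = z * math.sqrt(p * (1 - p) / tests)
    return p, margin

# Example: an invented 6-month cycle of 1,600 tests in one airport category.
rate, moe = pass_rate_estimate(passes=1120, tests=1600)
print(f"Estimated pass rate: {rate:.1%} +/- {moe:.1%}")
```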
OSO officials further stated that they plan to develop performance measures for the ASAP tests after the results of the first 6-month cycle of tests are evaluated. However, officials stated that performance measures for the more difficult category of tests will not be developed because these tests are designed to challenge the aviation security system and the pass rates are expected to be low. Furthermore, TSA officials stated that the results of ASAP tests will not be used to measure the performance of individual TSOs, FSDs, or airports, but rather to measure the performance of the passenger checkpoint and checked baggage screening system. TSA officials stated that there will not be a sufficient number of ASAP tests to measure individual TSO, FSD, or airport performance. We previously reported that TSA had not established performance measures for its national covert testing program and that doing so would enable TSA to focus its improvement efforts on areas determined to be most critical, as 100 percent detection during tests may not be attainable. While TSA has chosen not to establish performance measures for the national covert testing program, as stated above, it plans to develop such measures only for the less difficult ASAP tests.

Since the initiation of TSA's covert testing program in 2002, the agency has focused on testing commercial aviation passenger checkpoints, checked baggage, and airport perimeters and access controls. However, TSA is in the early stages of determining the extent to which covert testing will be used to identify vulnerabilities and measure the effectiveness of security practices in non-aviation modes of transportation. In addition, TSA officials stated that it would be difficult to conduct covert tests in non-aviation modes because these modes typically do not have established security screening procedures to test, such as those in place at airports. Specifically, passengers and their baggage are generally not physically screened through metal detectors and X-rays prior to boarding trains or ferries as they are prior to boarding a commercial aircraft. OI officials also stated that they do not currently have the resources necessary to conduct covert tests in both aviation and non-aviation modes of transportation.

Although OI does not regularly conduct covert tests in non-aviation modes of transportation, it has conducted tests during three TSA pilot programs designed to test the feasibility of implementing airport-style screening in non-aviation modes of transportation, including mass transit, passenger rail, and maritime ferry facilities. In 2004, TSA conducted a Transit and Rail Inspection pilot program in which passenger and baggage screening procedures were tested on select railways. TSA also tested similar screening procedures at several bus stations during the Bus Explosives Screening Technology pilot in 2005. In addition, TSA has been testing screening equipment on ferries in the maritime mode through the Secure Automated Inspection Lanes program. According to OI officials, during these three pilot programs, OI conducted covert testing to determine if it could pass threat objects through the piloted passenger screening procedures and equipment. However, these tests were conducted only on a trial basis during these pilot programs.
While OI has not developed plans or procedures for testing in non-aviation modes of transportation, the office has begun to explore the types of covert tests that it might conduct if it receives additional resources to test in these modes. In addition to OI, TSA's Office of Transportation Sector Network Management (TSNM) may have a role in any covert tests that are conducted in non-aviation modes of transportation. TSNM is responsible for securing the nation's intermodal transportation system and has specific divisions responsible for each mode of transportation—mass transit, maritime, highway and motor carriers, freight rail, pipelines, commercial airports, and commercial airlines. TSNM is also responsible for TSA's efforts to coordinate with operators in all modes of transportation. A TSNM official stated that TSNM has only begun to consider using covert testing in mass transit. In April 2007, TSA coordinated with the Los Angeles County Metropolitan Transportation Authority, Amtrak, and the Los Angeles Sheriff's Department during a covert test of the effectiveness of security measures at Los Angeles' Union Station. During the test, several individuals carried threat items, such as simulated IEDs, into the rail system to determine if K-9 patrols, random bag checks, and other random procedures could detect these items. The official from TSNM's mass transit office stated that the agency is incorporating the use of covert testing as a component of the mass transit and passenger rail national exercise program being developed pursuant to the Implementing Recommendations of the 9/11 Commission Act of 2007. However, TSNM has not developed a strategy or plan for how covert testing will be incorporated into these various programs. The TSNM official further stated that he was not aware of other mass transit or passenger rail operators that are currently conducting or planning covert testing of their systems. Furthermore, TSNM does not have a systematic process in place to coordinate with domestic or foreign transportation organizations to learn from their covert testing experiences.

The use of covert or red team testing in non-aviation modes of transportation has been supported in law. The Implementing Recommendations of the 9/11 Commission Act of 2007 directs DHS to develop and implement the National Strategy for Railroad Transportation Security, which is to include prioritized goals, actions, objectives, policies, mechanisms, and schedules for assessing, among other things, the usefulness of covert testing of railroad security systems. Furthermore, the explanatory statement accompanying the Homeland Security Appropriations Act, 2008, directed TSA to be more proactive in red teaming for airports and air cargo facilities, as well as in transit, rail, and ferry systems. Specifically, the statement directed approximately $6 million of TSA's appropriated amount for red team activities to identify vulnerabilities in airports and air cargo facilities, as well as in transit, rail, and ferry systems. Regarding covert testing of non-aviation modes of transportation, the report of the House of Representatives Appropriations Committee, which accompanies its fiscal year 2008 proposal for DHS appropriations, directed TSA to randomly conduct red team operations at rail, transit, bus, and ferry facilities that receive federal grant funds to ensure that vulnerabilities are identified and corrected. DHS has also identified covert, or red team, testing as a priority for the Department.
The President’s July 2002 National Strategy for Homeland Security identified that DHS, working with the intelligence community, should use red team or covert tactics to help identify security vulnerabilities in the nation’s critical infrastructure, which includes the transportation sector. The strategy further identifies that red team techniques will help decision makers view vulnerabilities from the terrorists’ perspective and help to develop security measures to address these security gaps. In addition, TSA’s May 2007 TS-SSP identified that transit agencies should develop meaningful exercises, including covert testing, that test the effectiveness of their response capabilities and coordination with first responders. However, the TS-SSP does not provide any details on the type of covert testing that transit agencies should conduct and does not identify that TSA itself should conduct covert testing in non-aviation modes of transportation. Domestic and foreign transportation organizations and DHS component agencies that we interviewed conduct covert testing to identify and mitigate vulnerabilities in non-aviation settings that lack the standardized passenger screening procedures found in the commercial aviation sector and measure the effectiveness of security measures. Our previous work on passenger rail security identified foreign rail systems that use such covert testing to keep employees alert about their security responsibilities. One of these foreign organizations—the United Kingdom Department for Transport’s Transport Security and Contingencies Directorate (TRANSEC)—conducts covert testing of passenger rail and seaports in addition to aviation facilities to identify vulnerabilities related to people, security processes, and technologies. According to a TRANSEC official, TRANSEC’s non-aviation covert testing includes testing of the nation’s passenger rail system and the United Kingdom’s side of the channel tunnel between the United Kingdom and France. TRANSEC conducts a number of covert tests to determine whether employees are following security procedures established by TRANSEC or the rail operator, whether processes in place assist employees in identifying threat items, and whether screening equipment works properly. A TRANSEC official responsible for the agency’s covert testing program stated that these tests are carried out on a regular basis and are beneficial because, as well as providing objective data on the effectiveness of people and processes, they encourage staff to be vigilant with respect to security. In our September 2005 report on passenger rail security, we recommended that TSA evaluate the potential benefits and applicability—as risk analyses warrant and as opportunities permit—of implementing covert testing processes to evaluate the effectiveness of rail system security personnel. Like TRANSEC in the United Kingdom, TSA has existing security directives that must be followed by passenger rail operators that could be tested. TSA generally agreed with this recommendation. In responding to the recommendation, TSA officials stated that the agency regularly interacts and communicates with its security counterparts in foreign countries to share best practices regarding passenger rail and transit security and will continue to do so in the future. TSA officials further stated that the agency has representatives stationed overseas at U.S. embassies that are knowledgeable about security issues across all modes of transportation. 
While TSA coordinates with domestic and foreign organizations regarding transportation security efforts, it does not have a systematic process in place to coordinate with these organizations regarding covert testing in non-aviation modes of transportation, and opportunities exist for TSA to learn from these organizations' covert testing efforts. In the United States, Amtrak has conducted covert tests to identify and mitigate vulnerabilities in its passenger rail system. Amtrak's Office of Inspector General has conducted covert tests of intercity passenger rail systems to identify vulnerabilities in the system related to security personnel and Amtrak infrastructure. The results from these tests were used to develop security priorities that are currently being implemented by Amtrak. According to an Amtrak official, as the security posture of the organization matures, the covert testing program will shift from identifying vulnerabilities to assessing the performance of existing rail security measures.

Transportation industry associations with which we spoke, representing various non-aviation modes of transportation, supported the use of covert testing as a means to identify security vulnerabilities and to test existing security measures. Officials from the Association of American Railroads (AAR), which represents U.S. passenger and freight railroads, and the American Public Transportation Association (APTA), which represents the U.S. transit industry, stated that covert testing in the passenger rail and transit industries would help to identify and mitigate security vulnerabilities and increase employee awareness of established security procedures. AAR and APTA officials stated that covert testing might include placing bags and unattended items throughout a rail station or system to see if employees or law enforcement personnel respond appropriately and in accordance with security procedures. AAR and APTA officials further stated that any testing conducted by TSA would require close coordination with rail operators to determine what should be tested, the testing procedures to be used, and the practicality of such testing.

Within DHS, U.S. Customs and Border Protection (CBP) also conducts covert testing at land, sea, and air ports of entry in the United States to test and evaluate CBP's capabilities to detect and prevent terrorists and illicit radioactive material from entering the United States. According to CBP officials, the purpose of CBP's covert testing program is to identify potential technological vulnerabilities and procedural weaknesses related to the screening and detection of passengers and containers entering the United States with illicit radioactive material, and to assess CBP officers' ability to identify potential threats. As of June 2008, CBP had tested and evaluated two land border crossings on their capabilities to detect and prevent terrorists and illicit radioactive material from entering the United States. In addition, CBP covertly and overtly evaluated the nation's 22 busiest seaports for radiation detection and the effectiveness of the non-intrusive imaging radiation equipment deployed at the seaports. CBP officials also stated that the agency is planning to expand testing to address overseas ports that process cargo bound for the United States.
In addition to CBP, the DHS Domestic Nuclear Detection Office (DNDO) conducts red team testing to measure the performance of and identify vulnerabilities in equipment and procedures used to detect nuclear and radiological threats in the United States and around the world. According to DNDO officials, the agency uses the results of red team tests to help mitigate security vulnerabilities, such as identifying nuclear detection equipment that is not working correctly. DNDO also uses red team testing to determine whether unclassified information exists in open sources, such as on the internet, that could potentially be used by terrorists to exploit vulnerabilities in nuclear detection systems. DNDO’s program, according to its officials, provides a means to assess vulnerabilities that an adversary is likely to exploit, and to make recommendations to either implement or improve security procedures. TSA’s national aviation covert testing program has identified vulnerabilities in select aspects of the commercial aviation security system at airports of all sizes; however, the agency is not fully using the results of these tests to mitigate identified vulnerabilities. The specific results of these tests are classified and are presented in our classified May 2008 report. Covert test failures can be caused by various factors, including TSOs not properly following TSA procedures when screening passengers, screening equipment that does not detect a threat item, or TSA screening procedures that do not provide sufficient detail to enable TSOs to identify the threat item. Senior TSA officials, including TSA’s Administrator, are routinely briefed on the results of covert tests and provided with OI reports that describe the vulnerabilities identified by these tests and recommendations to correct identified vulnerabilities. However, OSO lacks a systematic process to ensure that OI’s recommendations are considered, and does not systematically document its rationale for why it did or did not implement OI’s recommendations. OSO and OI also do not have a process in place to assess, through follow-up national or local covert tests, whether implemented corrective actions mitigated the identified vulnerabilities and whether covert test results improved. According to OSO officials, TSA has other methods in place to identify whether corrective actions or other changes to the system are effective; however, officials did not provide specific information regarding these methods. Moreover, in those cases where OSO took no action to address OI’s recommendations, it did not systematically document its rationale for taking no action. In the absence of a systematic process for considering OI’s recommendations, documenting its decision-making process, and evaluating whether corrective actions mitigated identified vulnerabilities, TSA is limited in its ability to use covert testing results to improve the security of the commercial aviation system. OSO senior leadership stated that opportunities exist to improve the agency’s processes in this area. Between September 2002 and June 2007, OI conducted more than 20,000 covert tests of passenger checkpoints, checked baggage screening systems, and airport perimeters and access control points collectively at every commercial airport in the United States regulated by TSA. The results of these tests identified vulnerabilities in select aspects of the commercial aviation security system at airports of all sizes. 
While the specific results of these tests and the vulnerabilities they identified are classified, covert test failures can be caused by multiple factors, including TSOs not properly following TSA procedures when screening passengers, screening equipment that does not detect a threat item, or TSA screening procedures that do not provide sufficient detail to enable TSOs to identify the threat item. TSA cannot generalize covert test results either to the airports where the tests were conducted or to airports nationwide because the tests were not conducted using the principles of probability sampling. For example, TSA did not randomly select times at which tests were conducted, nor did it randomly select passenger screening checkpoints within the airports. Therefore, each airport’s test results represent a snapshot of the effectiveness of passenger checkpoint screening, checked baggage screening, and airport access control systems, and should not be considered a measurement of any one airport’s performance or any individual TSO’s performance in detecting threat objects. Although the results of the covert tests cannot be generalized to all airports, they can be used to identify vulnerabilities in the aviation security system. TSA officials stated that they do not want airports to achieve a 100 percent pass rate during covert tests because they believe that high pass rates would indicate that covert tests were too easy and therefore were not an effective tool to identify vulnerabilities in the system. After completing its covert tests, OI provides written reports and briefings on the test results to senior TSA management, including TSA’s Administrator, Assistant Administrator of OSO, and area FSDs. In these reports and briefings, OI officials provide TSA management with the results of covert tests, describe the security vulnerabilities identified during the tests, and present recommendations to OSO that OI believes will mitigate the identified vulnerabilities. TSA’s Administrator and senior OSO officials stated that they consider the aviation security system vulnerabilities that OI presents in its reports and briefings as well as the recommendations made. However, OSO officials we spoke with stated that they do not have a systematic process in place to ensure that all of OI’s recommendations are considered or to document their rationale for implementing or not implementing these recommendations. Furthermore, TSA does not have a process in place to assess whether corrective actions taken in response to OI’s recommendations have mitigated identified vulnerabilities. Specifically, in those cases where corrective actions were taken to address OI’s recommendations, neither OSO nor OI conducted follow-up national or local covert tests to determine if the actions taken were effective. For example, in cases where OI determined that additional TSO training was needed and OSO implemented such training, neither OSO nor OI conducted follow-up national or local covert testing to determine whether the additional training helped to mitigate the identified vulnerability. According to OSO officials, TSA has other methods in place to identify whether corrective actions or other changes are effective; however, officials did not provide specific information regarding these methods. Standards for Internal Control in the Federal Government require that internal controls be designed to ensure that ongoing monitoring occurs during the course of normal operations. 
Specifically, internal controls direct managers to (1) promptly evaluate and resolve findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies’ operations, (2) determine proper actions in response to findings and recommendations from audits and reviews, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. The standards further state that the resolution process begins when audit or other review results are reported to management, and is completed only after action has been taken that (1) corrects identified deficiencies, (2) produces improvements, or (3) demonstrates the findings and recommendations do not warrant management action. In the absence of a systematic process for considering and resolving the findings and recommendations from OI’s covert tests and ensuring that the effectiveness of actions taken to address these recommendations is evaluated, TSA management is limited in its ability to mitigate identified vulnerabilities to strengthen the aviation security system. While neither OSO nor OI has a systematic process for tracking the status of OI covert testing recommendations, at our request, OSO officials provided information indicating what actions, if any, were taken to address OI’s recommendations. From March 2003 to June 2007, OI made 43 recommendations to OSO designed to mitigate vulnerabilities identified by national covert tests. To date, OSO has taken actions to implement 25 of these recommendations. For the remaining 18 of OI’s 43 recommendations, OSO either took no action to address the recommendation, or it is unclear how the action it took addressed the recommendation. OI did not make any recommendations to OSO related to screening equipment. The specific vulnerabilities identified by OI during covert tests and the specific recommendations made, as well as corrective actions taken by OSO, are classified. TSA has developed a risk-based covert testing strategy to identify vulnerabilities and measure the performance of select aspects of the aviation security system. OI’s national covert testing program is designed and implemented using elements of a risk-based approach, including using information on terrorist threats to design simulated threat items and tactics. However, this program could be strengthened by ensuring that all of the information from the tests conducted is used to help identify and mitigate security vulnerabilities. For example, without a process for recording and analyzing the specific causes of all national covert test failures, including TSOs not properly following TSA’s existing screening procedures, procedures that are unclear to TSOs, or screening equipment that is not working properly, TSA is limited in its ability to identify specific areas for improvement, such as screening equipment that may be in need of repair or is not working correctly. Moreover, without collecting and analyzing information on effective practices used at airports that performed particularly well on national covert tests, TSA may be missing opportunities to improve TSO performance across the commercial aviation security system. TSA has only recently begun to determine the extent to which covert testing may be used to identify vulnerabilities and measure the effectiveness of security practices in non-aviation modes of transportation, should it receive additional resources to test in these modes. 
Nevertheless, several transportation industry stakeholders can provide useful information on how they currently conduct covert tests in non-aviation settings, and systematically coordinating with these organizations could prove useful for TSA. National aviation covert tests have identified vulnerabilities in the commercial aviation security system. However, TSA could better use the covert testing program to mitigate these vulnerabilities by promptly evaluating and responding to OI’s findings and recommendations. We recognize that TSA must balance a number of competing interests when considering whether to make changes to TSO training, screening procedures, and screening equipment within the commercial aviation security system, including cost and customer service, in addition to security concerns. We further recognize that, in some cases, it may not be feasible or appropriate to implement all of OI’s recommendations. However, without a systematic process in place to consider OI’s recommendations, evaluate whether corrective action is needed to mitigate identified vulnerabilities, and evaluate whether the corrective action effectively addressed the vulnerability, OSO is limited in the extent to which it can use the results of covert tests to improve the security of the commercial aviation system. To help ensure that the results of covert tests are more fully used to mitigate vulnerabilities identified in the transportation security system, we recommended in our May 2008 classified report that the Assistant Secretary of Homeland Security for TSA take the following five actions:

- Require OI inspectors to document the specific causes of all national covert testing failures—including documenting failures related to TSOs, screening procedures, and equipment—in the covert testing database to help TSA better identify areas for improvement, such as additional TSO training or revisions to screening procedures.

- Develop a process for collecting, analyzing, and disseminating information on practices in place at those airports that perform well during national and local covert tests in order to assist TSA managers in improving the effectiveness of checkpoint screening operations.

- As TSA explores the use of covert testing in non-aviation modes of transportation, develop a process to systematically coordinate with domestic and foreign transportation organizations that already conduct these tests to learn from their experiences.

- Develop a systematic process to ensure that OSO considers all recommendations made by OI in a timely manner as a result of covert tests, and document its rationale for either taking or not taking action to address these recommendations.

- Require OSO to develop a process for evaluating whether the action taken to implement OI’s recommendations mitigated the vulnerability identified during covert tests, such as using follow-up national or local covert tests to determine if these actions were effective.

We provided a draft of this report to DHS for review and comment. On April 24, 2008, we received written comments on the draft report, which are reproduced in full in appendix II. DHS and TSA concurred with the findings and recommendations, and stated that the report will be useful in strengthening TSA’s covert testing programs. In addition, TSA provided technical comments, which we incorporated as appropriate. 
Regarding our recommendation that OI document the specific causes of all national covert testing failures related to TSOs, screening procedures, and equipment in the covert testing database, DHS stated that OI plans to expand the covert testing database to record all causes of test failures. DHS further stated that the specific causes of all OI covert testing failures are documented in data collection instruments used during covert tests and within a comment field in the covert testing database when the cause can be determined. However, TSA acknowledged that covert test failures caused by screening equipment not working properly are not recorded in the database in a systematic manner. Documenting test failures caused by equipment should help OI better analyze the specific causes of all national covert testing failures and assist TSA management in identifying corrective actions to mitigate identified vulnerabilities. Concerning our recommendation that OI develop a process for collecting, analyzing, and disseminating information on practices in place at those airports that perform well during national and local covert tests in order to assist TSA managers in improving the effectiveness of checkpoint screening operations, DHS stated that it recognizes the value in identifying factors that may lead to improved screening performance. TSA officials stated that, while OI or Aviation Screening Assessment Program (ASAP) test results can be used to establish a national baseline for screening performance, the results are not statistically representative of individual airports. As a result, additional assessments would be required to provide a statistically valid measure for individual airports. According to DHS, OI plans to develop a more formal process for collecting and analyzing test results to identify best practices that may lead to test passes. Officials stated that when specific screening practices indicate a positive effect on screening performance, TSA plans to share and institutionalize best practices in the form of management advisories to appropriate TSA managers. Developing a more formal process for collecting and analyzing test results to identify best practices that may lead to test passes should address the intent of this recommendation. In response to our recommendation that TSA develop a process to systematically coordinate with domestic and foreign transportation organizations as the agency explores the use of covert testing in non-aviation modes of transportation to learn from their experiences, DHS stated that it is taking a number of actions. Specifically, according to DHS, TSNM has worked closely with transit agencies and internal TSA covert testing experts during red team testing exercises and is currently exploring programs in which covert testing may be used to evaluate the effectiveness of security measures. For example, TSNM is considering incorporating covert testing as a part of its Intermodal Security Training and Exercise Program. While considering the use of covert testing in its programs should help TSA evaluate the effectiveness of security measures, it is also important that TSA establish a systematic process for coordinating with domestic and foreign organizations that already conduct testing in non-aviation modes of transportation to learn from their experiences. 
DHS further stated that it plans to take action to address our recommendation that the agency develop a systematic process to ensure that OSO considers all recommendations made by OI as a result of covert tests in a timely manner, and documents its rationale for either taking or not taking action to address these recommendations. Specifically, DHS stated that OSO is coordinating with OI to develop a directive requiring that OI’s covert testing recommendations be formally reviewed and approved by TSA management, and OSO is establishing a database to track all OI recommendations and determine what action, if any, has been taken to address each recommendation. Taking these steps should address the intent of this recommendation and help TSA to more systematically record whether OI’s covert testing recommendations have been addressed. Concerning our recommendation that OSO develop a process to evaluate whether the action taken to implement OI’s recommendations mitigated the vulnerability identified during covert tests, such as using follow-up national or local covert tests or information collected through other methods to determine if these actions were effective, DHS stated that, in 2007, OSO established a new program to study various aspects of TSO and screening performance that considers recommendations originating from OI national covert tests and ASAP tests. According to DHS, after completing each study, recommendations resulting from this analysis will be provided to TSA leadership for consideration. DHS further stated that the results of ASAP tests will also likely be a focus of these future studies. While these actions should help to address the intent of this recommendation, it is also important that OSO assess whether the actions taken to mitigate the vulnerabilities identified by OI’s national covert tests are effective. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security; the Assistant Secretary of DHS for the Transportation Security Administration; the Ranking Member of the Committee on Homeland Security, House of Representatives; and other interested congressional committees as appropriate. We will also make this report available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report were John Hansen, Assistant Director; Chris Currie; Yanina Golburt; Samantha Goodman; Art James; Wendy Johnson; Thomas Lombardi; and Linda Miller. This report addresses the following questions: (1) What is the Transportation Security Administration’s (TSA) strategy for conducting covert testing of the transportation system, and to what extent has the agency designed and implemented its covert tests to achieve identified goals? and (2) What have been the results of TSA’s national aviation covert tests conducted from September 2002 to June 2007, and to what extent does TSA use the results of these tests to mitigate security vulnerabilities in the commercial aviation system? 
To identify TSA’s strategy for conducting covert testing of the transportation system and the extent to which the agency has designed and implemented its covert tests to achieve identified goals, we reviewed applicable laws, regulations, policies, and procedures to determine the requirements for conducting covert testing in the transportation sector. To assess TSA’s strategy specifically in the aviation covert testing program, we interviewed TSA Office of Inspection (OI) officials responsible for conducting national covert tests and Office of Security Operations (OSO) officials responsible for local covert tests regarding the extent to which information on risks is included in the design and implementation of tests. We also interviewed the Transportation Security Officers (TSO), supervisors, screening managers, and Federal Security Directors (FSD) who participated in covert tests at each airport where we observed tests to discuss their experience with the national and local covert testing programs. We observed OI inspectors during covert tests at seven airports, including airports with heavy passenger traffic and those with just a few flights per day, as well as airports with both federal and contract TSOs. During these observations, we accompanied OI inspectors during all phases of the covert test, including planning and observations, testing, and post-test reviews with TSOs, supervisors, and screening managers. While these seven airports represent reasonable variations in size and geographic location, our observations of OI’s covert tests and the perspectives provided by TSA officials at these airports cannot be generalized across all commercial airports. However, our observations at the seven airports provided us with an overall understanding of how OI conducts covert tests, as well as useful insights from TSOs, their supervisors, and FSDs at these airports. We analyzed TSA documents, including established protocols for national and local covert testing, procedures for screening passengers and checked baggage, and OI covert testing reports issued from 2002 to 2007, to identify procedures for designing and implementing TSA’s covert testing program. Furthermore, to determine the extent to which TSA met the goals of the program, we conducted a detailed analysis of the data collection instrument and methods that OI used to collect covert testing data for the seven airports where we observed covert tests. We also assessed the adequacy of TSA’s internal controls for collecting and maintaining the results of covert tests by evaluating TSA’s processes for collecting covert testing data and entering these data into its database. In assessing the adequacy of internal controls, we used the criteria in GAO’s Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, dated November 1999. These standards, issued pursuant to the requirements of the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), provide the overall framework for establishing and maintaining internal control in the federal government. Also pursuant to FMFIA, the Office of Management and Budget issued Circular A-123, revised December 21, 2004, to provide the specific requirements for assessing and reporting on internal controls. 
To assess TSA’s strategy for conducting covert tests in non-aviation modes of transportation, we interviewed officials from TSA’s Office of Transportation Sector Network Management (TSNM) regarding the extent to which TSA has conducted covert testing in non-aviation modes of transportation, the applicability and potential use of covert testing in other modes, and their future plans for conducting covert testing in other modes. To understand how other organizations and federal agencies have used covert testing in the non-aviation arena, we interviewed officials from selected federal agencies and organizations that conduct covert testing, including Amtrak, the United Kingdom Department for Transport’s Transport Security and Contingencies Directorate (TRANSEC), U.S. Customs and Border Protection (CBP), the DHS Domestic Nuclear Detection Office (DNDO), and select transportation industry associations. We reviewed the President’s National Strategy for Homeland Security and TSA’s Transportation Systems Sector-Specific Plan (TS-SSP), including individual plans for each mode of transportation, to determine the role and use of covert testing across the transportation system. We also reviewed the fiscal year 2008 DHS appropriations legislation, enacted as Division E of the Consolidated Appropriations Act, 2008, and associated committee reports and statements to identify any funding allocated to TSA to conduct covert testing in non-aviation modes. To determine the results of TSA’s national covert tests and the extent to which TSA used the results of these tests to mitigate security vulnerabilities in the aviation system, we obtained and analyzed a database of the results of TSA’s national covert tests conducted from September 2002 to June 2007. We analyzed the test data according to airport category, threat item, and type of test conducted between September 2002 and June 2007. We also examined trends in pass and failure rates when required screening steps were or were not followed and examined differences in covert test results between airports using contract screeners and those using federal TSOs. We assessed the reliability of TSA’s covert testing data by reviewing existing information about the data and the systems used to produce them, and by interviewing agency officials responsible for maintaining the database. We determined that the data were sufficiently reliable for our analysis and the purposes of this report. TSA provided us with a copy of its covert testing database, which contained a table with one record, or entry, per test for all of the tests conducted between 2002 and 2007. In order to accurately interpret the data, we reviewed information provided by OI officials regarding each of the fields recorded in the database and information about how OI staff enter test results into the database. We also manually tested the data, searching for missing data and outliers. To further assess the reliability of the data, we reviewed the source documents used to initially collect the data as well as OI’s published reports. We also interviewed OI officials regarding how the results of covert tests are used in developing their recommendations to TSA management. We reviewed OI reports on the results of covert tests issued between March 2003 and June 2007 that were submitted to TSA’s Administrator and OSO to identify OI’s recommendations for mitigating the vulnerabilities identified during covert tests. We obtained and analyzed a summary of the actions that OSO had taken to address OI’s recommendations for mitigating vulnerabilities made from March 2003 to June 2007. 
We also asked officials to discuss the extent to which OSO has addressed and implemented recommendations made by OI based on covert test results, and we analyzed information provided by TSA regarding the status of each covert testing recommendation made by OI from 2003 to 2007. We conducted this performance audit from October 2006 to May 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
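The reliability checks and pass-rate analyses described in this methodology can be sketched in code. The following Python/pandas outline is illustrative only; the column names and values are hypothetical and do not reflect TSA's actual database schema or results.

```python
import pandas as pd

# Hypothetical extract: one record per covert test, mirroring the
# one-record-per-test table described above. All values are invented.
tests = pd.DataFrame({
    "airport_category": ["X", "X", "I", "I", None],
    "threat_item": ["item_a", "item_b", "item_a", "item_a", "item_b"],
    "test_date": ["2003-05-01", "2004-07-15", "2005-02-10",
                  "2006-11-30", "2099-01-01"],
    "passed": [1, 0, 0, 1, 1],
})
tests["test_date"] = pd.to_datetime(tests["test_date"])

# Reliability checks: search for missing data and outliers.
print(tests.isna().sum())  # count of missing values per field
in_period = tests["test_date"].between("2002-09-01", "2007-06-30")
print(tests[~in_period])   # records dated outside the study period

# Analysis: pass rates by airport category and by threat item.
print(tests.groupby("airport_category")["passed"].mean())
print(tests.groupby("threat_item")["passed"].mean())
```

This kind of sketch mirrors the steps described above (searching for missing data and outliers, then examining pass and failure rates by airport category and threat item) without asserting anything about the actual classified results.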
The Transportation Security Administration (TSA) uses undercover, or covert, testing, which approximates techniques that terrorists may use, to identify vulnerabilities in and measure the performance of airport security systems. During these tests, undercover inspectors attempt to pass threat objects through passenger and baggage screening systems, and access secure airport areas. In response to a congressional request, GAO examined (1) TSA's strategy for conducting covert testing of the transportation system and the extent to which the agency has designed and implemented its covert tests to achieve identified goals; and (2) the results of TSA's national aviation covert tests conducted from September 2002 to June 2007, and the extent to which TSA uses the results of these tests to mitigate security vulnerabilities. To conduct this work, GAO analyzed covert testing documents and data and interviewed TSA and transportation industry officials. TSA has designed and implemented risk-based national and local covert testing programs to achieve its goals of identifying vulnerabilities in and measuring the performance of the aviation security system, and has begun to determine the extent to which covert testing will be used in non-aviation modes of transportation. TSA's Office of Inspection (OI) used information on terrorist threats to design and implement its national covert tests and to determine at which airports to conduct tests based on the likelihood of a terrorist attack. However, OI did not systematically record the causes of test failures or practices that resulted in higher pass rates for tests. Without systematically recording reasons for test failures, such as failures caused by screening equipment not working properly, as well as reasons for test passes, TSA is limited in its ability to mitigate identified vulnerabilities. OI officials stated that identifying a single cause for a test failure is difficult since failures can be caused by multiple factors. TSA recently redesigned its local covert testing program to more effectively measure the performance of passenger and baggage screening systems and identify vulnerabilities. However, it is too early to determine whether the program will meet its goals since it was only recently implemented and TSA is still analyzing the results of initial tests. While TSA has a well-established covert testing program in commercial aviation, the agency does not regularly conduct covert tests in non-aviation modes of transportation. Meanwhile, select domestic and foreign transportation organizations and DHS components use covert testing to identify security vulnerabilities in non-aviation settings. However, TSA lacks a systematic process for coordinating with these organizations. TSA covert tests conducted from September 2002 to June 2007 have identified vulnerabilities in the commercial aviation system at airports of all sizes, and the agency could more fully use the results of tests to mitigate identified vulnerabilities. While the specific results of these tests and the vulnerabilities they identified are classified, covert test failures can be caused by multiple factors, including screening equipment that does not detect a threat item, Transportation Security Officers (TSOs), formerly known as screeners, not properly following TSA procedures when screening passengers, or TSA screening procedures that do not provide sufficient detail to enable TSOs to identify the threat item. 
TSA's Administrator and senior officials are routinely briefed on covert test results and are provided with test reports that contain recommendations to address identified vulnerabilities. However, TSA lacks a systematic process to ensure that OI's recommendations are considered and that the rationale for implementing or not implementing OI's recommendations is documented. Without such a process, TSA is limited in its ability to use covert test results to strengthen aviation security. TSA officials stated that opportunities exist to improve the agency's processes in this area. In May 2008, GAO issued a classified report on TSA's covert testing program. That report contained information that was deemed either classified or sensitive. This version of the report summarizes our overall findings and recommendations while omitting classified or sensitive security information.
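As a minimal illustration of the kind of systematic tracking and documentation discussed above, the following Python sketch records, for each hypothetical OI recommendation, the decision made and the documented rationale, and flags records where either is missing or where no follow-up evaluation has occurred. The fields are assumptions for illustration; neither this report nor TSA describes an actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical tracking record; all fields are illustrative assumptions.
@dataclass
class Recommendation:
    rec_id: str
    issued: date
    description: str
    decision: Optional[str] = None    # e.g., "implemented", "not implemented"
    rationale: Optional[str] = None   # documented reason for the decision
    followup_effective: Optional[bool] = None  # result of any follow-up test

recs = [
    Recommendation("OI-2005-07", date(2005, 3, 1), "Revise screening procedure X",
                   decision="implemented", rationale="Addresses checkpoint gap"),
    Recommendation("OI-2006-12", date(2006, 9, 15), "Add TSO training module Y"),
]

# Flag recommendations lacking a documented decision or rationale,
# or whose corrective action has not yet been evaluated for effectiveness.
for r in recs:
    if r.decision is None or r.rationale is None:
        print(f"{r.rec_id}: decision or rationale not documented")
    elif r.followup_effective is None:
        print(f"{r.rec_id}: corrective action not yet evaluated")
```

Even a simple structure like this makes the two gaps GAO identified (undocumented rationale and unevaluated corrective actions) mechanically visible.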
When providers at VAMCs determine that a veteran needs outpatient specialty care, they request and manage consults using VHA’s clinical consult process. Clinical consults include requests by physicians or other providers for both clinical consultations and procedures. A clinical consultation is a request seeking an opinion, advice, or expertise regarding evaluation or management of a patient’s specific clinical concern, whereas a procedure is a request for a specialty procedure such as a colonoscopy. Clinical consults are typically requested by a veteran’s primary care provider using VHA’s electronic consult system. Once a provider sends a request, VHA requires specialty care providers to review it within 7 days and determine whether to accept the consult. If the specialty care provider accepts the consult—determines the consult is needed and is appropriate—an appointment is made for the patient to receive the consultation or procedure. In some cases, a provider may discontinue a consult for several reasons, including that the care is not needed, the patient refuses care, or the patient is deceased. In other cases the specialty care provider may determine that additional information is needed, and will send the consult back to the requesting provider, who can resubmit the consult with the needed information. Once the appointment is held, VHA’s policy requires the specialty care provider to appropriately document the results of the consult, which then closes out the consult as completed in the electronic system. VHA’s current guideline is that consults should be completed within 90 days of the request. If an appointment is not held, staff are to document why they were unable to complete the consult. In 2012, VHA created a database to capture all consults systemwide and, after reviewing these data, determined that the data were inadequate for monitoring consults. One issue identified was the lack of standard processes and uses of the electronic consult system across VHA. For example, in addition to requesting consults for clinical concerns, the system was also being used to request and manage a variety of administrative tasks, such as requesting patient travel to appointments. Additionally, VHA could not accurately determine whether patients actually received the care they needed or whether they received the care in a timely fashion. According to VHA officials, approximately 2 million consults (both clinical and administrative) were unresolved for more than 90 days. Subsequently, VA’s Under Secretary for Health convened a task force to address these and other issues regarding VHA’s consult system. In response to task force recommendations, in May 2013, VHA launched the Consult Management Business Rules Initiative to standardize aspects of the consult process, with the goal of developing consistent and reliable information on consults across all VAMCs. This initiative requires VAMCs to complete four specific tasks between July 1, 2013, and May 1, 2014:

- Review and properly assign codes to consistently record consult requests in the consult system;

- Assign distinct identifiers in the electronic consult system to differentiate between clinical and administrative consults;

- Develop and implement strategies for requesting and managing requests for consults that are not needed within 90 days—known as “future care” consults; and

- Conduct a clinical review as warranted and, as appropriate, close all unresolved consults—those open more than 90 days (the condition illustrated in the sketch after this list).
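To make the 90-day guideline concrete, here is a minimal Python sketch, using hypothetical records and field names rather than VHA's actual system logic, that computes each consult's age and flags consults still unresolved after more than 90 days; this is the condition behind the roughly 2 million unresolved consults noted above.

```python
from datetime import date

# Hypothetical consult records; field names are illustrative only.
consults = [
    {"id": "C1", "requested": date(2013, 1, 5), "completed": date(2013, 2, 1)},
    {"id": "C2", "requested": date(2013, 1, 10), "completed": None},
]

today = date(2013, 6, 1)
for c in consults:
    resolved_on = c["completed"] or today
    age_days = (resolved_on - c["requested"]).days
    if c["completed"] is None and age_days > 90:
        # C2: unresolved for 142 days, exceeding the 90-day guideline
        print(f"{c['id']}: unresolved for {age_days} days")
```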
At the time of our December 2012 review, VHA measured outpatient medical appointment wait times as the number of days elapsed from the patient’s or provider’s desired date, as recorded in the VistA scheduling system by VAMCs’ schedulers. In fiscal year 2012, VHA had a goal of completing new and established patient specialty care appointments within 14 days of the desired date. VHA established this goal based on its performance reported in previous years. To facilitate accountability for achieving its wait time goals, VHA includes wait time measures—referred to as performance measures—in its Veterans Integrated Service Network (VISN) directors’ and VAMC directors’ performance contracts, and VA includes measures in its budget submissions and performance reports to Congress and stakeholders. The performance measures, like wait time goals, have changed over time. Officials at VHA’s central office, VISNs, and VAMCs all have oversight responsibilities for the implementation of VHA’s scheduling policy. For example, each VAMC director, or designee, is responsible for ensuring that clinics’ scheduling of medical appointments complies with VHA’s scheduling policy and for ensuring that any staff who can schedule medical appointments in the VistA scheduling system have completed the required VHA scheduler training. In addition to the scheduling policy, VHA has a separate directive that establishes policy on the provision of telephone service related to clinical care, including facilitating telephone access for medical appointment management. Our ongoing work identified examples of delays in veterans receiving requested outpatient specialty care at the five VAMCs we reviewed. VAMC officials cited increased demand for services, and patient no-shows and cancelled appointments, among the factors that hinder their ability to meet VHA’s guideline for completing consults within 90 days. Specifically, several VAMC officials discussed a growing demand for both gastroenterology procedures, such as colonoscopies, and consultations for physical therapy evaluations. Additionally, officials noted that due to difficulty in hiring and retaining specialists for these two clinical areas, they have developed periodic backlogs in providing services. Officials at these facilities indicated that they try to mitigate backlogs by referring veterans for care with non-VA providers. However, this strategy does not always prevent delays in veterans receiving timely care. For example, officials from two VAMCs told us that non-VA providers are not always available. Examples of consults that were not completed within 90 days include: For 3 of 10 gastroenterology consults we reviewed for one VAMC, we found that between 140 and 210 days elapsed from the dates the consults were requested to when the patient received care. For the consult that took 210 days, an appointment was not available and the patient was placed on a waiting list before having a screening colonoscopy. For 4 of the 10 physical therapy consults we reviewed for one VAMC, we found that between 108 and 152 days elapsed, with no apparent actions taken to schedule an appointment for the veteran. The patients’ files indicated that due to resource constraints, the clinic was not accepting consults for non-service-connected physical therapy evaluations. In 1 of these cases, several months passed before the veteran was referred to non-VA care, and he was seen 252 days after the initial consult request. 
In the other 3 cases, the physical therapy clinic sent the consults back to the requesting provider, and the veterans did not receive care for those consults. For all 10 of the cardiology consults we reviewed for one VAMC, we found that staff initially scheduled patients for appointments between 33 and 90 days after the request, but medical files indicated that patients either cancelled or did not show for their initial appointments. In several instances, patients cancelled multiple times. In 4 of the cases, VAMC staff closed the consults without the patients being seen; in the other 6 cases, VAMC staff rescheduled the appointments for times that exceeded the 90-day time frame. Our ongoing work also identified variation in how the five VAMCs we reviewed have implemented key aspects of VHA’s business rules, which limits the usefulness of the data in monitoring and overseeing consults systemwide. As previously noted, VHA’s business rules were designed to standardize aspects of the consult process, thus creating consistency in VAMCs’ management of consults. However, VAMCs have reported variation in how they are implementing certain tasks required by the business rules. For example, VAMCs have developed different strategies for managing future care consults—requests for specialty care appointments that are not clinically needed for more than 90 days. At one VAMC, officials reported that specialty care providers have been instructed to discontinue consults for appointments that are not needed within 90 days and requesting providers are to track these consults outside of the electronic consult system and resubmit them closer to the date the appointment is needed. These consults would not appear in VHA’s systemwide data once they have been discontinued. At another VAMC, officials stated that appointments for specialty care consults are scheduled regardless of whether the appointments are needed beyond 90 days. These future care consults would appear in VHA consult data and would eventually appear on a timeliness report as consults open greater than 90 days. Officials from this VAMC stated that they continually have to explain to VISN officials who monitor the VAMC’s consult timeliness that these open consults do not necessarily mean that care has been delayed. Officials from another VAMC reported piloting a strategy in its gastroenterology clinic where future care consults are entered in an electronic system separate from the consult and appointment scheduling systems. Approximately 30 to 60 days before the care is needed, the requesting provider is notified to enter the consult request in the electronic consult system for the specialty care provider to complete. In addition, oversight of the implementation of VHA’s business rules has been limited and has not included independent verification of VAMC actions. VAMCs were required to self-certify completion of each of the four tasks outlined in the business rules. VISNs were not required to independently verify that VAMCs appropriately completed the tasks. Without independent verification, VHA cannot be assured that VAMCs implemented the tasks correctly. Furthermore, VHA did not require that VAMCs document how they addressed unresolved consults that were open greater than 90 days, and none of the five VAMCs in our review were able to provide us with specific documentation in this regard. 
VHA officials estimated that as of April 2014, about 450,000 of the approximately 2 million consults (both clinical and administrative consults) remained unresolved systemwide. VAMC officials noted several reasons that consults were either completed or discontinued in this process of addressing unresolved consults, including improper recording of consult notes, patient cancellations, and patient deaths. At one of the VAMCs we reviewed, a specialty care clinic discontinued 18 consults the same day that a task for addressing unresolved consults was due. Three of these 18 consults were part of our random sample, and our review found no indication that a clinical review was conducted prior to the consults being discontinued. Ultimately, the lack of independent verification and documentation of how VAMCs addressed these unresolved consults may have resulted in VHA consult data that inaccurately reflected whether patients received the care needed or received it in a timely manner. Although VHA’s business rules were intended to create consistency in VAMCs’ consult data, our preliminary observations identified variation in managing key aspects of consult management that are not addressed by the business rules. For example, there are no detailed systemwide VHA policies on how to handle patient no-shows and cancelled appointments, particularly when patients repeatedly miss appointments, which may make VAMCs’ consult data difficult to assess. For example, if a patient cancels multiple specialty care appointments, the associated consult would remain open and could inappropriately suggest delays in care. To manage this type of situation, one VAMC developed a local consult policy referred to as the “1-1-30” rule. The rule states that a patient must receive at least 1 letter and 1 phone call, and be granted 30 days to contact the VAMC to schedule a specialty care appointment. If the patient fails to do so within this time frame, the specialty care provider may discontinue the consult. According to VAMC officials, several of the consults we reviewed would have been discontinued before reaching the 90-day threshold if the 1-1-30 rule had been in place at the time. Three VAMCs included in our review also noted some type of policy addressing patient no-shows and cancelled appointments, each of which varied in its requirements. Without a standard policy across VHA addressing patient no-shows and cancelled appointments, VHA consult data may reflect numerous variations of how VAMCs handle patient no-shows and cancelled appointments. In December 2012, we reported that VHA’s reported outpatient medical appointment wait times were unreliable and that inconsistent implementation of VHA’s scheduling policy may have resulted in increased wait times or delays in scheduling timely outpatient medical appointments. Specifically, we found that VHA’s reported wait times were unreliable because of problems with recording the appointment desired date in the scheduling system. Since, at the time of our review, VHA measured medical appointment wait times as the number of days elapsed from the desired date, the reliability of reported wait time performance was dependent on the consistency with which VAMC schedulers recorded the desired date in the VistA scheduling system. However, VHA’s scheduling policy and training documents were unclear and did not ensure consistent use of the desired date. Some schedulers at VAMCs that we visited did not record the desired date correctly. 
For example, the desired date was recorded based on appointment availability, which would have resulted in a reported wait time that was shorter than the patient actually experienced. At each of the four VAMCs we visited, we also found inconsistent implementation of VHA’s scheduling policy, which impeded scheduling of timely medical appointments. For example, we found the electronic wait list was not always used to track new patients that needed medical appointments as required by VHA scheduling policy, putting these patients at risk for delays in care. Furthermore, VAMCs’ oversight of compliance with VHA’s scheduling policy, such as ensuring the completion of required scheduler training, was inconsistent across facilities. VAMCs also described other problems with scheduling timely medical appointments, including VHA’s outdated and inefficient scheduling system, gaps in scheduler and provider staffing, and issues with telephone access. For example, officials at all VAMCs we visited reported that high call volumes and a lack of staff dedicated to answering the telephones affected their ability to schedule timely medical appointments. VA concurred with the four recommendations included in our December 2012 report and reported continuing actions to address them. First, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to improve the reliability of its outpatient medical appointment wait time measures. In response, VHA officials stated that they implemented more reliable measures of patient wait times for primary and specialty care. In fiscal years 2013 and 2014, primary and specialty care appointments for new patients have been measured using time stamps from the VistA scheduling system to report the time elapsed between the date the appointment was created—instead of the desired date—and the date the appointment was completed. VHA officials stated that they made the change from using desired date to creation date based on a study that showed a significant association between new patient wait times using the date the appointment was created and self-reported patient satisfaction with the timeliness of VHA appointments. VA, in its FY 2013 Performance and Accountability Report, reported that VHA completed 40 percent of new patient specialty care appointments within 14 days of the date the appointment was created in fiscal year 2013; in contrast, VHA completed 90 percent of new patient specialty care appointments within 14 days of the desired date in fiscal year 2012. VHA also modified its measurement of wait times for established patients, keeping the appointment desired date as the starting point, and using the date of the pending scheduled appointment, instead of the date of the completed appointment, as the end date for both primary and specialty care. VHA officials stated that they decided to use the pending appointment date instead of the completed appointment date because the pending appointment date does not include the time accrued by patient no-shows and cancelled appointments. Second, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure VAMCs consistently implement VHA’s scheduling policy and ensure that all staff complete required training. 
In response, VHA officials stated that the department is in the process of revising the VHA scheduling policy to include changes, such as the new methodology for measuring wait times, and improvements and standardization of the use of the electronic wait list. In the interim, VHA distributed guidance, via memo, to VAMCs in March 2013 describing these changes and also offered webinars to VHA staff on eight dates in April and May 2013. To assist VISNs and VAMCs in the task of verifying that all staff have completed required scheduler training, VHA has developed a database that will allow a VAMC to identify all staff who have scheduled appointments and the volume of appointments scheduled by each; VAMC staff can then compare this information to the list of staff who have completed the required training. However, VHA officials have not established a target date for when this database would be made available for use by VAMCs. Third, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to require VAMCs to routinely assess scheduling needs for purposes of allocation of staffing resources. VHA officials stated that they are continuing to work on identifying the best methodology to carry out this recommendation, but stated that the database that tracks the volume of appointments scheduled by individual staff also may prove to be a viable tool to assess staffing needs and the allocation of resources. VHA officials stated that they needed to discuss further how VAMCs could use this tool, and that they had not established a targeted completion date for actions to address this recommendation. Finally, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure that VAMCs provide oversight of telephone access, and implement best practices to improve telephone access for clinical care. In response, VHA required each VISN director to require VAMCs to assess their current telephone service against the VHA telephone improvement guide and to electronically post an improvement plan with quarterly updates. VAMCs are required to routinely update progress on the improvement plan. VHA officials cited improvement in telephone response and call abandonment rates since VAMCs were required to implement improvement plans. Additionally, VHA officials said that the department has also contracted with an outside vendor to assess VHA’s telephone infrastructure and business process. VHA expects to receive the first report in approximately 2 months. Although VA has initiated actions to address our recommendations, we believe that continued work is needed to ensure these actions are fully implemented in a timely fashion. Furthermore, it is important that VA assess the extent to which these actions are achieving improvements in medical appointment wait times and scheduling oversight as intended. Ultimately, VHA’s ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care to veterans, who may have medical conditions that worsen if access is delayed. Chairman Miller, Ranking Member Michaud, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Key contributors to this statement were Bonnie Anderson, Assistant Director; Janina Austin, Assistant Director; Rebecca Abela; Jennie Apter; Jacquelyn Hamilton; David Lichtenfeld; Brienne Tierney; and Ann Tynan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Access to timely medical appointments is critical to ensuring that veterans obtain needed medical care. Over the past few years there have been numerous reports of VAMCs failing to provide timely care to patients, including specialty care, and in some cases, these delays have resulted in harm to patients. In December 2012, GAO reported that improvements were needed in the reliability of VHA's reported medical appointment wait times, as well as oversight of the appointment scheduling process. Also in 2012, VHA found that systemwide consult data could not be adequately used to determine the extent to which veterans experienced delays in receiving outpatient specialty care. In May 2013, VHA launched the Consult Management Business Rules Initiative with the aim of standardizing aspects of the consults process. This testimony highlights (1) preliminary observations from GAO's ongoing work related to VHA's management of outpatient specialty care consults, and (2) concerns GAO raised in its December 2012 report regarding VHA's outpatient medical appointment scheduling, and progress made implementing GAO's recommendations. To conduct this work, GAO reviewed documents and interviewed officials from VHA's central office. Additionally, GAO interviewed officials from five VAMCs for the consults work and four VAMCs for the scheduling work that varied based on size, complexity, and location. GAO shared the information it used to prepare this statement with VA and incorporated its comments as appropriate. GAO's ongoing work examining VHA's management of outpatient specialty care consults identified examples of delays in veterans receiving outpatient specialty care, as well as limitations in the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) implementation of new consult business rules designed to standardize aspects of the clinical consult process. For example, for 4 of the 10 physical therapy consults GAO reviewed for one VAMC, between 108 and 152 days elapsed with no apparent actions taken to schedule an appointment for the veteran. For 1 of these consults, several months passed before the veteran was referred for care to a non-VA health care facility. VA medical center (VAMC) officials cited increased demand for services, and patient no-shows and cancelled appointments among the factors that lead to delays and hinder their ability to meet VHA's guideline of completing consults within 90 days of being requested. GAO's ongoing work also identified variation in how the five VAMCs reviewed have implemented key aspects of VHA's business rules, such as strategies for managing future care consults—requests for specialty care appointments that are not clinically needed for more than 90 days. Such variation may limit the usefulness of VHA's data in monitoring and overseeing consults systemwide. Furthermore, oversight of the implementation of the business rules has been limited and did not include independent verification of VAMC actions. Because this work is ongoing, we are not making recommendations on VHA's consult process at this time. In December 2012, GAO reported that VHA's outpatient medical appointment wait times were unreliable. The reliability of reported wait time performance measures was dependent in part on the consistency with which schedulers recorded desired date—defined as the date on which the patient or health care provider wants the patient to be seen—in the scheduling system. 
However, VHA's scheduling policy and training documents were unclear and did not ensure consistent use of the desired date. GAO also reported that inconsistent implementation of VHA's scheduling policy may have resulted in increased wait times or delays in scheduling timely medical appointments. For example, GAO identified clinics that did not use the electronic wait list to track new patients in need of medical appointments, as required by VHA policy, putting these patients at risk of not receiving timely care. VA concurred with the four recommendations included in the report and, in April 2014, reported continued actions to address them. For example, in response to GAO's recommendation that VA take actions to improve the reliability of its medical appointment wait time measures, officials stated that the department has implemented new patient wait time measures that no longer rely on the desired date recorded by a scheduler. VHA officials stated that the department is also continuing to address GAO's three other recommendations. Although VA has initiated actions to address GAO's recommendations, continued work is needed to ensure these actions are fully implemented in a timely fashion. Ultimately, VHA's ability to ensure and accurately monitor access to timely medical appointments is critical to providing quality health care to veterans, whose medical conditions may worsen if access is delayed.
Before I discuss these issues in detail, let me sketch the background of the EAS and SCASDP programs. Mr. Chairman, as you know, Congress established EAS as part of the Airline Deregulation Act of 1978 to preserve air service at communities that might otherwise lose it after deregulation. The act guaranteed that, for 10 years, communities served by air carriers before deregulation would continue to receive a certain level of scheduled air service, and it authorized DOT to require carriers to continue serving these communities. If an air carrier could not continue that service without incurring a loss, DOT could use EAS funds to award that carrier a subsidy. In 1987, Congress extended the program for another 10 years, and in 1998 it eliminated the sunset provision, thereby permanently authorizing EAS. To be eligible for this subsidized service, communities must meet three general requirements: they (1) must have received scheduled commercial passenger service as of October 1978, (2) may be no closer than 70 highway miles to a medium- or large-hub airport, and (3) must require a subsidy of less than $200 per passenger (unless the community is more than 210 highway miles from the nearest medium- or large-hub airport, in which case no average per-passenger dollar limit applies). Air carriers apply to DOT for EAS subsidies. DOT selects a carrier and sets a subsidy amount to cover the difference between the carrier's projected cost of operation and its expected passenger revenues, while providing the carrier with a profit element equal to 5 percent of total operating expenses, as required by statute (see the illustrative sketch below).

Funding for EAS has come from a combination of permanent and annual appropriations. The Federal Aviation Reauthorization Act of 1996 (P.L. 104-264) permanently appropriated the first $50 million of such funding—for EAS and safety projects at rural airports—from the collection of overflight fees. Congress has appropriated additional funds from the general fund on an annual basis. The Department of Transportation's reauthorization proposal would change the source of program funding to a mandatory appropriation of $50 million per year from the Airport and Airway Trust Fund, generated by a new, small aviation fuel tax. Furthermore, according to DOT officials, because $50 million would not be sufficient to support all currently subsidized service, communities would be ranked in order of isolation, with Alaskan communities at the top of the list. Thus, some of the communities currently receiving EAS subsidies under the roughly $100 million Congress has appropriated in recent years might no longer receive air service.

Turning now to SCASDP, Congress authorized it as a pilot program in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) to help small communities enhance their air service. AIR-21 authorized the program for fiscal years 2002 and 2003, and subsequent legislation reauthorized the program through fiscal year 2008 and eliminated its "pilot" status. The Office of Aviation Analysis in DOT's Office of the Secretary administers the program. The law establishing SCASDP allows DOT considerable flexibility in implementing the program and selecting projects to be funded. The law defines basic eligibility criteria and statutory priority factors, but meeting a given number of priority factors does not automatically mean DOT will select a project.
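To make the statutory subsidy arithmetic concrete, the following is a minimal sketch in Python (not DOT's actual rate-setting model) of the calculation described above: the subsidy covers the gap between a carrier's projected operating cost and its expected passenger revenue, plus a profit element of 5 percent of total operating expenses. The dollar figures are hypothetical, chosen only for illustration.

```python
def eas_subsidy(projected_cost: float, expected_revenue: float) -> float:
    """Annual EAS subsidy per the statutory formula described above:
    (projected operating cost + 5% profit element) - expected passenger revenue.
    """
    profit_element = 0.05 * projected_cost  # 5 percent of total operating expenses
    return projected_cost + profit_element - expected_revenue

# Hypothetical carrier: $1.5 million projected cost, $600,000 expected revenue.
# Subsidy = 1,500,000 + 75,000 - 600,000 = $975,000
print(f"${eas_subsidy(1_500_000, 600_000):,.0f}")  # $975,000
```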
SCASDP grants may be made to single communities or a consortium of communities, although no more than four grants each year may be in the same state. Both small-hub and nonhub airports are eligible for the program. Thus, small hubs, such as Buffalo Niagara International Airport in Buffalo, New York, which enplaned over 2.4 million passengers in 2005, and small nonhub airports, such as the airport in Moab, Utah (with about 2,600 enplanements), are both eligible. SCASDP grants are available in the 50 states, the District of Columbia, Puerto Rico, and U.S. territories and possessions. DOT's SCASDP awards have been geographically dispersed. Figure 1 shows the location of all SCASDP grants awarded as of August 31, 2006, as well as communities receiving EAS subsidies as of April 1, 2007.

Mr. Chairman, as you know, EAS provides service to many communities that otherwise would not receive air service. However, the increase in the number of communities receiving subsidies and the cost of these subsidies raise concerns about the funding needed to provide this service in an environment of federal deficits. For example, funding for EAS has grown from $25.9 million in 1997 to $109.4 million in 2007. Furthermore, the median federal subsidy for providing air service to EAS communities is about $98 per passenger; in 2006, subsidies varied among communities from about $13 to over $677 per passenger. Finally, the number of air carriers flying smaller aircraft suitable for EAS communities may decrease, and some industry officials are beginning to voice concerns about the availability of appropriate planes to provide small community air service in the future. In fiscal year 2007, EAS provided subsidies to 145 communities. In fiscal year 2005, the most recent year for which passenger data are available, the EAS program supported over 1 million passengers. As we have noted in past reports, if EAS subsidies were removed, air service might end at many small communities. Because air carriers must submit financial data to support a subsidy calculation—demonstrating that the service is not profitable to operate—it is likely that commercial air service would end if the subsidy were no longer available.

Several factors may help explain why some small communities, especially nonhubs, face relatively limited air service. First, small communities can become cost-cutting targets of air carriers because they are often a carrier's least profitable operation. Consequently, many network carriers have cut service to small communities, leaving regional carriers to take over much of that service. Second, the "Commuter Rule" that FAA enacted in 1995 brought small commuter aircraft under the same safety standards as larger aircraft—a change that made it more difficult to economically operate smaller aircraft, such as 19-seat turboprops. For example, the Commuter Rule required commuter air carriers flying aircraft with 10 or more seats to improve ground deicing programs and carry additional passenger safety equipment. Additionally, the Aviation and Transportation Security Act of 2001 instituted the same security requirements for screening passengers at smaller airports as at larger airports, sometimes making travel from small airports less convenient than it had been. Third, regional carriers have shifted from turboprops to regional jets, which has hurt small communities that have not generated the passenger levels needed to support regional jet service.
Finally, many small communities experience passenger "leakage"—that is, passengers choosing to drive longer distances to larger airports instead of using closer small airports. Low-cost carriers have generally avoided flying to small communities but have offered low fares that encourage passengers to drive longer distances to take advantage of them.

Mr. Chairman, although fewer than the 405 communities served with the help of EAS subsidies in 1980, the number of communities served by EAS has grown over the past 10 years, as has the amount of funds appropriated for the program. As shown in table 1, for fiscal year 2007, EAS is providing subsidies to air carriers to serve 145 communities, an increase of 50 communities over the 1997 low point. Funding for EAS has also grown from $25.9 million in 1997 to $109.4 million in 2007. Excluding Alaska, this amounts to an average of about $754,500 per EAS community in fiscal year 2007. Appendix I lists EAS communities and their current subsidy amounts. While the total number of communities receiving service through EAS subsidies has generally increased, some communities have dropped from the program. For example, according to DOT officials, 11 communities that had EAS-subsidized service in 2006 were no longer in the program in 2007. Four of these were terminated by DOT because their subsidies rose above the EAS cap: Bluefield, WV; Enid, OK; Moses Lake, WA; and Ponca City, OK. Seven communities secured nonsubsidized service: Hana, HI; Kalaupapa, HI; Kamuela, HI; Pierre, SD; Riverton, WY; Rock Springs, WY; and Sheridan, WY.

The level of subsidy per passenger at EAS communities varies greatly. At some locations, the subsidy per passenger is modest. For example, in 2006, of the 110 airports receiving EAS service for which data were available, 30 communities had subsidies of less than $50 per passenger. Communities with relatively low subsidies per passenger included Escanaba, MI ($12.96) and Morgantown, WV ($13.68), both with nearly 36 passengers per day. In contrast, 30 communities had subsidies per passenger greater than $200. The highest subsidy at that time was $677 per passenger for Brookings, SD; Lewistown, MT, had an average subsidy of almost $473. These two communities had fewer than 3 passengers per day. Airports may retain EAS service when subsidies exceed $200 per passenger if they are more than 210 highway miles from a medium- or large-hub airport. As would be expected, low passenger numbers are associated with high subsidies. Of the 110 airports receiving EAS service for which data were available, 17 airports had fewer than 5 passengers per day. Such airports typically have a subsidy per passenger greater than $200—15 of the 17 exceeded the $200 threshold. Communities with fewer than 5 passengers per day also constitute half of those with subsidies exceeding $200 (15 of 30). In contrast, 47 communities had at least 20 passengers per day, more than the capacity of a single 19-seat aircraft flight. All 47 of these airports had subsidies of less than $100 per passenger. (See appendix II for EAS subsidies per enplanement.)

DOT and industry officials we interviewed raised questions about the future of EAS service as currently provided. As of April 1, 2007, 12 regional air carriers served the subsidized communities in the continental United States.
The carriers serving the communities in the continental United States typically used turboprop aircraft seating 19 passengers, whereas in Alaska and Puerto Rico the most commonly used aircraft seated 4 to 9 passengers. DOT and industry officials pointed out that 19-seat aircraft are no longer being manufactured, and some of the current EAS carriers appear to be migrating to larger aircraft. DOT officials noted that EAS carriers are getting out of the business of operating 19-seat aircraft and are moving into larger aircraft. In addition, industry consultants noted that as the current fleet of 19-seat aircraft ages, maintenance costs will likely rise, making 19-seat aircraft more expensive to operate. Because 19-seat aircraft are the backbone of EAS service in the contiguous 48 states, their aging or discontinuation would significantly affect the program. Figure 2 shows an example of a 19-seat turboprop aircraft commonly used to provide EAS service. Finally, DOT and industry officials with whom we spoke were not convinced that the emerging technology of very light jets (VLJs) could fill this gap, especially in the short term. They noted that current business models discussed for VLJs did not anticipate their use for the kind of small communities served by EAS. DOT did provide a SCASDP grant to Bismarck, ND, for developing a business model for point-to-point, reservation-responsive air service using VLJs. The grantee has developed the business plan; however, given the lack of operating VLJs, it changed the type of aircraft the business would use until VLJs become more available. We will be completing a more comprehensive report on VLJs for the subcommittee later this year.

Mr. Chairman, we found that SCASDP grantees pursued several goals and strategies to improve air service and that air service was sustained after the grant expired in a little less than half of the 23 projects completed as of 2005, the time of our initial review. The DOT IG's office began reviewing completed grants in March 2007, which should provide more information on the results of completed grants. Although the program has seen some success, the number of applications for SCASDP grants has declined for a variety of reasons. At the time of our initial review of SCASDP, in 2005, it was too soon to determine the overall effectiveness of the program because little information was available about the post-grant period. Once awarded, grants may take several years to be implemented and completed. There have been 182 grant awards in the 5 years of the program. Of these, 74 grants had been completed as of April 1, 2007—34 from 2002, 19 from 2003, and 21 from 2004. No grants from 2005 or 2006 had yet been completed. In addition, as of April 4, 2007, DOT had terminated seven grants it initially awarded. (See appendix III for a list of all SCASDP grants from 2002 through 2006.)

Our review of the 23 projects completed by September 30, 2005, found some successful results. The kinds of service improvements that resulted from the grants included adding an air carrier, destination, or flights, or changing the type of aircraft serving the community. In terms of numbers, airport officials reported that 19 of the 23 grants resulted in service or fare improvements during the life of the grant (see fig. 3). In addition, during the course of the grant, enplanements rose at 19 of the 23 airports. After the 23 SCASDP grants were completed, 14 had resulted in improvements that were still in place.
Three of these improvements were not self-sustaining; thus, 11 self-sustaining improvements were in place after the grants were completed. Since our review of the 23 completed projects, 51 more have been completed, for a total of 74. We reviewed the 59 available final reports for these projects, which indicated that 48 of the grantees increased enplanements as a result of their SCASDP grant. For SCASDP grants DOT awarded from 2002 through 2004, we surveyed airport officials to identify the goals they had for their grants. We found that grantees had identified a variety of project goals to improve air service to their community. These goals included adding flights, airlines, and destinations; lowering fares; upgrading the aircraft serving the community; obtaining better data for planning and marketing air service; increasing enplanements; and curbing the loss of passengers to other airports. (See fig. 4 for the number and types of project goals identified by airport directors.) Finally, in our 2005 report, we recommended that DOT evaluate the SCASDP grants after more were completed to identify promising approaches and evaluate the effectiveness of the program. DOT officials told us that they asked the DOT IG to conduct such a study, which the IG began in March 2007. DOT expects to have preliminary observations available by the middle of May. Results from this work may help identify potential improvements and "lessons learned."

To achieve their goals, grantees have used many strategies, including subsidies and revenue guarantees to the airlines, marketing, hiring personnel and consultants, and establishing travel banks in which a community guarantees to buy a certain number of tickets. (See fig. 5.) Other strategies grantees have used include subsidizing the start-up of an airline, taking over ground station operations for an airline, and subsidizing a bus to transport passengers from their airport to a hub airport. Incorporating marketing into the project was the most common strategy airports used. Some airline officials said that marketing efforts are important to the success of the projects. Airline officials also told us that projects that provide direct benefits to an airline, such as revenue guarantees and financial subsidies, have the greatest chance of success. According to these officials, such projects allow the airline to test the real market for air service in a community without enduring the typical financial losses that occur when new air service is introduced. They further noted that, in the current aviation economic environment, carriers cannot afford to sustain losses while they build up passenger demand in a market. The outcomes of the grants may also be affected by broader industry factors that are independent of the grant itself, such as an airline's decision to reduce the number of flights at a hub.

Since the inception of the program, there has been a steady decline in the number of applications. In 2002, the first year SCASDP was funded, DOT received 179 grant applications; by 2006, the number had declined to 75. Grant applications for 2007 are not due until April 27, 2007. According to a DOT official, almost all applications arrive on the last day, so the number of 2007 applications cannot be estimated at this time.
DOT officials said that the past decline was, in part, a consequence of several factors: (1) many eligible airport communities had received a grant and were still implementing projects at the time; (2) the airport community as a whole was coming to understand the importance DOT places on fulfilling the local contribution commitment in the grant proposal; and (3) statutory changes in 2003 prohibited communities or consortiums from receiving more than one grant for the same project and established the timely use of funds as a priority factor in awarding grants. According to DOT officials, DOT interprets a project to be the "same project" if it employs the same strategy. For example, once a community has used a revenue guarantee, it cannot use a revenue guarantee in another project. A DOT official noted that, with many communities now completing their grants, they may choose to apply for another grant. Some communities have received second grants; however, DOT officials indicated that first-time applicants receive more weight in the grant selection process. Revisiting the selection criteria could broaden access to SCASDP grants and increase service to small communities.

Mr. Chairman, let me now turn to a discussion of options for reforming EAS and for evaluating SCASDP. I raise these options, in part, because they link to our report on the challenges facing the federal government in the 21st century, which notes that the federal government's long-term fiscal imbalance presents enormous challenges to the nation's ability to respond to emerging forces reshaping American society, the United States' place in the world, and the future role of the federal government. In that report, we call for a more fundamental and periodic reexamination of the base of government, ultimately covering discretionary and mandatory programs as well as the revenue side of the budget. In other words, Congress will need to make difficult decisions, including defining the role of the federal government in various sectors of our economy and identifying who will benefit from its allocation of resources. Furthermore, given that we have reported that subsidies paid directly to air carriers have not provided an effective transportation solution for passengers in many small communities, Congress may wish to weigh options for reforming EAS and to assess SCASDP's effectiveness once DOT completes its review of the program. In previous work, we have identified options for enhancing EAS and controlling cost increases. These options include targeting subsidized service to more remote communities than is currently the case, better matching capacity with community use, consolidating service to multiple communities into regional airports, and changing the form of federal assistance from carrier subsidies to local grants; all of these options would require legislative changes. Several of these options formed the basis for reforms passed as part of Vision-100; for various reasons these pilot programs have not progressed, so it is not possible to assess their impact. Let me now briefly discuss each option, stressing at the outset that each presents potential negative as well as positive impacts. The changes might benefit the federal government through lower federal costs and benefit participating communities through increased passenger traffic at subsidized communities and enhanced community choice of transportation options.
Communities that could be negatively affected might include those in which passengers receive less service or that might lose scheduled airline service altogether.

One option would be to target subsidized service to more remote communities. This option would mean increasing the highway distance criterion between EAS-eligible communities and the nearest qualifying airport and expanding the definition of qualifying nearby airports to include small hubs. Currently, to be eligible for EAS-subsidized service, a community must be more than 70 highway miles from the nearest medium- or large-hub airport. In examining EAS communities, we found that if the distance criterion were increased to 125 highway miles and the qualifying airports were expanded to include small-hub airports with jet service, 55 EAS-subsidized communities would no longer qualify for subsidies—and travelers at those communities would need to drive to the nearby larger airport to access air service. Limiting subsidized service to more remote communities could reduce federal subsidy costs. For example, we found that about $24 million annually could be saved if service were terminated at 30 EAS airports within 125 miles of medium- or large-hub airports. This estimate assumed that the total subsidies in effect in 2006 at the communities that might lose their eligibility would not be obligated to other communities and that those amounts would not change over time. On the other hand, passengers who now use subsidized service at such terminated airports would be inconvenienced by the increased driving required to access air service at the nearest hub airport. In addition, implementing this option could harm the economies of the affected communities. The administration's reauthorization proposal would also prioritize isolated communities, but in a somewhat different way. Under its approach, if funding were insufficient for all communities, the communities would be ranked by driving distance to a medium or large hub, with the more isolated communities receiving funding before less isolated communities. This change would protect isolated communities but could result in subsidies being terminated for communities with relatively low per-passenger subsidies.

Another option is to better match capacity with community use. Our past analysis of passenger enplanement data indicated that relatively few passengers fly in many EAS markets and that, on average, most EAS flights operate with aircraft that are largely empty. In 2005, the most recent year for which data are available, 17 EAS airports averaged fewer than 5 passenger boardings per day. To better match capacity with community use, air carriers could reduce unused capacity—either by using smaller aircraft or by reducing the number of flights. Better matching capacity with community use could save federal subsidy dollars. For instance, reducing the number of required daily subsidized departures could reduce carrier costs, and thus subsidies, in some locations. Federal subsidies could also be lowered at communities where carriers used smaller—and hence less costly—aircraft. On the other hand, there are a number of potential disadvantages. For example, passenger acceptance is uncertain. Representatives from some communities, such as Beckley, West Virginia, told us that passengers who are already somewhat reluctant to fly on 19-seat turboprops would be even less willing to fly on smaller aircraft.
Such negative passenger reactions may cause more people to drive to larger airports, or simply to drive to their destinations. Additionally, the loss of some daily departures at certain communities would likely further inconvenience some passengers. Lastly, reduced capacity may have a negative impact on the economy of the affected community.

Another option is to consolidate subsidized service at multiple communities into service at regional airports. For example, in 2002 we found that 21 EAS-subsidized communities were located within 70 highway miles of at least one other subsidized community. We reported that if subsidized service to each of these communities were regionalized, 10 regional airports could serve those 21 communities. Regionalizing service to some communities could generate federal savings. However, those savings may be marginal, because the total cost to serve a single regional airport may be only slightly less than the cost to serve the neighboring airports: the marginal cost of operating the flight segments to the other airports may be small in relation to the cost of operating the first flight. Another potential positive effect is that passenger levels at the proposed regional airports could grow because the airlines would be drawing from a larger geographic area, which could prompt them to provide better service (i.e., larger aircraft or more frequent departures). There are also a number of disadvantages to implementing this option. First, some local passengers would be inconvenienced, since they would likely have to drive longer distances to obtain local air service. Moreover, the passenger response to regionalizing local air service is unknown. Passengers faced with driving longer distances may decide that driving to an altogether different airport is worthwhile if it offers better service and air fares. As with the other options, the potential impact of regionalization on the economies of the affected communities is unknown. Regionalizing air service has sometimes proven controversial at the local level, in part because it would require some communities to give up their own local service for potentially improved service at a less convenient regional facility. Even where one airport is larger and better equipped than others (e.g., where one airport has longer runways, a superior terminal facility, and better safety equipment on site), it is likely to be difficult for the other communities to accept surrendering their local control and benefits. Some industry officials to whom we spoke indicated that regional airports made sense but that selecting the airports would be highly controversial.

Another option is to change carrier subsidies into local grants. We have noted that local grants could enable communities to match their transportation needs with individually tailored transportation options to connect them to the national airspace system. As previously discussed, DOT provides grants through SCASDP to help small communities enhance their air service. Our work on SCASDP identified some positive aspects of the program that could benefit EAS communities. First, to receive a SCASDP grant, communities had to develop a proposal directed at improving air service locally. Officials from some of these communities noted that this approach required them to take a closer look at their air service and better understand the market they serve—a benefit they did not foresee.
In addition, in some cases developing the proposal caused the airport to build a stronger relationship with the community. SCASDP also allows flexibility in the strategy a local community can choose to improve air service, recognizing that local facts and circumstances affect the chance of a successful outcome. In contrast, EAS has one approach—a subsidy to an air carrier. However, there are also differences between the two programs that make the grant approach problematic for some EAS communities, and these differences should be considered. First, because SCASDP grants are provided on a one-time basis, their purpose is to create self-sustaining air service improvements. The grant approach is therefore best suited to places where a viable air service market can be developed. This viability could be difficult for EAS communities to achieve because the service they currently receive is not profitable without a subsidy. While some EAS communities might be able to transition to self-sustaining air service through one of the grants, others would not; such communities would need a new grant each year. In addition, the grant approach normally includes a local cash match, which may be difficult for some EAS communities to provide. This approach could systematically eliminate the poorest communities unless other sources of funds, such as state support or local industry support, could be found for the match, or some provision for economically distressed communities were made.

Congress authorized several pilot programs and initiatives in Vision-100 designed to improve air service to small communities. These programs and initiatives have not progressed for various reasons. In two cases, communities have not indicated interest in the programs. In one instance, Congress decided to prevent DOT from implementing the program. In three cases, DOT officials cited a lack of sufficient funds to implement the programs. Vision-100 authorized the Community Flexibility Pilot Program, which requires the Secretary of Transportation to establish a program for up to 10 communities that agree to forgo their EAS subsidy for 10 years in exchange for a grant twice the amount of one year's EAS subsidy. The funds may be used to improve airport facilities. DOT has solicited proposals for this program; however, according to a DOT official, no communities expressed interest in participating, likely because no community was willing to risk the loss of EAS subsidies for 10 years in exchange for only 2 years of funding. Likewise, the Alternate Essential Air Service Pilot Program, which allows the Secretary of Transportation to provide assistance directly to a community rather than paying compensation to an air carrier, elicited no interest from communities. Under the pilot program, communities could use the assistance to support air carriers using smaller aircraft or on-demand air taxi service, to provide transportation from several EAS communities to a single regional airport or other transportation center, or to purchase aircraft. The administration's draft FAA reauthorization bill would repeal these pilot programs.

Another program, the EAS Local Participation Program, allows the Secretary of Transportation to select no more than 10 designated EAS communities within 100 miles, by road, of a small hub (and within the contiguous states) to assume 10 percent of their EAS subsidy costs for a 4-year period.
However, Congress has prohibited DOT from obligating or expending any funds to implement this program since Vision-100 was enacted. The administration's draft FAA reauthorization bill would repeal this pilot program. Three additional initiatives authorized by Vision-100 have not been implemented, in part because of a lack of dedicated funding. Section 402 of Vision-100 allows DOT to adjust carrier compensation to account for significantly increased costs to carriers. For example, an air carrier that has a contract to provide air service can apply for an adjustment due to an increase in its costs. If the increase is granted, the air carrier has increased its revenue without having to competitively bid for the contract. The initiative also provided for a reversal of this adjustment if the costs subsequently declined. DOT officials indicated that their concern with this initiative is that an air carrier could win a 2-year contract with a low estimate and then reopen it to obtain more funds without facing competition. Also, the Section 410 marketing incentive program, which could provide grants of up to $50,000 to EAS communities to develop and execute a marketing plan to increase passenger boardings and usage of airport facilities, was not implemented. DOT officials explained that, given the uncertainty about the number of communities that would need EAS subsidies and the cost of those subsidies, using EAS subsidy funding for this marketing incentive program could put the subsidies at risk. One industry group suggested that dedicated funding might improve the use of this program. The administration's draft FAA reauthorization bill would repeal this marketing incentive program. Finally, Section 411 of Vision-100 authorized the creation of a National Commission on Small Community Air Service to recommend how to improve commercial air service to small communities and the ability of small communities to retain and enhance existing air service. This provision was likewise not implemented because funds were not specifically appropriated, according to DOT officials. Such a commission might have been helpful in developing approaches to difficult policy decisions, such as regionalizing air service. DOT plans to host a symposium to bring industry experts together to identify regulatory barriers and develop ideas for improving air service to small communities, which may be a step in the right direction. DOT officials acknowledge that this symposium should be held soon to inform reauthorization deliberations.

In 2005, we recommended that DOT examine the effectiveness of SCASDP when more projects were complete, and the DOT IG recently began this evaluation. Since our report, an additional 51 grants have been completed, and DOT will be able to examine the results from these completed grants. Such an evaluation should provide DOT and Congress with additional information about not only whether additional or improved air service was obtained but also whether it continued after the grant support ended. In addition, our prior work on air service to small communities found that once financial incentives are removed, additional air service may be difficult to maintain. This evaluation should provide a clearer and more complete picture of the value of this program. Any improved service achieved from this program could then be weighed against the cost to achieve those gains. In conducting this evaluation, DOT could find that certain strategies the communities used were more effective than others.
For example, during our work, we found some opposing views on the usefulness of certain strategies for attracting improved service. DOT officials could use the results of the DOT IG's evaluation to identify strategies that have been effective in generating self-sustaining improvements in air service; they could share this information with other small community airports and, perhaps, consider such factors in the grant award process. In addition, DOT might identify best practices and develop lessons learned from which all small community airports could benefit. For example, one airport used a unique approach of assuming airline ground operations, such as baggage handling and staffing ticket counters. This approach helped maintain the service of one airline and attract additional service. In addition, the SCASDP program has shown that there is strong demand on the part of small community airports to improve enplanements through various marketing strategies. Successful marketing efforts could increase enplanements, thus driving down the per-passenger subsidy. Sharing information on approaches like these that worked (and approaches that did not) may help other small communities improve their air service, perhaps even without federal assistance.

In conclusion, Mr. Chairman, Congress faces many difficult choices as it tries to help improve air service to small communities, especially given the fiscal challenges the nation faces. Regarding EAS, I think it is important to recognize that for many of these communities, air service is not—and might never be—commercially viable, and there are limited alternative transportation means for nearby residents to connect to the national air transportation system. In these cases, continued subsidies will be needed to maintain that capability. In some other cases, current EAS communities are within reasonable driving distance of alternative airports that can provide that connection to the air system. Congress's weighing of priorities will ultimately decide whether this service continues or whether other, less costly options are pursued. In looking at SCASDP, I would emphasize that we have seen some instances in which the grant funds produced additional service and some in which they did not. Enough experience has now been gained with this program for a full assessment, and with that information the Congress will be in a position to determine whether the air service gains are worth the overall cost of the program. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time.

For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony and related work include Robert Ciszewski, Catherine Colwell, Jessica Evans, Colin Fallon, Dave Hooper, Alex Lawrence, Bonnie Pignatiello Leer, and Maureen Luna-Long.

Airport Finance: Preliminary Analysis Indicates Proposed Changes in the Airport Improvement Program May Not Resolve Funding Needs for Smaller Airports. GAO-07-617T. Washington, D.C.: March 28, 2007.

Commercial Aviation: Programs and Options for the Federal Approach to Providing and Improving Air Service to Small Communities. GAO-06-398T. Washington, D.C.: September 14, 2006.

Airline Deregulation: Reregulating the Airline Industry Would Reverse Consumer Benefits and Not Save Airline Pensions.
GAO-06-630. Washington, D.C.: June 9, 2006.

Commercial Aviation: Initial Small Community Air Service Development Projects Have Achieved Mixed Results. GAO-06-21. Washington, D.C.: November 30, 2005.

Commercial Aviation: Survey of Small Community Air Service Grantees and Applicants. GAO-06-101SP. Washington, D.C.: November 30, 2005.

Commercial Aviation: Bankruptcy and Pension Problems Are Symptoms of Underlying Structural Issues. GAO-05-945. Washington, D.C.: September 30, 2005.

Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004.

Commercial Aviation: Issues Regarding Federal Assistance for Enhancing Air Service to Small Communities. GAO-03-540T. Washington, D.C.: March 11, 2003.

Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003.

Commercial Aviation: Factors Affecting Efforts to Improve Air Service at Small Community Airports. GAO-03-330. Washington, D.C.: January 17, 2003.

Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.

Options to Enhance the Long-term Viability of the Essential Air Service Program. GAO-02-997R. Washington, D.C.: August 30, 2002.

Commercial Aviation: Air Service Trends at Small Communities Since October 2000. GAO-02-432. Washington, D.C.: March 29, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress established two key programs to help support air service to small communities: the Essential Air Service (EAS) program, which provides about $100 million in subsidies per year, and the Small Community Air Service Development Program (SCASDP), which provides about $20 million per year in grants. As part of its reauthorization of the Federal Aviation Administration (FAA), the Congress is examining the status and outcomes of these programs. This testimony discusses (1) the history and challenges of the EAS program, (2) the implementation and outcomes of SCASDP, and (3) options for reforming EAS and SCASDP. The testimony is based on previous GAO reports, interviews with Department of Transportation officials and industry representatives, and program updates.

EAS subsidies support air service to many small communities that would likely not have service if the subsidies were discontinued. Funding for EAS has increased from $25.9 million in 1997 to $109.4 million in 2007, and the number of subsidized communities has generally increased. The federal government is spending a median of about $98 per passenger, with subsidies ranging from about $13 to $677 per passenger. Concerns exist about the costs of the program, particularly given the federal government's long-term structural fiscal imbalance. In addition, according to industry representatives, the number of air carriers flying aircraft suitable for EAS communities may decrease, raising concerns about the availability of appropriate aircraft to provide small community air service in the future.

SCASDP grantees have used their grants to pursue a variety of goals and have used a variety of strategies, including marketing and revenue guarantees, to improve air service. Our analysis of the 23 grants completed by October 1, 2005, found that air service was sustained after the grant expired in a little less than half of the projects. Finally, although the program has seen some success, the number of applications for SCASDP grants has declined--from 179 in 2002 to 75 in 2006. As we have reported, options for reforming EAS, such as consolidating service into regional airports, might make the program more efficient but could also reduce service to some communities. Further, Congress may be able to draw on "lessons learned" from marketing and other successful SCASDP strategies to make the current programs more effective.
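As a complement to the figures above, the per-passenger subsidy values cited throughout this statement are simple quotients of annual subsidy over annual enplanements, tested against the statutory $200 cap and its 210-mile exception. The sketch below, in Python, encodes that logic; the community names and numbers are hypothetical, loosely echoing the reported range of about $13 to $677 per passenger.

```python
def subsidy_per_passenger(annual_subsidy: float, annual_enplanements: int) -> float:
    """Per-passenger subsidy = annual subsidy / annual enplanements."""
    return annual_subsidy / annual_enplanements

def exceeds_cap(per_passenger: float, highway_miles_to_hub: float) -> bool:
    """True if a community fails the $200-per-passenger cap. The cap does not
    apply beyond 210 highway miles from the nearest medium- or large-hub airport.
    """
    if highway_miles_to_hub > 210:
        return False  # no average per-passenger dollar limit applies
    return per_passenger > 200

# Hypothetical communities: (annual subsidy, annual enplanements, miles to hub)
communities = {
    "Community A": (170_000, 13_000, 95),   # about $13 per passenger
    "Community B": (740_000, 1_100, 150),   # about $673 per passenger, within 210 miles
    "Community C": (740_000, 1_100, 300),   # same figures, but beyond 210 miles
}
for name, (subsidy, pax, miles) in communities.items():
    spp = subsidy_per_passenger(subsidy, pax)
    print(f"{name}: ${spp:,.0f} per passenger; exceeds cap: {exceeds_cap(spp, miles)}")
```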
Changing national security needs and DOD's recognition that its base structure was larger than required led to a decision to close numerous military bases around the country. Consequently, the Congress enacted base realignment and closure (BRAC) legislation that instituted base closure rounds in 1988, 1991, 1993, and 1995. The authority under this legislation has expired. Property disposals resulting from base closures and realignments are governed by various base closure and realignment laws and other laws relating to the disposal of surplus government property, homeless assistance, and environmental concerns. Once property is no longer required by a federal agency to discharge its responsibilities, the property is declared excess to that agency and then offered to other federal agencies to satisfy their requirements. If no other agency has a requirement for the property, it is declared surplus to the federal government. At that point, the Federal Property and Administrative Services Act of 1949 authorizes disposal through a variety of means, such as public or negotiated sale and transfers to states and local governments for public benefit purposes such as education, public health, recreation, airports, wildlife conservation, and historic monuments. In addition, the base closure legislation authorizes surplus real property from closing bases to be transferred to local redevelopment authorities under economic development conveyances for economic development and job creation purposes. Use of this authority, however, requires a showing that economic development and job creation cannot be accomplished under established sales or public benefit transfers. As shown in figure 1, local reuse authorities generally seek surplus property first under one of the public benefit transfer authorities, because these can be no-cost acquisitions; then through economic development conveyances, because these can be no-cost or no-initial-cost acquisitions; and lastly through negotiated sale, because they can negotiate the terms and do not have to compete with other interested parties. Any surplus property that remains is available for sale to the general public. (See app. II for a more detailed discussion of the laws and regulations affecting the base closure process.)

At the beginning of the base closure process, DOD expected that land sales would help pay for the costs of closing bases. However, given national policy changes and recent legislation that emphasize assisting communities that are losing bases, DOD no longer expects significant revenue from land sales. The information contained in this report focuses on the September 1995 status of property disposal plans at 23 of the 30 major installations recommended for closure by the 1993 closure commission, unless more recent data were provided. (See fig. 2.) The 23 bases were selected because DOD considered them the major closures and had assigned on-site base transition coordinators to them as of January 1994. Although we previously reported on the status of major base closings in the 1988 and 1991 rounds, this report provides information on those rounds to give an overall perspective on implementation of the closure recommendations. Opportunities for private parties to purchase surplus real property at closing bases, while not precluded, are limited by the disposal process. DOD, federal, state, and local interests are considered before surplus property is made available for public sale to private parties.
Accordingly, DOD looks to a community's reuse plan and gives preference to its wishes when making disposal decisions. Land sales for all BRAC closures totaled $179.2 million as of March 1996. Two property sales have been completed from the 1993 round: one for $1.1 million for 111 family housing units at Niagara Falls Naval Facility, New York, and the other for $428,000 for 2.2 acres of land at Homestead Air Force Base, Florida. A community's reuse plan recommends how surplus base property should be developed, and the military services generally base their disposal decisions on these plans. Developing reuse plans and developing and implementing service disposal plans can be a lengthy process. In some cases, this means that readily marketable properties may (1) deteriorate as they sit idle, (2) decline in value as negotiations drag on should a sale ever occur, and (3) drain resources from the services as activities such as protection and maintenance continue. As we reported earlier, only 4 percent of the surplus property was planned for public sale in the 1988 and 1991 closure rounds. In 1993, the amount of property planned for market sale dropped to about 1 percent, and less than half of that property is planned for public sale. The low percentage of land sold to the public is a result of the disposal process, which allows communities to plan the reuse of most base property. Communities are requesting surplus property predominantly through no-cost public benefit transfers or economic development conveyances. The economic development conveyance was established by law in response to President Clinton's Five Point Program to revitalize base closure communities, announced in July 1993. Section 2903 of title XXIX of the National Defense Authorization Act for Fiscal Year 1994 established the basis for the economic development conveyance. This new mechanism is a special tool created to enable communities to act as master developers by obtaining property under more flexible finance and payment terms than previously existed. For example, a community can request property at less than fair market value if it can show that the discount is needed for economic development and job creation. Regulations promulgated by DOD to implement title XXIX also give local communities the authority to recommend the land use of surplus property, taking into consideration feasible reuse alternatives and notices of interest from homeless assistance providers. The services consider these reuse plans when making their disposal decisions. However, they are not obligated to follow the reuse plans, nor is a community granted the authority to make disposal decisions. The disposal of property by public benefit transfer or economic development conveyance rather than by sale reduces the immediate economic return to the government. For example, the golf course at Myrtle Beach Air Force Base is to be conveyed through a public benefit transfer to the city of Myrtle Beach. By doing so, the government relinquished the opportunity to sell the property for $3.5 million to a private developer who intended to continue to use it as a public golf course. Surplus property may deteriorate and lose value as it sits idle. DOD can avoid such results by disposing of surplus property as promptly as possible. However, before any sale can occur, DOD must consult with the state governor and the heads of local governments to consider any plan for the use of the property by any concerned local government.
The disposal process can be time consuming, and the services have let property sit idle for several years while the services and communities developed land use plans or negotiated a purchase. During this time, properties have deteriorated and their value has declined. That decline represents lost revenue should a sale ever occur. Myrtle Beach Air Force Base offers an example: housing there deteriorated for more than 2 years while the Air Force and the local reuse authority negotiated a sale. During negotiations, two appraisals were conducted, and the property's value decreased significantly from the first appraisal to the second. According to an Air Force official, a major cause of the decrease in the property's appraised value was its deterioration. Family housing at the base is shown in figures 3 and 4. In addition, the director of the local reuse authority cited the need for significant upgrades to make the houses habitable. Deterioration also occurred at the 1,271 family housing units at Mather Air Force Base. The housing has been vacant for over 2 years while the Air Force and the local reuse authority negotiate the terms of the sale. During this time, a number of units were damaged by inclement weather, vandalism, and theft. In December 1995, a major storm felled 40 trees in the housing area, damaging roofs and flooring. Since May 1995, 76 air-conditioning units have been stolen from the housing area. As an Air Force official noted, the property's deterioration was one reason its appraised value declined from the first appraisal to the last. The various forms of deterioration of family housing at the base are shown in figures 5, 6, 7, and 8.

When surplus property is sold through a negotiated sale, as opposed to a public sale, the federal government may not get the highest monetary return possible for the property. When communities cannot obtain property through either a public benefit transfer or an economic development conveyance, they often seek it through a negotiated sale, maintaining that the property will be used to fulfill a public use such as affordable housing. Under federal regulation, negotiated sales of surplus property to state and local governments for a public benefit use are to be based on estimated fair market value. Even so, the federal government may lose revenue if the property is resold at a price above what the state or local government paid for it. To avoid this loss, the regulation requires that the conveyance documents resulting from negotiated sales to public agencies contain an excess profits clause. This clause entitles the federal government to receive all subsequent sales proceeds that exceed the purchase price and other specified costs allowed to the state or local government if the property is sold within a specified time period. According to the Director of the Air Force Base Conversion Agency, the specified period is based on the time it will take the housing to be absorbed into the market. In the case of the Mather housing, the local reuse authority estimates that it will take about 10 years for the property to be absorbed, yet it is pursuing a 3-year excess profits clause in the sales contract. In January 1995, both the General Services Administration and the Air Force concurred that a 3-year excess profits clause would not protect the federal government's interest. This issue remains unresolved.
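To illustrate how an excess profits clause of the kind just described might operate, here is a minimal sketch in Python under stated assumptions; the function, clause length, and dollar amounts are hypothetical, not actual contract terms. If the public agency resells the property within the clause period, the government recaptures resale proceeds above the purchase price plus other specified allowed costs.

```python
def excess_profits_recapture(resale_price: float, purchase_price: float,
                             allowed_costs: float, years_since_sale: int,
                             clause_years: int) -> float:
    """Government recapture under an excess profits clause: all resale proceeds
    exceeding the purchase price plus allowed costs, if the resale occurs within
    the clause period. All figures are hypothetical.
    """
    if years_since_sale > clause_years:
        return 0.0  # clause has expired; no recapture
    return max(0.0, resale_price - (purchase_price + allowed_costs))

# Hypothetical: property bought for $5M with $1M in allowed costs, resold for
# $8M in year 2 of a 3-year clause -> $2M recaptured by the government.
print(excess_profits_recapture(8_000_000, 5_000_000, 1_000_000, 2, 3))  # 2000000.0
# The same resale in year 5 would escape a 3-year clause entirely:
print(excess_profits_recapture(8_000_000, 5_000_000, 1_000_000, 5, 3))  # 0.0
```

The dispute described above turns on exactly this window: a 3-year clause would let profits from a resale in, say, year 5 of a 10-year absorption period go unrecaptured.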
The government also may lose revenue when the estimated fair market value of surplus property declines during protracted negotiations. At Myrtle Beach, the Air Force has been negotiating the sale of 777 units of family housing for over 24 months, although a private party offered to purchase them even before they were declared surplus. The reuse authority has offered substantially less than the $11.1 million that the private party once offered. In both cases, the property's use would remain the same—housing. Similarly, at Mather Air Force Base, negotiations between the Air Force and the local reuse authority for 1,271 family housing units have been ongoing since 1993. If the Air Force accepts the local reuse authority's offer, it will be accepting significantly less revenue for the property than at least one private party was willing to pay.

Protection and maintenance costs continue to accrue as property waits to be conveyed or sold; the longer the services hold the property, the longer they incur these costs. While the services are not required to maintain property at initial levels indefinitely, there is an incentive to protect and maintain it because the property forms the basis of a community's reuse plan. The services must provide for the protection and maintenance of surplus property until its disposal. DOD's implementation manual states that the initial maintenance level at a base will normally be sustained for a maximum of 1 year after operational closure or 180 days after a formal disposal decision is made. These limits can be extended if a local reuse authority is actively implementing a reuse plan and the initial or adjusted levels are justified. According to an Air Force official, the only two instances (levels five and six) in which the services do not incur costs to maintain and protect property are (1) when property is leased and the tenant provides for the protection and maintenance and (2) when property is abandoned. Neither the Army nor the Navy compiled information on the average cost to protect and maintain their closed bases; an Air Force official stated that the Air Force's average annual cost was about $2.7 million per base.

As discussed previously, property deteriorates and loses value when it sits vacant for extended periods. DOD could preserve the value of facilities and reduce protection and maintenance costs by (1) renting vacated property to the limited degree necessary to preserve it and (2) setting time limits on negotiations over the terms of sale. The renting approach was used successfully at Fort Ord, California. Through the initiative of local base officials, government civilian families were allowed to rent a limited number of the nearly 1,200 family housing units in order to keep a presence in 3 housing tracts. Fort Ord officials, using the Corps of Engineers' estimates of fair market rental value, entered into rental agreements with the families. The families were assigned only to ground-floor units of every other building so that anyone in an upstairs unit would be noticed and reported to security. According to former installation officials, the rent more than offset the protection and maintenance costs for the entire 1,200 units, and theft, vandalism, fire, and other forms of deterioration were limited to one minor theft and a few instances of graffiti, which housing officials quickly removed. Many people voluntarily maintained the lawns of adjacent empty buildings, an unexpected benefit.
The program was considered a success, and it is being continued by the university that acquired ownership. The services could also preserve property values and reduce protection and maintenance costs by limiting the amount of time for negotiating the terms of either an economic development conveyance or a negotiated sale to a state or local jurisdiction. When disposing of surplus real property at closed military bases, the services are required to follow the laws and regulations that establish the terms under which the sale of surplus property is conducted. While the regulations provide direction on how and when sales can occur, they do not establish how long negotiations may continue. Communities may prolong the negotiation period in the hope of obtaining more favorable terms, but they end up with property in much poorer condition. Negotiations unconstrained by time limits work to neither party's advantage. Property deterioration during the course of negotiations causes a loss of value to the government and, if negotiations are successful, leaves the local government with property that is less expensive but probably in poorer condition than when negotiations started. If the time for negotiations were limited to a set period, such as 9 months (the amount of time an appraisal is valid), then property values could be more easily preserved, protection and maintenance costs would be limited, and only one appraisal would be required for the negotiations.

Current plans call for the federal government to retain about 16 percent of the land at the 23 closing military bases to satisfy agency requirements or to comply with decisions made by the BRAC Commission or by legislation. This is a decrease from the 58 percent retained in the 1988 and 1991 rounds. About 84 percent of the property is to be declared surplus to the federal government's needs and made available for conversion to community reuse—double the percentage made available in the previous two rounds. The bulk of this land (68 percent) is expected to be conveyed to communities under either no-cost public benefit conveyance authorities or the economic development conveyance authority. Communities' plans for these properties involve a variety of public benefit and economic development uses; some communities expect base reuse to result in more civilian jobs than previously existed at the bases. As discussed earlier and shown in figure 9, only about 1 percent is planned for market sale. Communities have still not determined the reuse of 15 percent of the land.

Of the 16 percent of the property to be retained by the government, 10 percent will be retained by DOD to support Reserve, National Guard, and other active duty missions. Frequently cited uses include Defense Finance and Accounting Service centers and military housing, often to support other neighboring military operations that are remaining open. About two-thirds of the land is being retained in accordance with BRAC recommendations. For example, at the Glenview and Barbers Point Naval Air Stations, the 1993 Commission recommended that 1,202 acres of housing be retained to support other nearby bases. DOD will transfer about 4 percent of the land to the Department of the Interior's Fish and Wildlife Service to be used as wildlife refuges and wetlands. DOD will also transfer about 1 percent of the land to other federal agencies for such uses as a national park, a Job Corps center, a correctional facility, and a finance center. (See app. III for a summary of federal uses.)
A primary reason that more land was retained for federal uses during the first two closure rounds than in BRAC 1993 was that a larger proportion of the land was contaminated with unexploded ordnance. About half of the land retained by the federal government during the earlier closures will be used as wildlife refuges by the Fish and Wildlife Service or the Bureau of Land Management, in part to avoid the cost of cleaning up land contaminated with unexploded ordnance. This problem was largely absent at the BRAC 1993 bases. However, even subtracting this land from the total available for disposal, the percentage of uncontaminated land being retained by the federal government fell substantially, from 29 percent to only 16 percent during the BRAC 1993 round.

Communities plan to use several different means of conveyance for the 84 percent of base property available for community reuse during BRAC 1993. Although the method of conveyance and disposition for about 15 percent of base property remains undetermined, communities are planning to request 32.5 percent under various public benefit conveyances. As with the previous two rounds, the largest public benefit use is for commercial airport conversions, which will total about 20.1 percent under current plans. About 7.2 percent is planned for park and recreation use, the second largest public benefit use. Plans call for transferring another 5.2 percent of the property to such public benefit uses as homeless assistance, education, and a state prison. Communities are also planning to request 35.7 percent of base property under economic development conveyances, compared with only 12 percent of property during the first two rounds. Final implementing rules for such applications, published in July 1995, allow communities to acquire surplus federal property at little or no initial cost, provided that development of the property results in increased jobs. Thus, communities can take a long-range approach to planning land use. During our review, communities were working on or initiating the studies and business plans necessary to apply for economic development conveyances for any base property remaining after federal and local public benefit screening. Initial indications are, however, that a number of communities will be applying for transfers at low or no initial cost. Finally, DOD plans to sell about 1 percent of the property (less than half of it to private parties). This compares with 4 percent during the previous two rounds. Table 1 provides a summary of the disposal plans for each of the 23 bases we reviewed.

Although BRAC 1993 bases are not as far along in the conversion process as the bases we reviewed from the previous two rounds, progress is being made in converting properties to civilian uses. On closing bases, communities are planning industrial and office complexes, parks and other recreational facilities, residential housing, and correctional facilities. According to DOD's Office of Economic Adjustment, the 1988, 1991, and 1993 closure rounds resulted in the loss of 88,433 civilian jobs. On the other hand, the conversion of base property has resulted in the creation of 18,335 new jobs, for about a 21-percent recovery rate. (See app. IV for a summary of jobs created.) At some bases, the number of new jobs resulting from redevelopment is eventually expected to exceed preclosure levels. The following are some examples of reuse efforts.
At Glenview Naval Air Station, Illinois, the community's plan includes residences, offices and warehouses, light industry, a commuter rail station, open space, and the preservation of the existing golf course. The plan is projected to create over 5,600 jobs, about 14 times the number of civilian jobs at the former base.

At the Charleston Naval Complex, South Carolina, the community's plan includes continued private shipyard activities and other maritime industrial and cargo-related uses, as well as waterfront parks. Two maritime industry firms have already begun operations at the former base. Including public sector jobs on federally retained land (at the Postal Service and Defense Finance and Accounting Service), a local reuse authority official estimated that about 4,900 jobs would be created over the next 5 years. The reuse plan projects that redevelopment would create 9,100 to 11,600 jobs over the next 20 years, significantly more than the complex's former civilian employment.

The community at Cecil Field Naval Air Station, Florida, which at over 20,000 acres is the largest base closed by BRAC 1993, is planning an industrial and manufacturing center, recreation facilities, open space, a new state correctional facility, and agricultural areas, including 1,000 acres of forest and wetlands that will be used for tree farming. Once the base reuse plan is fully implemented, civilian employment is expected to exceed 5,000, more than 10 times the level at the former base.

The reuse plan at the Naval Training Center Orlando, Florida, provides for more than 3,200 residential units and more than 5 million square feet of new and renovated office and retail space for the Center's four properties. Twelve major tenants, some federal, have already been identified, accounting for about 1,700 new jobs, compared with 750 civilians employed at the former base. Employment is projected to reach about 15,000 within 10 years.

The maximum time bases have to close is 6 years, although many close earlier. During the time that bases are closing, individual facilities sometimes become available for lease or license to the private sector. Such interim leases and licenses can result in increased job opportunities and generate needed revenue, which is then generally used for the care and maintenance of base facilities. Productive use of valuable assets can therefore take place while reuse planning continues for a more permanent disposition of property. Several communities have been successful in leasing or licensing base property, as the following examples show:

At the Mare Island Naval Shipyard, California, two licenses and two interim leases have been signed for base property. The licenses are for the use of base facilities by a motion picture company and a local railroad company. The leases are for the use of a structural shop by an industrial firm and for the base golf course. To date, about 148,000 square feet of buildings and 100 acres have been licensed or leased, creating about 250 jobs. The local reuse authority assumed responsibility for protecting and maintaining the leased property, thereby saving the Navy these costs.

Two interim leases have been signed at the Dayton Defense Electronics Supply Center in Ohio. One lease is with a local manufacturing company, and the other is with a county board involved with health issues. When both leases are fully operational, about 120 jobs are expected.
To prepare for operations, one of the lessees has invested $800,000 to renovate and upgrade 72,000 square feet of office space. Lease revenues are expected to be used to protect and maintain these properties.

An interim lease was signed in November 1995 at the Alameda Naval Air Station, California, by a consortium of 120 California businesses specializing in developing new transportation technologies. A matching federal grant of $2.9 million will be used to help start up operations in a vacant 65,000-square-foot hangar. The new electric car chassis manufacturing facility is expected to generate an initial 50 jobs, with the potential for several hundred more.

Treasure Island Naval Station, California, licensed properties to two movie production companies for 6 months each. A large hangar on the island was used to build sound stages and movie sets. Rental proceeds are being used to protect and maintain the properties. Recent concerns over seismic safety have halted licensing activity for the time being.

A military base often represents a major employment center and provides significant economic stimulus to a local economy; thus, a base closure can cause economic distress. To support dislocated workers and help communities plan and implement their redevelopment objectives, the federal government is providing assistance through numerous programs. Under major programs, federal agencies have provided about $560 million to communities at the 60 BRAC bases we reviewed that were selected for closure in 1988, 1991, and 1993. In total, federal economic assistance related to fiscal years 1988 through 1995 reached about $780 million for the three rounds. Grants have been awarded to communities for activities such as reuse planning and job training, as well as infrastructure improvements and community economic development. (See app. V for a summary of the federal assistance provided to each community.) Among the major sources of assistance are DOD's Office of Economic Adjustment, the Department of Commerce's Economic Development Administration, the Department of Labor, and the Federal Aviation Administration. Additionally, other federal, state, and local resources are available to assist with the retraining of workers and the redevelopment of the closed bases. The Federal Aviation Administration has awarded the most assistance, providing $182 million for airport planning and the development of construction projects and public airports. The Economic Development Administration has awarded $154 million to stimulate commercial and industrial growth and to protect and generate jobs in the affected areas. The Office of Economic Adjustment has awarded $120 million to help communities plan the reuse of closed military bases, and the Department of Labor has awarded $103 million to help communities retrain workers adversely affected by closures.

We recommend that the Secretary of Defense establish reasonable time frames for concluding negotiated sales of surplus real property and, when practical, rent unoccupied surplus housing and other facilities as a means of preserving property pending final disposition.

In commenting on a draft of this report, DOD stated that it partially concurred with the report, partially concurred with the first recommendation, and nonconcurred with the second recommendation. DOD said that the report addressed widely differing bases and local circumstances and attempted to draw generic conclusions and solutions from the sample.
DOD stated that closing bases vary greatly in terms of total land area, building and utility system condition, and the amount of environmental cleanup necessary to allow interim civilian use and ultimate disposal of property. It said that the property rarely lies in a single political jurisdiction and that base reuse planning is therefore an extraordinary intergovernmental consensus-building challenge. With regard to our recommendation that reasonable time frames be established for concluding negotiated sales of surplus property, DOD partially concurred, stating that placing arbitrary limitations on the time frame for negotiating sales and economic development conveyances was probably not practical but that it would look at establishing time frames where circumstances permit. Further, DOD said that a negotiated sale or economic development conveyance is made for a public purpose, principally economic redevelopment and new job creation, thereby allowing local redevelopment authorities better control over the selection and timing of job-creating activities, rather than leaving them to the exigencies of the marketplace. DOD did not agree with our recommendation to rent unoccupied housing, stating that while the Fort Ord situation worked well, the recommendation had only limited utility. DOD believed that the recommendation implied that there is a ready market for military facilities, which is not normally the case. Moreover, it said that placing large quantities of space up for lease could easily undercut local businesses and flood local markets, particularly in less urban locations. DOD also said that the recommendation ignored the essential ingredient of economic development conveyance disposals—the ability to use some of the military assets for immediate revenue streams to offset up-front redevelopment costs.

We agree with DOD that every base is unique and should be treated as such. However, lessons can be drawn from the overall base closure experience and tailored for use in unique situations. Our recommendations were made in that context. We believe establishing time frames for negotiated sales is a useful management tool to move negotiations along and measure progress, while at the same time leaving flexibility should it be needed. For example, if creating jobs and disposing of property quickly and efficiently are primary goals, then placing reasonable time frames on negotiations can help move the process along and is appropriate. We recognize that renting unoccupied housing will not work at all bases and have modified the recommendation to apply where practical, as in the case of Fort Ord. In addition, the intent of the recommendation is for the government to rent the property until decisions are made on how to dispose of it. Therefore, if the local reuse authority obtains the property, there is already a revenue stream in place, as was the case at Fort Ord. DOD's comments are presented in their entirety in appendix I.

We collected information on 23 of the 30 major installations, containing about 54,000 acres, closed by the 1993 BRAC Commission. These bases were selected because they were considered major closures by the BRAC Commission and were assigned a base transition coordinator by DOD. Where more than one closure activity was located on the same installation, we combined the activities and reported on the installation as a whole.
To determine whether private enterprises are being excluded from buying surplus property, we reviewed the statutes for disposing of property and documents detailing private parties' interest in the properties. We also interviewed DOD officials, base transition coordinators, community representatives, and private developers. To determine the amount and type of federal assistance provided to the BRAC 1988, 1991, and 1993 base closure communities, we obtained federal assistance information from the Federal Aviation Administration, the Economic Development Administration, the Department of Labor, and the Office of Economic Adjustment. To determine the current plans for reusing property at closing military installations, including any progress or problems in achieving those plans, we reviewed community reuse plans, when available, and interviewed base transition coordinators, community representatives, and DOD officials. When community reuse plans were not available, we identified the most likely reuses. When it was not possible to identify the most likely reuse of property, we categorized the property as undetermined. Our review was performed between July 1995 and March 1996 in accordance with generally accepted government auditing standards.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force and to the Administrator of General Services. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI.

The following are GAO's comments on the Department of Defense's (DOD) letter dated June 14, 1996.

1. We made a factual presentation of what was occurring and did not speculate on community motivations. As we noted in our report, communities are planning to request about 33 percent of the properties under various public benefit conveyance authorities and another 36 percent under the economic development conveyance authority, neither of which expands the property tax base. Our review of the communities' plans shows that less than 1 percent of the properties would be added to the property tax base.

2. We believe property can be effectively used to create jobs and reduce the military services' protection and maintenance costs even before community plans are finished or military missions have ceased. The Department of Defense Base Reuse Implementation Manual describes leasing for reuse as one of the most important tools for initiating rapid economic recovery and job creation while reducing the military's protection and maintenance costs. The manual also states that leasing for reuse can be done if it does not interfere with the military mission. The Fort Ord housing discussed in the report is one example. Examples of successful reuse prior to closure are discussed in our report. These include the leasing of facilities at Alameda Naval Air Station, Treasure Island Naval Station, Mare Island Naval Shipyard, and the Dayton Defense Electronics Supply Center and the renting of family housing at Fort Ord.

3. We believe our recommendation is practical for bases in urban areas. Fort Ord had more housing units than any other BRAC closure, and it was located in a small urban community.
Yet the Army was successful in renting enough of the housing to pay for the protection and maintenance costs for all of the vacated housing. However, in rural areas this approach may not be practical, and we revised our final report to reflect this point.

The principal legal authorities governing base closure and reuse are the (1) 1988 Defense Authorization Amendments and Base Closure and Realignment Act and the Defense Base Closure and Realignment Act of 1990; (2) Title XXIX, National Defense Authorization Act for Fiscal Year 1994; (3) Federal Property and Administrative Services Act of 1949; (4) National Environmental Policy Act of 1969; (5) Comprehensive Environmental Response, Compensation, and Liability Act of 1980; (6) 1987 Stewart B. McKinney Homeless Assistance Act; and (7) Base Closure Community Redevelopment and Homeless Assistance Act of 1994. Since the initial round of closures was announced, the disposal process has undergone a number of changes to enhance the possibility that reuse and economic development will result from the closed bases.

The 1988 Defense Authorization Amendments and Base Closure and Realignment Act and the Defense Base Closure and Realignment Act of 1990, collectively referred to as the base realignment and closure acts or BRAC acts, provide the Secretary of Defense with authority to close military bases and dispose of excess property. In July 1993, the President announced a five-part program to speed economic recovery at communities where military bases are slated to close. Title XXIX of the National Defense Authorization Act for Fiscal Year 1994 amended the BRAC acts to enable local redevelopment authorities to receive government property at no initial cost if the property is used for economic development and job creation. In July 1995, DOD issued a final rule affecting the disposal process. The rule implements the act by establishing the process for conveying property at estimated fair market value or less to facilitate property transfers and foster economic recovery in the affected community (referred to as economic development conveyances).

The Federal Property and Administrative Services Act of 1949 establishes the process for disposing of property deemed excess to an agency's needs or surplus to the government's requirements. In the case of base closures, property considered excess to the needs of one military service may be requested by the other military services and federal agencies to satisfy program requirements. If no government requirements exist, the property is declared surplus to the government and is available for conveyance at no cost through various public benefit discount programs, negotiated sale at fair market value to state governments or their instrumentalities, public sale at fair market value, or conveyance to communities at fair market value or less for economic development and job creation.

The National Environmental Policy Act of 1969 requires that the federal government assess the potential environmental impacts of its proposed action to dispose of surplus federal property before making a final disposal decision. Under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, DOD is responsible for environmental restoration on bases recommended for closure or realignment. The level of cleanup required by the act depends on the future use of the site. In fact, surplus property cannot be deeded until it has been determined that the property is environmentally suitable for its intended purposes.
However, section 2908 of title XXIX of the National Defense Authorization Act for Fiscal Year 1994 makes it possible for the services to transfer a parcel of land in exchange for cleanup at a closing base. The Stewart B. McKinney Homeless Assistance Act provides homeless service providers with access to surplus property. Initially, homeless providers were given priority over local communities for requests of excess property. However, the Base Closure Community Redevelopment and Homeless Assistance Act of 1994 amended the BRAC acts and the McKinney Act, in essence eliminating the priority that homeless providers had. As a result of the amendments, homeless providers' needs are now considered in concert with the community's reuse planning process.

To support both the communities and the services in their efforts to expedite the disposal and reuse of closing military bases, DOD issued two reference manuals. In May 1995, DOD released the Community Guide to Base Reuse as a resource for communities. The guide describes the base closure and reuse processes; catalogs the many assistance programs available to communities, which are administered by DOD and others; and summarizes lessons learned from other communities that have been affected by base closures and realignments. In July 1995, DOD issued the Base Reuse Implementation Manual to provide common guidance for the service implementors of the Base Closure Assistance Act of 1993 and the Base Closure Community Redevelopment and Homeless Assistance Act of 1994.

[Table from appendix III, flattened in extraction: it lists the land retained for federal uses at individual installations, including 1,109 acres of housing at Barbers Point Naval Air Station and 93 acres of housing at Glenview Naval Air Station to support nearby bases (BRAC recommendations); a 2,564-acre landing field at Cecil Field Naval Air Station to support a nearby base (BRAC recommendation); 1,084 acres for a wildlife refuge at El Toro Marine Corps Air Station; and smaller parcels retained under legislative requirements at several installations, such as Staten Island Naval Station and the Charleston Naval Station and Naval Shipyard.]
[Table from appendix IV, flattened in extraction: it summarized, by base, the jobs created through reuse and the percentage of lost civilian jobs recovered.]
Pursuant to a congressional request, GAO provided information on the Department of Defense's (DOD) base realignment and closure (BRAC) process, focusing on: (1) the status and extent of land sales at closing bases; (2) whether private parties are excluded from purchasing surplus property; and (3) the amount of federal assistance provided to communities to promote economic conversion of closing bases. GAO found that: (1) as of March 1996, land sales for the first three BRAC rounds totaled $179.2 million; (2) private parties rarely bid on the purchase of base properties because communities often request these properties under public benefit transfers, economic development conveyances, and noncompetitive negotiated sale authorities; (3) the federal government plans to retain approximately 16 percent of the land from the 23 bases reviewed; (4) although most of the land from these bases will be requested by local reuse authorities, reuse of 15 percent of the land remains undetermined; (5) communities plan to use the land for industrial and office complexes, parks and recreational facilities, residential housing, and correctional facilities; (6) although some bases have been able to generate jobs and revenue by leasing base properties during the conversion process, development and implementation of reuse and disposal plans can be a lengthy process; (7) readily marketable properties require resources for their protection and upkeep; (8) during past BRAC closure rounds, the federal government has provided over $780 million in planning assistance, training, and infrastructure grants to help communities implement their redevelopment objectives; and (9) 21 percent of the 88,433 DOD civilian jobs that were lost as a result of the first three BRAC closure rounds have been replaced.
Following the September 11, 2001, terrorist attacks, DOD realized the need for a more integrated civilian and military response capability for any future attack on the United States. In response, DOD established NORTHCOM in October 2002 to provide command and control in homeland defense efforts and to coordinate defense support of civil authorities within its area of responsibility (see fig. 1). NORTHCOM's mission consists of (1) homeland defense and (2) civil support. It is important to understand the relationships between NORTHCOM's missions and homeland security. Homeland defense and homeland security are not synonymous. Homeland security is a concerted national effort to prevent terrorist attacks within the United States, reduce America's vulnerability to terrorism, and minimize the damage and recover from attacks that do occur. DHS is the primary federal agency for homeland security issues. DHS's responsibilities extend beyond terrorism to preventing, preparing for, responding to, and recovering from a wide range of major domestic disasters and other emergencies. DOD contributes to homeland security through its military missions overseas and its homeland defense and civil support operations. While the terrorism portion of homeland security is concerned with preventing terrorist attacks within the United States, DOD's concerns include responding to conventional and unconventional attacks by any adversary, including terrorists. When the President or the Secretary of Defense designates DOD as the primary federal agency for conducting military missions to defend the people or territory of the homeland, the mission is considered homeland defense. Homeland defense is the protection of U.S. territory, sovereignty, domestic population, and critical defense infrastructure against external threats and aggression. DOD activity in support of a National Response Framework primary or coordinating agency is considered civil support. Civil support is DOD support to U.S. civilian authorities, such as DHS, for domestic emergencies, both natural and man-made, and includes the use of DOD personnel—federal military forces and DOD's career civilian and contractor personnel—and DOD agency and component resources. Because these missions are complex and interrelated, they require significant interagency coordination.

To carry out its homeland defense mission, NORTHCOM is to conduct operations to deter, prevent, and defeat threats and aggression aimed at the United States. According to Joint Publication 3-27, DOD is the primary federal agency for homeland defense operations, and NORTHCOM is the combatant command responsible for commanding and coordinating a response to a homeland defense incident. In this case, the chain of command is relatively straightforward: other DOD commands and federal agencies provide support to NORTHCOM for homeland defense operations (see fig. 2). Although NORTHCOM has few forces assigned to its command, during an incident it requests forces through the Joint Staff. The Joint Staff will direct Joint Forces Command, which is DOD's joint force provider, to assign appropriate and available forces to NORTHCOM. The President may decide to federalize National Guard units in order to provide these forces.
While the states do not have an operational role in homeland defense, NORTHCOM's homeland defense mission includes protecting the territory or domestic population of the United States as well as the infrastructure or other assets determined by the Secretary of Defense to be critical to national security. In order to protect these critical assets, NORTHCOM must maintain awareness of the environment in which it may be operating, including critical infrastructure locations relevant to its operations.

NORTHCOM's second mission is civil support, or defense support of civil authorities. Civil support missions include domestic disaster relief operations for incidents such as fires, hurricanes, floods, and earthquakes. Such support also includes counterdrug operations and management of the consequences of a terrorist incident employing a weapon of mass destruction. DOD is not the primary federal agency for such missions (unless so designated by the President) and thus provides defense support of civil authorities only when (1) state, local, and other federal resources are overwhelmed or unique military capabilities are required; (2) assistance is requested by the primary federal agency; and (3) NORTHCOM is directed to do so by the President or the Secretary of Defense.

Civil support is based on a tiered response to an incident; that is, incidents must be managed at the lowest jurisdictional levels and supported by additional response capabilities when needed (see fig. 3). Local and county governments respond to emergencies daily using their own resources and rely on mutual aid agreements and other types of assistance agreements with neighboring governments when they need additional resources. For example, county and local authorities are likely to have the resources needed to adequately respond to a small-scale incident, such as a local flood, and therefore will not request additional resources. For larger-scale incidents, when resources are overwhelmed, local and county governments will request assistance from the state. States have capabilities, such as the National Guard, that can help communities respond and recover. If additional resources are required, the state may request assistance from other states through interstate mutual aid agreements, such as the Emergency Management Assistance Compact (EMAC). If an incident is beyond community and state capabilities, the governor can seek federal assistance.

The federal government has a wide array of capabilities and resources that can be made available to assist state and local agencies in responding to incidents. Overall coordination of federal incident management activities, other than those conducted for homeland defense, is generally the responsibility of DHS. Within DHS and as an executive agent for the National Preparedness System, FEMA is responsible for coordinating and integrating the preparedness of federal, state, local, tribal, and nongovernmental entities. In accordance with the National Response Framework and applicable laws, including the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), various federal departments or agencies may play primary, coordinating, or supporting roles, based on their authorities and resources and the nature of the threat or incident. In some instances, national defense assets may be needed to assist FEMA or another agency in the national response to an incident. Defense resources are committed after approval by the Secretary of Defense or at the direction of the President.
When deciding to commit defense resources, officials consider military readiness, the appropriateness of the circumstances, and whether the response is in accordance with the law. For example, under the Posse Comitatus Act, military forces may provide civil support, but they generally cannot become directly involved in law enforcement. When it is determined that defense assistance is appropriate and is requested by FEMA, NORTHCOM is responsible for leading DOD's response. In the same manner as is applicable to homeland defense, NORTHCOM generally operates through established joint task forces that are subordinate to the command. In most cases, support will be localized, limited, and specific. When the scope of the disaster is reduced to the point where the primary federal agency can again assume full control and management without military assistance, NORTHCOM will exit.

In order to prepare for its homeland defense and civil support missions, NORTHCOM has developed plans based on various incident scenarios, including 14 of DHS's 15 national planning scenarios. NORTHCOM develops contingency plans to outline its role in potential disaster situations. NORTHCOM currently develops strategic-level concept plans rather than more detailed operational plans, because the potential threats that it is planning for are varied and nonspecific, ranging from terrorist threats to hurricanes and wildfires. NORTHCOM uses the adaptive planning process for developing its plans—that is, the joint capability to create and revise plans rapidly and systematically, as circumstances require. Interagency coordination is a key part of the plan development process in adaptive planning.

In each state, the National Guard plays a crucial role in preparing for both homeland defense and defense support of civil authorities, in its dual roles as a national reserve force for the Army and Air Force and as a state militia. As the only military force shared by the states and the federal government, the National Guard provides a natural and effective bridge for collaboration between NORTHCOM and key state partners. The National Guard's federal mission is to provide trained units available for active duty in the armed forces in time of war or national emergency and at such other times as national security may require. NORTHCOM is responsible for the planning, exercising, and command and control of the National Guard for its federal missions conducted under the command and control of the President within its area of responsibility. As a state militia, the National Guard of each state responds to state emergencies, including natural disasters, civil disturbances, and acts of terrorism, and provides support to law enforcement in the war on drugs, under the command and control of the state governor. The governor commands the National Guard through the state's adjutant general (TAG), who heads the state's joint force headquarters. According to National Guard Bureau (NGB) officials, the state joint force headquarters' mission is to maintain trained and equipped National Guard forces and to provide expertise and situational awareness to facilitate the integration of federal and state activities. NGB is a joint activity of DOD, with unique statutory, regulatory, and policy-based responsibilities and authorities, including serving as the official channel of communications between the Departments of the Army and the Air Force and the states on National Guard matters.
NGB administers DOD, Department of the Army, and Department of the Air Force policies, programs, and plans pertaining to National Guard matters and facilitates the integration of federal and state activities, including mutual support among the states. Although NGB does not command or control forces, it assists the states in the organization, maintenance, and operation of their Army National Guard and Air National Guard units and coordinates the movement of nonfederalized National Guard forces. NGB also maintains and provides information on National Guard matters affecting homeland defense and civil support to the Office of the Secretary of Defense; the combatant commands, including NORTHCOM; and others. During civil support missions, NGB provides policy guidance and facilitates National Guard assistance to the TAGs. Because of their interrelated missions, coordination between NORTHCOM and NGB is critical in planning for homeland defense and civil support.

In analyzing the survey results and in meetings with NORTHCOM and NGB officials, we found that NORTHCOM has ongoing efforts to improve coordination with the states and NGB in planning for its missions and in responding to requests for civil support. As part of NORTHCOM's strategic vision, its goal is to facilitate the synchronization of national, state, and local assets and capabilities to defend the nation and support civilian authorities. We identified six areas in which NORTHCOM has ongoing efforts to improve coordination with the states and NGB, ranging from including states in its exercises to a new state engagement strategy for reaching out directly to state leaders. Some of these efforts are intended to help NORTHCOM plan for both of its missions, while others are intended to improve how it responds to requests for civil support.

NORTHCOM conducts or participates in training exercises to improve planning for its missions and responses to requests for civil support. The command conducts two large-scale exercises—Ardent Sentry and Vigilant Shield—and participates in over 30 smaller regional, state, and local exercises annually to help potential responders prepare for man-made and natural disasters. Ardent Sentry and Vigilant Shield alternate between emphasizing the homeland defense mission and the civil support mission; each event exercises one of the key missions while including elements of the other. Practicing and training for emergency responses together not only helps to identify problem areas and lessons learned but also helps state responders build relationships with NORTHCOM and improve coordination. One TAG told us that he did not have any communications with NORTHCOM prior to his state's participation in Ardent Sentry but has since developed a close working relationship with NORTHCOM officials. Table 1 shows the percentage of states participating in Ardent Sentry, Vigilant Shield, or other events, according to our survey of TAGs. NORTHCOM's Training and Exercise Directorate continues to work with state and National Guard entities to plan and conduct exercises and to develop a robust Vigilant Guard regional exercise program.

NORTHCOM has been informally including NGB in reviewing its plans, in the early stages, during concept development workshops, and during final coordination. NGB officials told us that regularly scheduled conferences between the planning directorates at NORTHCOM and NGB have greatly enhanced coordination over the past few months.
In addition, NGB officials confirmed that NORTHCOM has been routinely providing its draft plans to them for review and comment to obtain the National Guard's perspectives. This review process will be formalized in the next few months, when NGB becomes an officially recognized member of the Joint Planning and Execution Community (JPEC). JPEC coordinates DOD efforts and ensures unity in the planning and execution of joint operations; it includes the Chairman of the Joint Chiefs of Staff, the services, the combatant commands and their component commands, sub-unified commands, joint task forces, and defense agencies.

NORTHCOM has also established a Joint Force Orientation Program to improve the states' knowledge of NORTHCOM's role in responding to requests for civil support. Several TAGs reported that their own and their staffs' involvement in this program enhanced coordination with NORTHCOM. The primary objectives of this program are to facilitate a mutual understanding of joint operational concepts and information sharing between NORTHCOM and the states, to help clarify NORTHCOM's supporting role to the states, and to improve overall coordination. At the time of our audit, the Joint Force Orientation Program was organized into three phases. The first phase includes an overall command briefing and individual briefings from each NORTHCOM directorate, including discussions about NORTHCOM's training and exercise programs. The second phase provides more in-depth information about NORTHCOM, NGB, and the Joint Forces Command; it generally covers warfighter doctrine and operational application, including joint concepts and terminology, the joint operational environment, command relationships, joint planning, and joint logistics. The third phase is designed to be state specific; in it, the process for requesting federal assistance is reviewed, and issues—such as intelligence sharing and oversight, mobile communications, planning, and logistics—are discussed in more detail. NORTHCOM provided us with the most recent participation data, and we found that all of the states within NORTHCOM's area of responsibility have received the first phase of Joint Force Orientation Program training, 46 states have received phase 2 training, and 19 states have received phase 3 training. NORTHCOM told us that it is working toward providing phase 3 training to the remaining states. The willingness of the TAGs to participate and to send their staffs to this training shows how useful they find the information exchanged and the relationships developed with NORTHCOM.

Other ongoing efforts include NORTHCOM's weekly teleconferences throughout the hurricane season to coordinate with local, state, and federal partners and discuss potential storms; available resources, including EMAC; and potential needs or unique capabilities that DOD may be asked to provide. For instance, if a hurricane is projected to affect the mid-Atlantic states, officials in those states may inquire about resources potentially needed—such as helicopters, trucks, or other equipment—in advance of the incident, thereby helping affected states to plan their responses more effectively. Similarly, NORTHCOM monitors wildfire activity and sets up teleconferences with the National Interagency Fire Center—which includes state emergency response officials—if it appears NORTHCOM may need to assist in fighting the fires.
As a result of this frequent interaction, NORTHCOM has begun to build more productive and effective relationships with the participating states and agencies.

As part of the lessons learned from Hurricane Katrina, NORTHCOM has placed a defense coordinating officer (DCO) in each of FEMA's 10 regional offices and placed greater emphasis on the DCOs' mission (see fig. 4). DCOs are senior-level military officers with joint experience and training on the National Response Framework, defense support of civil authorities, and DHS's National Incident Management System. They are responsible for assisting civil authorities when requested by FEMA, providing liaison support and requirements validation, and serving as single points of contact for state, local, and other federal authorities that need DOD support. DCOs work closely with federal, state, and local officials to determine what unique DOD capabilities can be used to help mitigate the effects of a natural or man-made disaster. According to TAGs and FEMA and NORTHCOM officials, placing DCOs in all of the FEMA regional offices and emphasizing the DCOs' mission has improved NORTHCOM's relationships and coordination with state and local officials, as well as with FEMA, in day-to-day planning and when an incident occurs. For example, in response to FEMA's request during the California wildfires in October 2007, NORTHCOM's subordinate command, Army Forces North, deployed the Region 9 DCO to support the Joint Field Office in Pasadena, California, and to assess and coordinate defense support of civil authorities with FEMA. Based on the requirements identified by state and federal officials in consultation with the DCO, DOD and the National Guard deployed six aircraft equipped with the Modular Airborne Fire Fighting System to California to assist in fighting wildfires.

While NORTHCOM relies on NGB as its channel of communications for National Guard matters, NORTHCOM's Commander believes that developing relationships directly with states will contribute to success in saving lives, protecting infrastructure, and promoting a resilient society. NORTHCOM is currently developing a state engagement strategy to build relationships with appropriate state leadership, including governors, TAGs, state homeland security advisors, and emergency managers of major metropolitan areas. As part of this strategy, NORTHCOM's Commander has personally met with several state governors and TAGs to discuss NORTHCOM's roles and missions and to determine how they can coordinate when responding to an incident. For example, the Commander met with TAGs from the northeast region in November 2007 to discuss both military coordination and interagency coordination for regional domestic operations. The draft strategy also recognizes the importance of NORTHCOM working with the states in close coordination with organizations such as NGB and DHS/FEMA, which are responsible for coordinating with the states regarding federal matters related to incident management. NGB officials told us that working with the states will provide NORTHCOM with a greater appreciation for the role and authority of the governors and sensitivity to the sovereignty and rights of the states. While the strategy is designed to build the relationships needed for national planning and execution, it does not include established and thorough processes for involving states in the development of NORTHCOM's plans, obtaining state emergency response plans, or facilitating integrated intergovernmental planning.
We identified three areas in which there are gaps in coordination with the states and NGB. First, NORTHCOM officials involve the states only minimally in the development of NORTHCOM's major homeland defense and civil support plans, and they are not required to do so. Second, NORTHCOM generally was not familiar with state emergency response plans and capabilities and has no established and thorough process for gaining access to this information. Third, a 2005 memorandum of agreement, which is intended to provide the procedures by which NORTHCOM and NGB interact, does not clearly define each agency's roles and responsibilities for planning for homeland defense and civil support. Improvements in these areas may help to effectively align NORTHCOM's efforts with other national efforts, as required by the new annex to Homeland Security Presidential Directive 8 on national planning; help NORTHCOM to manage its overall risk; and better ensure that it will be able to respond fully when called upon to perform either of its missions.

Although the majority of TAGs are familiar to varying degrees with NORTHCOM's homeland defense and defense support of civil authorities plans (see table 2), in our survey fewer than 25 percent reported that they were involved in developing and reviewing these plans (see table 3). NORTHCOM is not specifically required by DOD to involve states in the development and review of its homeland defense and support of civil authorities plans. However, its strategic vision, set forth in its Concept of Operations, and the recent annex to Homeland Security Presidential Directive 8 emphasize that plans and capabilities should be synchronized at the national, state, and local levels. According to several TAGs, NORTHCOM should coordinate more with state and local organizations, particularly the National Guard, to develop a good planning and operational relationship and to enhance the ability of all organizations to plan and respond rapidly in a crisis. We previously reported on the need to include state and local jurisdictions in the development of response plans because they are key stakeholders and would be on the front lines if an incident occurs.

In the case of homeland defense, NORTHCOM planners told us that, as the official channel of communication for National Guard-related matters, NGB provides the states' perspectives when commenting on NORTHCOM's plans. The planners also said that further state involvement in the development of NORTHCOM's plan is not required because (1) it is a strategic-level concept plan that does not require such detail and (2) NORTHCOM is the lead during a homeland defense incident. NGB officials told us that, as requested in NORTHCOM's homeland defense plan, they have collected and reviewed states' supporting homeland defense plans and, to the extent possible, have attempted to represent these perspectives when commenting on NORTHCOM's homeland defense plan. However, an NGB planning official told us that the states have differing perspectives and that NORTHCOM could better learn about these differences by reviewing the individual state plans. In addition, while NGB provides information on National Guard capabilities, the states may have other capabilities and requirements that NORTHCOM should be aware of.
By relying only on NGB, NORTHCOM may not be able to maintain awareness of the environment in which the command may be operating, including critical infrastructure locations relevant to its operations, which is important to fully carrying out its homeland defense mission. In the case of civil support, as outlined in the National Response Framework, NORTHCOM plays a supporting role to other federal agencies and, through them, to state and local governments; NORTHCOM officials told us that they are therefore starting to reach out directly to states to obtain their perspectives and incorporate them into future revisions of NORTHCOM's defense support of civil authorities plan. In order to develop an effective civil support plan, NORTHCOM needs to know what its requirements may be. DOD recognizes that these requirements are driven both by the capability gaps of the primary federal agencies and by those of the state and local governments. In either homeland defense or civil support, increasing the current level of state involvement in the development of NORTHCOM's plans could help integrate intergovernmental planning for catastrophic incidents, enhance overall coordination, and help ensure that NORTHCOM's plans for its missions and responses to incidents are as effective as possible.

We found that NORTHCOM generally was not familiar with state emergency response plans and has not obtained detailed information on states' plans and capabilities to determine the specific challenges it may face in conducting homeland defense or civil support operations. According to our survey, 54 percent of the TAGs believe that NORTHCOM is not at all or only slightly familiar with their states' plans (see table 4). In written comments in our survey, several TAGs reported that NORTHCOM should be more familiar with state emergency response plans, should determine how best to support the states' plans, and, where appropriate, should incorporate these plans to ensure a unified effort. Developing a synchronized and coordinated planning capability at all levels of government is important for a coordinated national response to domestic incidents.

In part, NORTHCOM is not more familiar with these plans because it has no established and thorough process for coordinating with the states or for gaining access to emergency response plans, and it is not specifically required by DOD to obtain information on state emergency response plans or to determine state and local capabilities and potential resource gaps. NORTHCOM planners told us that they do not need access to state emergency response plans because they are developing strategic-level concept plans, and this level of detail would be more appropriate for tactical-level planning, such as that done by NORTHCOM's subordinate commands, like Army Forces North. However, NGB and FEMA officials told us that one of NORTHCOM's biggest challenges is its current inability to anticipate the capabilities and requirements of state and local governments during a civil support incident because of the lack of advance planning and coordination among NORTHCOM, the states, and local governments. Furthermore, NORTHCOM officials told us that the complexity of the planning involved for a large-scale disaster is such that even if states can adequately plan for the resources they will need, they do not always have adequate multistate plans to integrate the state, local, federal, and nongovernmental responses.
By not obtaining and using information on state plans and capabilities, NORTHCOM increases the risk that it will not be adequately prepared to respond to an incident with the needed resources, including the types, numbers, and timing of capabilities (trained personnel and equipment). One of NORTHCOM’s subordinate commands, JTF-CS, has been collecting state emergency response plans so that if called upon to provide assistance in a chemical, biological, radiological, nuclear, or high-yield explosive (CBRNE) incident, its Commander will have as much advance information as possible regarding state plans, resources, and potential areas where assistance may be required. JTF-CS found that some state and local governments are reluctant to share their plans because they fear that DOD will “grade” their plans or that potential capability gaps will be made public, with an accompanying political cost. A NORTHCOM official told us that there will always be some tension between the states and DOD and other federal agencies as a result of the nation’s constitutional structure. JTF-CS is therefore extremely careful about how it shares its emergency plan analyses and has made progress in gaining access to these plans through DHS. DHS has collected and assessed state emergency response plans as part of a nationwide plan review to determine the status of catastrophic planning for states and 75 of the nation’s largest urban areas. Participation in the review was a prerequisite for receipt of fiscal year 2006 DHS homeland security grant funds. The review concluded that no individual plan or resource base can fully absorb and respond to a catastrophe and that unsystematic planning and the absence of an integrated planning system are a national operational vulnerability. The annex to Homeland Security Presidential Directive 8, issued in December 2007, directs the establishment of a comprehensive approach to national planning through an integrated planning system. This system is to include, among other things, a description of the process that (1) links regional, state, local, and tribal plans, planning cycles, and processes and allows these plans to inform the development of federal plans and (2) fosters the integration of such plans and allows for state, local, and tribal capability assessments to feed into federal plans. DHS may, therefore, be one source from which NORTHCOM could obtain information on state emergency response plans and capabilities. Given its relationship with the states, NGB could also be a conduit for NORTHCOM to share its plans with states and obtain information on states’ plans and capabilities. In addition, NGB officials suggested that emergency preparedness liaison officers (EPLO) could be a potential conduit for NORTHCOM and states to share plans. EPLOs are senior reserve officers from the Army, Navy, Air Force, and Marine Corps who represent the federal military in each state and in each of the 10 FEMA regional offices. EPLOs coordinate the provision of military personnel, equipment, and supplies to support the emergency relief and cleanup efforts of civil authorities. According to NGB officials, expanding the EPLO program to include sharing plans with states would provide a closer link between NORTHCOM and the states without the sensitivity of state sovereignty issues. The defense coordinating officers (DCO), who are also located in FEMA’s 10 regional offices, could potentially serve as NORTHCOM’s points of contact for the EPLOs.
NORTHCOM has taken actions to improve the coordination of its homeland defense and civil support plans and operations with federal agencies. However, in its role either in support of other federal agencies or as the primary agency in homeland defense incidents, NORTHCOM does not have adequate information on states’ plans and capabilities. By minimally involving the states in its homeland defense and civil support plans and not becoming familiar with information on states’ emergency response plans and capabilities, NORTHCOM increases the risk that it may not be prepared with the needed resources to respond to an incident. These gaps may be attributable in part to the fact that NORTHCOM does not have an established and thorough process for cooperating and interacting with the states. One model for such a process is NORTHCOM’s security cooperation plan with Canada and Mexico, since the states are each separate governments within the federal system. For example, NORTHCOM’s cooperation plan with Canada and Mexico outlines a strategy for planning, assessing, and executing security objectives and other strategic priorities. These objectives include advancing common interests, reducing impediments to cooperation, encouraging improved capabilities and willingness to operate in coalition, and improving combined homeland defense capabilities. Without a similar kind of cooperation plan for the states within its area of responsibility, NORTHCOM cannot optimally involve the TAGs and other state or local officials in its planning activities or develop a process for obtaining and using information on state emergency plans and capabilities. Moreover, without such a cooperation plan, NORTHCOM is not likely to reduce confusion, facilitate effective planning, or enable effective and efficient responses to incidents. DHS’s National Response Framework and NORTHCOM’s Concept of Operations both emphasize that NORTHCOM should coordinate with federal, state, and local partners before, during, and after an incident. Coordination with NGB is particularly important because NGB has experience working with state and local authorities during incidents and functions as NORTHCOM’s formal link to the states. We previously reported that, as with preparing for and responding to any type of disaster, leadership roles and responsibilities must be clearly defined, effectively communicated, and well understood to facilitate rapid and effective decision making. Furthermore, we reported that without clearly defined roles and responsibilities, the potential remains for confusion and gaps or duplication by the combatant commands relative to other agencies. The National Strategy for Homeland Security also emphasizes that a lack of clarity regarding roles and responsibilities across all levels of government can lead to gaps in the national response and delay the ability to provide life-saving support when needed. In July 2005, NORTHCOM and NGB signed a memorandum of agreement outlining their command and coordination relationship. This memorandum, which is intended to provide the procedures by which the two entities interact, broadly establishes that NORTHCOM and NGB “will coordinate on policy, program and planning actions related to missions and requirements affecting the National Guard.” The memorandum further provides for the location of a small NGB office at NORTHCOM to advise NORTHCOM’s Commander regarding National Guard-related issues.
The mission of this office is to advise and assist NORTHCOM’s Commander on all matters involving the National Guard, provide a conduit to NGB leaders and staff, and promote integration of National Guard priorities and capabilities into NORTHCOM’s plans and operations. The staff members of this office provide input in response to numerous requests for information from NORTHCOM. This office is not intended to serve as the only point of coordination between NORTHCOM and NGB. Officials told us that there is no formal process in place for the NORTHCOM National Guard Office to coordinate with NGB headquarters. Such a process could improve coordination between the NGB liaison office and NGB headquarters. Our analysis of the memorandum, NORTHCOM’s Concept of Operations, the regulation describing the organization and function of NGB, and other documents showed that there is no detailed guidance on NORTHCOM’s and NGB’s roles and responsibilities for homeland defense and defense support of civil authorities. Clearly defined responsibilities help to ensure unity of effort, prevent duplication, and enable efficient use of resources. As a result of the lack of clearly defined roles and responsibilities between NORTHCOM and NGB, we found several instances in which there was confusion and duplicative or potentially wasted effort. For example, some TAGs’ survey responses indicated that, because responsibilities are not clearly defined, both NGB and NORTHCOM are requesting the same information during an incident. In addition, NORTHCOM’s homeland defense plan required NGB to collect state homeland defense plans and make them available to NORTHCOM. NGB compiled and reviewed these plans from the states and territories within NORTHCOM’s area of responsibility and made them available to NORTHCOM on its Web portal. However, NORTHCOM planning officials told us that they did not request that NGB compile these plans and that, in fact, they do not have a need for state supporting plans because such plans will not affect how NORTHCOM’s strategic-level homeland defense concept plan is written. Nevertheless, NGB spent resources collecting information that has not been used by NORTHCOM. As discussed above, we believe NORTHCOM officials should be reviewing these plans to ensure that they have sufficient awareness of the environment in which they may be operating to fully carry out the command’s homeland defense mission. In addition, we found that NGB has developed a Joint Capabilities Database that includes all National Guard capabilities and has made this database available for NORTHCOM’s use. However, NORTHCOM officials told us that rather than use the database, they prefer to rely on NGB staff to provide them with National Guard readiness and capabilities data. NGB officials also told us that they have not encouraged NORTHCOM to use the database thus far because they are still finalizing the procedure for maintaining and updating the database with information from all states and territories. The officials said that they expect to have these issues worked out within the next few months, in advance of the hurricane season, and to begin to encourage NORTHCOM to make use of the database. NGB’s goal with the database is to provide a national look at the National Guard’s capabilities. Without clearly defined lines of coordination and roles and responsibilities, federal efforts may not be used in the most effective and efficient manner.
This is increasingly important as DOD is currently developing a database of federal emergency response capabilities, including those of active and reserve DOD units and of the National Guard in each state, and as FEMA is currently developing a list of organizations and functions within DOD that may be used to provide support to civil authorities during natural or man-made disasters. Coordinating all of these efforts will be critical to ensuring the efficient use of federal resources and to reducing the risk of potential capability gaps. An NGB official told us that NORTHCOM and NGB have not revised the 2005 memorandum of agreement to more clearly define their responsibilities because they were waiting for the National Guard Empowerment Act, which was partially incorporated into the National Defense Authorization Act for Fiscal Year 2008, to be signed into law and, subsequently, for a new NGB charter to be developed and issued by the Secretary of Defense. The National Guard Empowerment Act includes provisions that may enhance the level of coordination between NORTHCOM and NGB. For example, the Secretary of Defense is required to prepare a plan coordinating the use of the National Guard and members of the armed forces on active duty when responding to an incident and to include protocols for DOD, NGB, and the governors of the states to carry out operations in coordination with one another. An NGB official told us that the process of preparing this plan will require more coordination between NORTHCOM and NGB. More important, NGB’s charter, which is currently undergoing revision based on the new act, potentially could resolve a number of ambiguities by more clearly defining the roles and responsibilities of NGB and its relationships with other agencies, such as NORTHCOM. Further, the NGB official told us that the revised charter will greatly simplify negotiations of a revised memorandum between the two agencies. Without clearly defined responsibilities for NORTHCOM and NGB, there is the potential for a lack of effective coordination between the two agencies and duplicative or wasted efforts. Clearly identifying roles and responsibilities is increasingly important because responding to a major disaster in the United States—natural or man-made—is a shared responsibility of many agencies across all levels of government and cannot be effectively accomplished by one agency. Without effective interagency coordination and planning and clearly defined roles and responsibilities, there is a risk that NORTHCOM’s, NGB’s, and other nationwide efforts to respond to an incident may be fragmented and uncoordinated, as they were in the aftermath of Hurricane Katrina. Within the federal government, there is an increasing realization that the nation needs to integrate not just the response to an incident but also the plans of the many entities at all levels involved in responding to such incidents. This planning integration will help ensure that when the federal government responds, its response will be as effective as possible. NORTHCOM’s recent efforts to coordinate with states and NGB have helped address some of the uncertainty in the homeland defense and civil support planning process and have improved NORTHCOM’s ability to coordinate in the event of an actual incident.
However, without an established and thorough process for requesting, obtaining, and using information on state emergency plans and capabilities—whether from coordination with DHS or NGB or from direct interaction with the states—NORTHCOM may be missing opportunities to better plan its missions and manage its risk in a more informed manner. Moreover, NORTHCOM may not be fully prepared to support states, resulting in ineffective planning and fragmented, uncoordinated responses to incidents. Given that NORTHCOM and NGB both have increasingly important responsibilities for homeland defense and defense support of civil authorities, it is imperative that these entities work together to effectively prepare for DOD’s response to an incident. Without fully and clearly defined responsibilities for NORTHCOM and NGB, confusion and duplicative or potentially wasted efforts may result, causing an inefficient use of DOD resources during a time of increased military operations and a growing fiscal imbalance. Further, without clear guidance on their responsibilities, the risk increases that these agencies’ responses to an incident may be ineffective and inefficient, potentially increasing response time and risking the safety of the U.S. population and infrastructure. To improve NORTHCOM’s coordination with the states, we recommend that the Secretary of Defense direct NORTHCOM to develop an established and thorough process to guide its coordination with the states, including provisions for involving the states in NORTHCOM’s planning processes, obtaining information on state emergency response plans and capabilities, and using such information to improve the development and execution of its concept plans. To improve NORTHCOM’s coordination with NGB, we recommend that the Secretary of Defense direct NORTHCOM and NGB to revise the memorandum of agreement or develop an alternate document to include fully and clearly defined roles and responsibilities for NORTHCOM, NGB, and the NORTHCOM National Guard Office. In comments on a draft of this report, DOD generally agreed with the intent of our recommendations and discussed steps it is taking and planning to take to address the recommendations. DOD and FEMA also provided technical comments, which we have incorporated into the report where appropriate. In response to our recommendation that NORTHCOM develop an established and thorough process to guide its coordination with the states, DOD agreed that such a process should be developed to guide the coordination among local, state, and federal governments. Homeland Security Presidential Directive 8, Annex 1, requires that DHS develop an integrated planning system consisting of a synchronized system of plans that integrates federal, state, and local operational capabilities to effect a coordinated national response. DOD told us that the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, the Joint Staff, NORTHCOM, and NGB are currently coordinating with DHS in developing the integrated planning system. We believe that developing this system would meet the intent of our recommendation if it provides NORTHCOM with an established and thorough process for requesting, obtaining, and using information on state emergency plans and capabilities and improves the development and execution of its concept plans, thereby helping NORTHCOM to manage its risk in a more informed manner.
DOD agreed with our recommendation that NORTHCOM and NGB revise their memorandum of agreement or develop an alternate document to include fully and clearly defined roles and responsibilities for NORTHCOM, NGB, and the NORTHCOM National Guard Office and stated that a revision to the memorandum is currently being coordinated. We believe that providing clear guidance on roles and responsibilities will help to ensure that these agencies’ responses to an incident will be effective and efficient, potentially reducing response time and enhancing the safety of the U.S. population and infrastructure. DOD’s written comments are reprinted in appendix III. We are sending copies of this report to the Secretary of Defense and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In conducting this review, we focused our scope on U.S. Northern Command’s (NORTHCOM) coordination with the states and the National Guard Bureau (NGB). We excluded NORTHCOM’s coordination with other federal agencies and organizations and nongovernmental organizations because this is addressed in a companion report. Our review focused on NORTHCOM’s coordination efforts occurring since Hurricane Katrina in 2005. In addressing our objectives, we interviewed and obtained information and related documents from officials at the following locations:
NORTHCOM Headquarters, Peterson Air Force Base, Colorado Springs, Colorado
Joint Forces Command, Norfolk, Virginia
The Office of the Secretary of Defense, Washington, D.C.
The Joint Staff, Washington, D.C.
Joint Task Force-Civil Support, Fort Monroe, Virginia
U.S. Army North, Fort Sam Houston, San Antonio, Texas
Joint Force Headquarters National Capital Region, Fort McNair, Washington, D.C.
NGB, Arlington, Virginia
Department of Homeland Security (DHS), Washington, D.C.
U.S. Coast Guard Headquarters, Washington, D.C.
Federal Emergency Management Agency (FEMA), Washington, D.C.
We also conducted semistructured telephone interviews with the state adjutants general, also known as TAGs, from Florida, Indiana, Nebraska, and Washington. To determine the extent to which NORTHCOM is coordinating with the states, we surveyed the TAGs who are within NORTHCOM’s area of responsibility. We asked respondents about their familiarity with and involvement in NORTHCOM’s homeland defense plan and the defense support of civil authorities plan. We also asked about their experiences in working and communicating with NORTHCOM, including their participation in NORTHCOM exercises and involvement of NORTHCOM in their state exercises. The questionnaire and survey responses can be found in appendix I. We sent a questionnaire to the TAGs of all 49 states in NORTHCOM’s area of responsibility and the District of Columbia. The self-administered electronic survey was sent via e-mail to the TAGs and their chiefs of staff. More specifically, we sent the questionnaire as an attached Microsoft Word form that respondents could return electronically after marking checkboxes or entering narrative responses into open answer boxes.
Alternatively, respondents could return it by mail after printing the form and completing it by hand. We sent the original electronic questionnaire on April 4, 2007. We sent reminder e-mail messages, with replacement surveys, at different time intervals to all nonrespondents in order to encourage a higher response rate. In addition, we made several courtesy telephone calls to nonrespondents to encourage them to complete the survey. All questionnaires were returned by September 19, 2007. We achieved a 100 percent response rate. The survey was not a sample survey because it included the universe of respondents. Therefore, the survey has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how respondents interpret a particular question, the sources of information available to respondents, or errors in entering or analyzing the data can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the data to minimize such nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff who had subject matter expertise. In addition to an internal expert technical review by GAO’s Survey Coordination Group, we pretested the survey with two TAGs by telephone to ensure that the questions were relevant, clearly stated, and easy to understand. We made changes to the content and format of the questionnaire after the pretests based on the feedback we received. Since there were relatively few changes and we were surveying the universe of respondents, we did not find it necessary to conduct additional pretests. When we analyzed the data, an independent analyst checked all computer programs. All data were double keyed during the data-entry process, and GAO staff verified a sample of the resulting data to ensure accuracy. In addition to analyzing the frequency and distribution of marked checkbox survey responses, we also analyzed the open-ended narrative survey responses for trends and recurring themes. For instance, although we did not directly ask a question about the defense coordinating officers (DCO) now located in each FEMA region, the DCOs were cited several times by TAGs as improving their communications with NORTHCOM. When the TAGs were not in agreement or had different perspectives on issues, we also summarized conflicting responses to illustrate the complexity of NORTHCOM’s unique relationship with the states and any ongoing efforts to resolve these issues. For example, some TAGs believed that NGB should be the state’s primary channel of communication with NORTHCOM, but others disagreed. To determine the extent to which NORTHCOM is coordinating with the states and NGB, we reviewed plans, guidance, and other documents, including the memorandum of agreement between NORTHCOM and NGB. In addition, we conducted semistructured interviews with officials from NORTHCOM and several of its subordinate commands, including the Joint Task Force-Civil Support, Joint Force Headquarters National Capital Region, and Army Forces North, as well as officials from NGB headquarters and the NORTHCOM National Guard Office. We also conducted interviews with officials from FEMA and DHS’s interagency Incident Management Planning Team.
Additionally, we observed a major exercise (Ardent Sentry/Northern Edge) in the Indianapolis area in May 2007. We conducted our review from April 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Lorelei St. James, Assistant Director; Yecenia Camarillo; Joanna Chan; Angela Jacobs; David Keefer; Joseph Kirschbaum; Joanne Landesman; Erin Noel; Terry Richardson; and Jena Whitley made key contributions to this report.
Homeland Defense: U.S. Northern Command Has Made Progress but Needs to Address Force Allocation, Readiness Tracking Gaps, and Other Issues. GAO-08-251. Washington, D.C.: April 16, 2008.
Homeland Security: DHS Improved its Risk-Based Grant Programs’ Allocation and Management Methods, But Measuring Programs’ Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008.
Department of Homeland Security: Progress Made in Implementation of Management and Mission Functions, but More Work Remains. GAO-08-457T. Washington, D.C.: February 13, 2008.
Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007.
Homeland Security: Preliminary Information on Federal Actions to Address Challenges Faced by State and Local Information Fusion Centers. GAO-07-1241T. Washington, D.C.: September 27, 2007.
Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007.
Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.
Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007.
Chemical and Biological Defense: Management Actions Are Needed to Close the Gap between Army Chemical Unit Preparedness and Stated National Priorities. GAO-07-143. Washington, D.C.: January 19, 2007.
Reserve Forces: Army National Guard and Army Reserve Readiness for 21st Century Challenges. GAO-06-1109T. Washington, D.C.: September 21, 2006.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006.
Homeland Defense: National Guard Bureau Needs to Clarify Civil Support Teams’ Mission and Address Management Challenges. GAO-06-498. Washington, D.C.: May 31, 2006.
Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-808T. Washington, D.C.: May 25, 2006.
Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military’s Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006.
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Reserve Forces: Army National Guard’s Role, Organization, and Equipment Need to be Reexamined. GAO-06-170T. Washington, D.C.: October 20, 2005.
Homeland Security: DHS’ Efforts to Enhance First Responders’ All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005.
Reserve Forces: Actions Needed to Better Prepare the National Guard for Future Overseas and Domestic Missions. GAO-05-21. Washington, D.C.: November 10, 2004.
Reserve Forces: Observations on Recent National Guard Use in Overseas and Homeland Missions and Future Challenges. GAO-04-670T. Washington, D.C.: April 29, 2004.
Homeland Security: Selected Recommendations from Congressionally Chartered Commissions. GAO-04-591. Washington, D.C.: March 31, 2004.
Homeland Defense: DOD Needs to Assess the Structure of U.S. Forces for Domestic Military Missions. GAO-03-670. Washington, D.C.: July 11, 2003.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
In 2002, the Department of Defense (DOD) established U.S. Northern Command (NORTHCOM) to conduct homeland defense and civil support missions on U.S. soil. It is particularly important that NORTHCOM coordinate with the National Guard Bureau (NGB), because NGB has experience dealing with state and local authorities during incidents and functions as NORTHCOM's formal link to the states. GAO was asked to (1) determine the extent to which NORTHCOM has ongoing efforts to coordinate with the states and NGB in planning, exercises, and other preparedness activities and (2) identify the extent to which there are any gaps in this coordination. To do this, GAO surveyed the state adjutants general, the highest-ranking guardsman in each state (receiving a 100 percent response rate), and reviewed interagency coordination plans and guidance. NORTHCOM has several ongoing efforts to improve coordination with the states and NGB in planning for its missions and responding to requests for civil support. For example, during hurricane season NORTHCOM facilitates weekly conferences with the relevant local, state, and federal emergency management officials, through which it has begun to build more productive relationships. NORTHCOM also conducted two large-scale exercises and participated in over 25 smaller regional, state, and local exercises annually to help responders prepare for man-made and natural disasters. In addition, NORTHCOM has been informally including NGB in reviewing its plans. GAO identified gaps in coordination between NORTHCOM, the states, and NGB in three areas: (1) NORTHCOM minimally involved the states in the development of its homeland defense and civil support plans. Fewer than 25 percent of the state adjutants general reported that they were involved in developing and reviewing these plans. For civil support, NORTHCOM officials told GAO that they are reaching out directly to states to better understand states' plans and capabilities, but for homeland defense, they rely on NGB to provide states' perspectives. (2) NORTHCOM was not familiar with state emergency response plans and had no process for obtaining this information. Fifty-four percent of the state adjutants general reported that they believed that NORTHCOM was not at all or only slightly familiar with their states' emergency response plans. This may be attributable, in part, to the fact that NORTHCOM does not have an established and thorough process for cooperating and interacting with the states. By not obtaining and using information on states' plans and capabilities, NORTHCOM increases the risk that it will not be prepared to respond to an incident with the needed resources to support civil authorities. (3) A 2005 agreement, which is intended to provide the procedures by which NORTHCOM and NGB interact, does not fully or clearly define each agency's roles and responsibilities for planning for homeland defense and civil support. The lack of clearly defined roles and responsibilities has resulted in confusion and duplicative or wasted efforts. For example, as required in NORTHCOM's homeland defense plan, NGB compiled the states' homeland defense plans and made them available to NORTHCOM; however, NORTHCOM planners told GAO that they neither requested nor needed access to this information. Without clearly defined roles and responsibilities, there is a risk that NORTHCOM's and NGB's responses to an event could be fragmented and uncoordinated.
Addressing these gaps could help integrate intergovernmental planning for catastrophic incidents, enhance overall coordination, and help ensure that NORTHCOM's plans for its missions and responses to incidents are as effective as possible.
GPOs are organizations that act as purchasing intermediaries, negotiating contracts between health care providers and vendors of medical products and services, including manufacturers, distributors, and other suppliers. The intent of GPOs is to save their customers money by pooling customers’ purchases in order to obtain lower prices and by taking on the administrative burden of negotiating contracts with vendors. Through GPO-negotiated contracts, health care providers can purchase products from vendors, including medical devices, commodities, branded drugs, and generic drugs, as well as services, such as laundry and food services. The Healthcare Supply Chain Association (HSCA)—a trade association representing 14 healthcare GPOs—estimates that U.S. hospitals use, on average, 2 to 4 GPOs per facility, and that nearly every hospital in the United States—approximately 96 percent to 98 percent—purchases through GPO contracts. According to HSCA, the first GPO was established in 1910 by the Hospital Bureau of New York, and by the 1980s, there were more than 100 GPOs. While over 600 GPOs in various markets are currently active in the United States, a relatively small number of GPOs dominate the healthcare market for products and services sold through GPO contracts. According to HSCA, GPOs vary in size, type of ownership, and the contracting services they offer their customers. For example, some GPOs are owned by hospitals, while others are not; some operate nationally, while others operate regionally to negotiate contracts with local vendors; some serve not-for-profit hospitals, others serve for-profit hospitals, and some serve both; and some offer a broad portfolio of products and services, while others focus on specific product categories or certain types of health care, such as long-term care. In recent years, the GPO market has become more consolidated as some large GPOs have merged. The five largest national GPOs have reported contracting for a similar, broad portfolio of products, including, for example, commodities such as cotton balls and bandages, devices such as pacemakers and stents, and branded and generic drugs. During fiscal year 2012, the five largest GPOs by purchasing volume reported a total purchasing volume of $130.7 billion. During the contracting process for products and services, GPOs negotiate the payment of administrative fees by the vendor to the GPO. In addition to using these administrative fees to cover operating expenses, GPOs may distribute a portion of the fees to their health care provider customers or use them to finance other ventures, such as investing in other companies. GPOs may also use administrative fees to fund additional services outside of group purchasing for their customers, which can include custom contracting; services related to product evaluation, such as clinical evaluation and standardization of products; assessments of new technology; benchmarking data services; and marketing and insurance services. (See fig. 1.) HHS’s Office of Inspector General (HHS-OIG) is responsible for enforcing the Anti-Kickback statute. The Anti-Kickback statute, originally enacted in 1972 and amended over the years, generally prohibits the knowing or willful receipt or payment of fees to induce or reward the purchase of an item or service for which payment may be made under a federal health care program.
According to HHS-OIG, the main purpose of the Anti-Kickback statute is to protect patients and federal health care programs, including Medicare, from fraud and abuse by curtailing the corrupting influence of money on health care decisions. In 1986, Congress added a “safe harbor” provision to the Anti-Kickback statute to allow for fees paid by vendors to a GPO. In addition, in 1991, HHS-OIG issued a regulation establishing the requirements that GPOs must meet in order to qualify for safe harbor protection under the Anti-Kickback statute. Under the regulation, a GPO must (1) have a written agreement with its customers either stating that the contract administrative fees are to be 3 percent or less of the purchase price, or specifying the amount or maximum amount that each vendor will pay; and (2) disclose in writing to each customer, at least annually, and to the Secretary of HHS upon request, the amount of contract administrative fees received from each vendor with respect to purchases made by or on behalf of the customer. The GPO safe harbor statutory provision and regulation do not require HHS-OIG to routinely review or monitor GPO written agreements and disclosures. However, HHS-OIG has the authority to investigate potential violations of the Anti-Kickback statute. HHS-OIG also has the authority to impose administrative penalties on GPOs that violate the statute, including civil money penalties and exclusion from federal health care programs. HHS-OIG also may refer such violations to DOJ, which in turn may bring criminal and civil actions against GPOs that it determines to have violated the Anti-Kickback statute. HHS-OIG does not have general oversight authority over GPOs because GPOs do not directly participate in Medicare and, therefore, do not enter provider agreements with the Centers for Medicare & Medicaid Services (CMS)—a component of HHS. In 2012, we found that, according to officials from HHS-OIG, the office had not routinely exercised its authority to request and review disclosures related to GPOs’ administrative fees, but it had collected information on GPOs’ administrative fees while conducting audits of hospitals’ cost reports. The provision and receipt of discounts, rebates, and net revenue distributions by GPOs to hospitals are protected from prosecution under the Anti-Kickback statute by another provision—known as the “discount safe harbor.” Specifically, a discount or other reduction in price obtained by a Medicare or Medicaid provider is protected from prosecution if the reduction in price is properly disclosed and appropriately reflected in the provider’s Medicare, or applicable state Medicaid, cost report. HHS-OIG conducted two audits in 2005 in which it reviewed the administrative fees that six national GPOs received from vendors and how selected customers of the GPOs accounted for revenue distributions from the GPOs on their Medicare cost reports. The cost reports are used, in part, to set hospital payment rates for Medicare. HHS-OIG found that some of the GPO customers did not fully account for revenue distributions from the GPOs on their Medicare cost reports. HHS-OIG recommended that CMS provide specific guidance on the proper treatment of revenue distributions received from GPOs on Medicare cost reports. In December 2011, CMS issued an update to its provider manual specifying that these distributions must be properly accounted for on the cost reports. DOJ and FTC are responsible for enforcing federal antitrust laws, which GPOs are required to follow.
The agencies may investigate a GPO’s potential violation of federal antitrust laws, identified either through a complaint filed with the agencies, through notification of a merger, or through information obtained through the agencies’ own efforts. The agencies have the authority to resolve violations in a number of ways, ranging from compliance under a consent order, to an administrative complaint, to filing a criminal or civil suit. In addition to its antitrust enforcement authority, DOJ also has the authority to bring criminal and civil actions against GPOs that it determines to have violated the Anti-Kickback statute. The Sherman Act is enforced by DOJ and prohibits restraints of trade and monopolization. See 15 U.S.C. §§ 1-7. The Federal Trade Commission Act, enforced by FTC, bans unfair methods of competition and unfair or deceptive acts or practices. See 15 U.S.C. §§ 41-58. The Clayton Act, jointly enforced by DOJ and FTC, regulates mergers and acquisitions, among other things, and gives DOJ and FTC, under the Hart-Scott-Rodino Amendments to the Clayton Act, the authority to review certain proposed mergers before they occur. See 15 U.S.C. §§ 12-27. In 2012, we found that DOJ and FTC had investigated complaints against GPOs. We identified one lawsuit filed by DOJ against a GPO, while FTC officials told us the agency had not taken any enforcement action against a GPO since 2004. Officials said that while FTC has investigated GPOs to determine whether their behavior was anticompetitive, the agency has not brought any cases to court or issued any consent orders. An FTC official told us that in order to take enforcement action against a GPO, FTC would need to determine that the GPO violated the law and that an enforcement action was in the public interest. According to the GPOs in our review, GPO contracting generally involves three phases: (1) issuing requests for proposals (RFP) or invitations for vendors to competitively bid for a contract, (2) reviewing proposals, and (3) negotiating and awarding contracts. (See fig. 2.) Issue RFPs. Representatives from all five GPOs in our review reported generally issuing RFPs as part of an open bidding process for products and services to place on contract. Issuing RFPs includes notifying vendors and publicly posting information such as bid calendars, minimum requirements for vendors, and criteria that the GPOs will weigh when considering competing proposals. All five GPOs in our review have posted on their websites information about the minimum requirements that vendors must meet. For example, one GPO’s website states that vendors must be the original equipment manufacturer or demonstrate an exclusive marketing relationship for the products included in the RFP, among other things. Another GPO specifies meeting minimum levels of product quality, durability, and cost-effectiveness, as well as requirements for the financial stability and long-term viability of the vendor. A sample RFP provided by a GPO states that during the competitive bidding process, the GPO will consider a vendor’s product capabilities, maintenance, and ability to upgrade, as well as pricing and other financial factors. Four of the five GPOs in our review reported that under certain limited circumstances, they may award contracts to vendors without issuing RFPs.
For example, these “non-bid” contracts may be awarded to vendors that present a proprietary, patented, or innovative product; if a small group of customers requests a local or regional vendor contract; or if a product supply shortage or other unique circumstances arise. The fifth GPO reported that all of its contracts are awarded through a competitive bidding process, even if there is only one bidder. A representative from one generic drug manufacturer stated that, while there is not much opportunity for innovation in the generic drug market, GPOs will award contracts outside of the three-phased competitive bidding process to vendors that have innovative packaging—such as flip-top vials versus a pre-mixed bag—if it benefits their customers. A representative from this manufacturer stated that GPO contracts with vendors generally contain provisions giving the GPOs the right to add other vendors of the same product if those vendors offer innovative packaging. Review proposals. All five GPOs in our review reported considering multiple aspects of a vendor and product when reviewing proposals, including weighing financial and nonfinancial criteria and then scoring competing vendors in order to inform their contracting decisions. For example, one GPO reported reviewing aspects such as a vendor’s ability to provide sufficient product to its customers, any documentation of concerns raised by Food and Drug Administration (FDA) inspections, quality and safety of the products, the source of raw materials, and bar code readability. A representative from another GPO said that the GPO considers the “total value” of a product or service for its customers, not necessarily solely the price. The total value includes, for example, product quality, upfront price, discounts, rebates, and anticipated administrative fee revenue. This representative said that in certain situations, such as when there are multiple possible suppliers of a product, a GPO customer would not necessarily want to purchase the product with the lowest price. Negotiate and award contracts. GPOs reported negotiating and awarding different types of contracts to vendors in different situations. All five of the GPOs in our review reported that the majority of the contracts they negotiate are either dual-source or multi-source, meaning that the majority of the products sold through their contracts have more than one vendor available on the GPOs’ contracts. In addition, all five GPOs reported that they did not bundle unrelated products and that most of the contracts they awarded in 2012 had 3-year terms. All five GPOs also reported including provisions in some contracts—referred to as commitment provisions—in which customers that purchase a certain percentage of product volume receive a rebate or reduced price. For example, a vendor might offer greater discounts to GPO customers that purchase at least 80 percent of a certain group of products from that manufacturer. Commitment requirements can also be tiered, giving a customer the opportunity to commit to different percentages of purchasing volume: the higher the percentage, the lower the price. Representatives from all five GPOs also reported that, in certain situations, they negotiated sole-source contracts, contracts that bundled related products, and long-term contracts of 5 years or more. All five GPOs in our review reported that their contracting practices have not changed much over time.
Sole-source contracts: All five GPOs reported that they do negotiate sole-source contracts when it is advantageous to their customers, though some GPOs reported negotiating a higher proportion of sole-source contracts than others. One GPO said that about 18 percent of its customers’ spending through the GPO is through sole-source contracts. Three GPOs reported sole-source contracting for branded drugs and commodities, and four GPOs reported sole-source contracting for generic drugs, including generic injectable drugs. For example, one GPO reported that in 2012 it had sole-source contracts in effect for generic drugs, including an oncology drug (oxaliplatin) and an antiviral (acyclovir). Representatives from this GPO reported taking a vendor’s performance and supply capacity into consideration when determining whether to sole-source contract with a vendor. For example, the representatives stated that the GPO no longer awards sole-source contracts to a vendor that had failed to comply with FDA standards. Representatives from one vendor stated that, as a result of recent drug shortages, some GPOs have adopted a philosophy of contracting with as many vendors as possible to ensure a continuous supply for their customers, but that other GPOs choose to contract with a limited number of vendors and hold those vendors accountable for supplying their customers. Contracts that bundle related products: Representatives from all five GPOs in our review reported negotiating contracts that offer discounts based on the purchase of bundled products, but restricting bundling to products that are used together or are otherwise related in order to create efficiencies and help standardize products for their customers. Several GPOs reported bundling related commodities, and one GPO reported bundling related branded pharmaceuticals. Representatives from one GPO stated that the GPO bundles related products in the same product category, such as intravenous (IV) sets and solutions, diapers and underpads for incontinence care, and mobility aids such as walkers, crutches, and canes. Representatives from another GPO stated, for example, that it negotiates bundled contracts for interventional coronary products, including stents, balloons, catheters, and guide wires. In addition, another GPO reported that, in 2013, it implemented a program through which participating customers can standardize their purchases for up to 40 commodity categories in exchange for additional discounts. Long-term contracts: Representatives from all five GPOs reported awarding longer contract terms for certain types of products, such as IV systems and laboratory products. One GPO reported that its customers requested long-term contracts for IV systems because they found it difficult to switch IVs and pumps every 3 years, and one manufacturer we interviewed stated that the investment in time and money needed to train clinicians in how to use a brand of IV products makes it inconvenient and disruptive for hospitals to change these products. A representative from another GPO stated that the GPO often negotiates longer-term contracts for chemistry analyzers and the specific reagents that are used with them, and that it had recently negotiated a 7-year contract for both the analyzers and reagents together. Finally, all five GPOs in our review provide a grievance process for vendors that are not awarded contracts.
A representative of one GPO stated that, when vendors are not awarded a contract and want to know why, GPO staff debrief the vendor on how to make changes to increase its chances of being awarded a contract during the next RFP cycle. The representative stated that, after this debrief, vendors can file a formal grievance with the GPO. Another GPO posted on its website that any vendor may file a grievance within 30 days of the announcement of the contract award. The website states that the GPO will acknowledge receipt of the grievance immediately and provide a detailed response within 90 days, including the GPO’s rationale for the final decision. In addition to each GPO’s separate grievance process, the Healthcare Group Purchasing Industry Initiative (HGPII)—which GPOs formed in 2005 in order to promote best practices and public accountability among member GPOs—also has a formal grievance process that vendors may use to lodge complaints against GPOs. However, HGPII representatives told us that no complaints have been formally submitted. They explained that, while it is possible that there are no vendor complaints, they believe it is more likely that not enough vendors know about the grievance process. HGPII representatives stated that they have brought on board an in-house ethicist to review HGPII’s grievance process. The views of experts and others we interviewed on the effects of GPO contracting practices varied. For example, some experts and other stakeholders contend that GPOs’ contracting practices may result in a reduction in product innovation. Specifically, one expert said that if manufacturers believe that it is impossible to get onto a GPO contract, but that such a contract is necessary for market success, then manufacturers will not innovate and create new products. However, others we interviewed told us that GPO contracting practices do not block access to innovative products. For example, all five of the largest GPOs reported using a competitive bidding process as well as contract clauses that allow for innovative products to be placed on existing contracts. The GPOs in our review also reported participating in forums to help identify new, potentially innovative products in the marketplace. However, they said vendors of products that are essentially the same as other products already on GPO contracts need to compete through the competitive bidding process for the opportunity to be awarded a contract. While officials from FTC told us that they continue to receive complaints each year about the potential anticompetitive effects of GPO contracting practices—including complaints that GPOs have contributed to recent shortages of generic injectable drugs—FTC has not initiated any enforcement actions directed at GPO conduct in the last 10 years. FTC staff explained that they have faced significant challenges in investigating allegations of anticompetitive behavior by GPOs due to a lack of data. They stated that there are a number of significant methodological challenges related to conducting a rigorous economic analysis of the GPO industry. In addition, a DOJ official told us that the agency has not brought any actions or issued any guidance on GPOs since 2007. He also stated that DOJ has received one GPO-related complaint since 2012, when our most recent prior report was issued. The five GPOs in our review reported being predominantly funded by administrative fees collected from vendors, and experts’ views of the effects of this funding structure varied widely.
In addition, the GPO funding structure may affect Medicare payments over time. The five GPOs in our review reported being predominantly funded by administrative fees collected from vendors, which were almost always based on a percentage of the purchase price for products obtained through GPO contracts. GPOs use these fees to fund their operating expenses, including expenses related to contracting with vendors and providing additional services to their customers outside of group purchasing. On average, the five GPOs in our review reported that administrative fees collected from vendors accounted for about 92 percent of their revenue in 2012, ranging from a low of 83 percent to a high of 98 percent. In addition, these GPOs reported receiving, on average, 3.3 percent of their revenue from member fees, ranging from 0.2 percent to 12.1 percent. Member fees included, for example, fees that a GPO charged hospitals in exchange for membership in the GPO. The five GPOs also reported that revenue from outside investments accounted for, on average, 2.2 percent of their revenue in 2012. However, only two GPOs reported receiving this type of revenue, which accounted for 8.1 percent and 2.7 percent of their total revenue in 2012, respectively. This revenue included, for example, equity income from an ownership interest in another GPO. Finally, the GPOs reported receiving, on average, 0.6 percent of their revenue from other sources, ranging from 0 percent to 1.5 percent. This other revenue included, for example, vendor exhibit fees and conference fees. In addition to these sources of revenue, two of the five GPOs in our review offered private-label programs to their hospital customers in 2012. Under these programs, vendors may pay the GPOs licensing fees—which are also based on a percentage of the purchase price of products—to market their products using the GPO’s brand name. On average, the five GPOs reported that licensing fees accounted for 2.2 percent of their revenue, though only two of the GPOs in our review collected licensing fees through private-label programs in 2012. (See fig. 3.) The GPOs in our review generally reported receiving more fees from vendors in 2012 than they did in 2008. Together, all five GPOs reported collecting a total of $2.3 billion in administrative and licensing fees from vendors in 2012. This represents a 20 percent increase over the total amount of fees collected from vendors in 2008, when adjusted for inflation. One GPO reported no change in the total amount of vendor fees collected between 2008 and 2012, but did report a 15 percent increase in its percentage of revenue from outside investments. The other four GPOs reported increases in the total amount of vendor fees collected between 2008 and 2012, ranging from 13 percent to 53 percent, when adjusted for inflation. GPO representatives told us there were many reasons for the growth in the volume of fees collected, including increases in purchasing volume by customers and additional products being added to contracts. Although we requested this information for years prior to 2008, two of the five GPOs in our review reported that they were unable to provide it because they do not retain records for that long. All five GPOs in our review reported most frequently receiving administrative fees from vendors that were at or below 3 percent, although the two GPOs with private-label programs also reported receiving licensing fees, in addition to administrative fees, from vendors of products sold under the GPOs’ brand names.
All five GPOs in our review reported that the most frequent vendor fee they received in 2012 was 3 percent. In addition, all five GPOs reported average fees received in 2012, weighted by purchasing volume, of around 1 to 2 percent. This average includes fees from distributors and manufacturers. Because fees from distributors are often less than 1 percent, average fees from manufacturers are likely to be higher than the 1 to 2 percent overall average. In addition, the three GPOs without private-label programs in 2012 reported that the highest vendor fee they received that year was 3 percent. The administrative fee percentages that GPOs reported receiving in 2012 are consistent with the levels that the GPOs reported for 2008. The two GPOs with private-label programs in 2012 reported that their highest fees—9.9 and 11.12 percent—were for products sold through their private-label programs and included both an administrative fee as well as a licensing fee for the GPO to market the products to their customers. Representatives from the GPO that reported the fee of 9.9 percent stated that this was for a brand name drug with a variable fee based on the vendor’s sales volume—the vendor was willing to pay a higher fee in exchange for the GPO’s customers pre-ordering the drug. Representatives from the GPO that reported the 11.12 percent fee stated that the fee was negotiated with a vendor that supplied five generic drugs through the GPO’s private-label program. Average fee percentages, weighted by purchasing volume, that GPOs reported receiving in 2012 were generally consistent across different categories of products, but there were some small differences. For example, fees for branded drugs were generally lower than for generic drugs—average fees for branded drugs ranged from 0.86 percent to 2.08 percent, while average fees for generic drugs ranged from 1.31 percent to 3.62 percent. Four of the five GPOs reported that, of the total amount of vendor fees they received in 2012, on average, 25 percent were for commodities, 15 percent were for devices, 12 percent were for brand name drugs, and 8 percent were for generic drugs. The remaining 41 percent were for other products and services, such as capital equipment and food service. The fifth GPO in our review was unable to report information separately for devices and commodities. The literature we reviewed and the views of experts we interviewed varied widely on the effects of the GPO funding structure, specifically the reliance on vendor fees. Some of the literature we reviewed and experts we interviewed asserted that the vendor fee-based funding structure of GPOs creates misaligned incentives for them to negotiate higher prices for medical products in order to increase the amount of vendor fees that they receive. Several experts that we interviewed stated that, based on economic theory, the GPO funding structure creates a principal-agent problem, in which the GPOs are motivated to act in their own best interests, rather than the best interests of their customers. These experts argued that because the GPOs’ compensation increases as prices increase, the GPOs have little incentive to negotiate lower prices, even though their customers would benefit from lower prices. Therefore, GPOs may place greater weight on the administrative and other fees than on the prices of products and services for their customers.
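To make the fee arithmetic above concrete, the following is a minimal sketch in Python using hypothetical contract figures; the purchasing volumes and fee rates are invented for illustration and are not data reported by the GPOs in our review. It shows how a purchasing-volume-weighted average fee is computed and why, holding fee rates constant, percentage-based fee revenue rises with negotiated prices, which is the mechanism behind the incentive concern described above.

```python
# Illustrative sketch of percentage-based GPO administrative fees.
# All contract figures below are hypothetical and are not data reported
# by the GPOs in this review.

# Each tuple: (product category, annual purchasing volume in dollars, fee rate).
contracts = [
    ("branded drugs", 400_000_000, 0.015),  # 1.5 percent
    ("generic drugs", 150_000_000, 0.030),  # 3.0 percent
    ("devices",       250_000_000, 0.020),  # 2.0 percent
    ("commodities",   200_000_000, 0.010),  # 1.0 percent
]

# Administrative fees are a fixed percentage of the purchase price, so
# fee revenue grows with both purchasing volume and negotiated price.
fee_revenue = sum(volume * rate for _, volume, rate in contracts)
total_volume = sum(volume for _, volume, _ in contracts)

# Purchasing-volume-weighted average fee: the statistic the GPOs
# reported, which was around 1 to 2 percent in 2012.
weighted_avg_fee = fee_revenue / total_volume

print(f"Fee revenue: ${fee_revenue:,.0f}")              # $17,500,000
print(f"Weighted average fee: {weighted_avg_fee:.2%}")  # 1.75%

# The incentive question: if negotiated prices rise 5 percent with the
# same fee rates and unit quantities, fee revenue also rises 5 percent.
fee_revenue_higher_prices = sum(volume * 1.05 * rate
                                for _, volume, rate in contracts)
print(f"Fee revenue at 5% higher prices: ${fee_revenue_higher_prices:,.0f}")
```

This arithmetic by itself does not demonstrate behavior; whether competition among GPOs offsets the incentive is the dispute taken up next.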
According to the experts who raised this concern, this funding structure—which allows vendors to pay administrative fees to GPOs—distorts the bidding process and results in inflated prices for hospitals relative to a funding structure where these administrative fees are not allowed. Other people we interviewed—including some experts and representatives of the GPOs—stated that competition between GPOs to retain their customers incentivizes them to negotiate the lowest possible prices and mitigates any theoretical principal-agent problem. They explained that hospitals can switch GPOs at any time if they are not satisfied with the prices that a GPO is negotiating. Representatives from one hospital said that hospitals switch GPOs when they merge with larger systems, but that there are significant costs related to the conversion. Several experts reported that not only are the largest national GPOs in intense competition with each other, they also compete with purchases made directly from manufacturers, with regional GPOs, and with hospital and health system alliances. Specifically, one expert we interviewed stated that GPO customers often obtain pricing information from all possible sources and then selectively choose products and services they can obtain for the best prices. Another expert told us that the percentage-based administrative fee structure works well because GPOs are only compensated for the sales that are made. Although some experts have reported potential effects of the GPO funding structure, empirical data on the effects are limited. We identified one study that presented empirical data on the effects of the vendor-fee-based GPO funding structure. The authors of this study concluded that, if the GPO safe harbor provision were eliminated, then GPOs “would likely structure their procurement process in a way that elicited more competitive bidding, resulting in lower prices and greater competition.” In addition, the authors concluded that altering the GPO funding structure would not eliminate any efficiencies that GPOs currently offer, such as reduced transaction costs or consolidated buying power. We also found other studies that presented empirical data focused more broadly on the value of GPOs, such as studies that focused on whether GPOs save their customers money. However, these studies did not include empirical evidence that directly addressed the effects of the GPO funding structure. The GPO funding structure may affect Medicare payments over time. To the extent that the vendor-fee-based funding structure affects prices for medical products and services—either by reducing or inflating the costs of the products and services—Medicare payment rates may be affected over time through the annual update to the Prospective Payment System hospital payment rates. According to HHS, these updates rely, in part, on information reported by hospitals on their Medicare cost reports, which reflect the hospitals’ costs of medical supplies, including those purchased through GPOs. Moreover, Medicare payments could be affected if hospitals do not appropriately account for any revenues they receive from GPOs. These revenues are required to be reported as a reduction in costs on hospitals’ cost reports. All five GPOs in our review reported passing a percentage of the administrative fees—in some cases, the majority of fees collected from vendors—on to their customers or owners in 2012.
All five GPOs reported sharing with their customers or owners between 37.6 percent and 100 percent of the total administrative fees they received in 2012—a total of $1.6 billion. This represents 70 percent of the $2.3 billion in administrative fees collected in 2012. The amount distributed to customers and owners ranged from $54 million to $472 million per GPO. To the extent that administrative fee revenue is not reflected on cost reports, Medicare could be overpaying hospitals. The extent to which hospitals are reporting this additional revenue is not known because HHS-OIG has not reviewed cost reports for this information since 2005. In addition, CMS officials told us that the agency has not specifically identified this as information that should be routinely audited by Medicare Audit Contractors. Some experts that we interviewed stated that the potential effects of the GPO funding structure on Medicare payment rates could be eliminated if the GPO safe harbor were repealed and GPOs were no longer permitted to collect fees from vendors. However, experts and representatives from vendors, GPOs, and hospitals we interviewed stated that there would be a disruption to hospitals and vendors while they transitioned to a new supply chain model. Others we interviewed—including GPO representatives—told us that if the safe harbor were repealed, GPOs would eventually cease to exist because hospitals would not be able to afford to pay the fees. However, some hospitals already pay directly for access to contracts to supplement their existing contracting arrangements with their GPOs. For example, a wholly owned subsidiary of one large, national GPO charges its customers a $50,000-a-year subscription fee for access to a web-based system for viewing hospital supply prices, negotiating contracts with vendors directly, and tracking their purchases and contracts online. The company reported more than $10 billion in purchasing power from a user base of 600 hospitals in its first year. Finally, others stated that, if the safe harbor were repealed, smaller hospitals might have more difficulty adjusting and may be more likely to merge with larger hospital systems. Congress passed the GPO safe harbor provision because it believed that GPOs could help reduce health care costs by enabling hospitals to obtain volume discounts from vendors. However, the GPO funding structure protected under the safe harbor—specifically, the payment of administrative fees by vendors based on a percentage of the cost of the products or services—raises questions about whether GPOs are actually negotiating the lowest prices. Some experts believe there is an incentive for GPOs to negotiate higher prices for products and services because GPO compensation increases as prices increase. However, other experts, as well as GPOs, stated that there is sufficient competition between GPOs to mitigate any potential conflicts of interest. Almost 30 years after the safe harbor’s passage, there is little empirical evidence to definitively assess the impact of the vendor-fee-based funding structure it protects. While repealing the safe harbor could eliminate misaligned incentives, most agree there would be a disruption while hospitals and vendors transitioned to new arrangements.
Over the longer term, if the current trend of hospital consolidation continues, the concerns about these disruptions may be diminished to the extent that large hospital systems may be in a better position to pay GPOs directly for their services or negotiate contracts with vendors on their own. Furthermore, given that some hospitals are already paying a subsidiary of one GPO directly for access to vendor contracts, alternative approaches are possible. Despite the limited evidence on the impact of the vendor-fee-based funding structure protected under the safe harbor, there is a potential impact on the Medicare program. To the extent that the funding structure has the potential to affect the costs of products and services, periodic updates of Medicare’s payment rates will incorporate these costs over time. Additionally, GPOs distribute to their owners and customers—mostly hospitals—a percentage of the administrative fees they collect from vendors, in some cases the majority of such fees. Hospitals are required by federal law to account for this revenue in reports to Medicare, but that has not always occurred. In 2005, HHS-OIG found that some GPO customers did not fully account for GPO revenue distributions on their Medicare cost reports. Subsequently, CMS issued updated guidance specifying that these distributions must be properly reported, but HHS has not reviewed cost reports for this information since then. While a repeal of the safe harbor provision would require a clearer understanding of the impact of the GPO funding structure, hospitals’ potential underreporting of administrative fee revenue presents an immediate risk that can be addressed within the current GPO funding structure. To help ensure the accuracy of Medicare’s payments to hospitals, we recommend that the Secretary of the Department of Health and Human Services determine whether hospitals are appropriately reporting administrative fee revenues on their Medicare cost reports and take steps to address any under-reporting that may be found. We provided a draft of this report to HHS, FTC, and DOJ for comment. In its written response, reproduced in appendix II, HHS agreed with our recommendation and stated that it will add steps to its process for auditing hospitals’ cost reports so that contractors may review administrative fee revenues that hospitals receive from GPOs. We received technical comments from HHS, FTC, and DOJ, which we incorporated as appropriate. We also received comments on a draft of this report from the five GPOs in our review and from HSCA. Many of the comments we received were similar and included the following: Some of the GPOs and HSCA noted that they were concerned that the draft title was not consistent with the content of the report. We reconsidered this title in light of their concerns and believe the revised title—Group Purchasing Organizations: Funding Structure Has Potential Implications for Medicare Costs—addresses their concerns but is still consistent with the findings of the report. Some of the GPOs and HSCA disagreed with the draft report’s characterization that repeal of the safe harbor would cause potential short-term disruption to the supply chain, stating that there would be significant market disruption that could result in higher healthcare costs. The draft report included statements we obtained from the GPOs—as well as experts and others—on the potential impact of eliminating the safe harbor.
However, the draft report did not include a recommendation to repeal the safe harbor, noting that there is limited empirical evidence to definitively assess the impact of the vendor fee-based funding structure protected under the safe harbor. Some of the GPOs commented that the example about a subsidiary of a GPO with an alternative funding structure does not indicate that a model like this could support the entire industry if the safe harbor were repealed. The draft report only describes this as one possible example, and we added additional context to the report to clarify this point. Some of the GPOs and HSCA noted that there is currently no evidence that hospitals are not appropriately accounting for revenue received from GPOs on their cost reports and that GAO did not consider the findings of the 2005 HHS-OIG audit reports. However, we did consider the 2005 audit findings, and we added additional detail on them to the report. As noted in the draft report, the HHS-OIG recommended in 2005 that CMS provide specific guidance on the proper treatment of revenue distributions and, in 2011, CMS issued updated guidance on this issue. Since that updated guidance, HHS has not assessed whether revenues from GPOs are being appropriately accounted for. Some of the GPOs and HSCA noted that our draft report did not explain the reasons for the 20 percent increase in GPO administrative fees between 2008 and 2012. We added a statement to the report to describe the reasons why the total volume of fees may have increased, such as increased customer purchasing volume. In addition, the draft report examined changes in the percentage of fees collected, noting that these were generally consistent over this 4-year period. Some of the GPOs and HSCA stated that the draft report did not explain the full set of benefits of the GPO industry. We added some additional information to the report to more fully describe the activities and reported benefits of GPOs and how they serve hospitals or other providers. However, the scope of this report is focused on GPO contracting practices and funding structure. In a prior report, we described the services offered by GPOs, and that work is referenced in this report. (See GAO-10-738.) Some of the GPOs and HSCA commented that, in describing the literature on the GPO funding structure, we did not include a discussion of any of the independent and industry-funded studies on the impact of GPOs. As we state in the report, while we identified other studies that presented empirical data focused more broadly on the value of GPOs, these studies did not include evidence that directly addressed the effects of the GPO funding structure. In addition, some GPOs and HSCA noted that the study described in our report was funded by the Medical Device Manufacturers Association (MDMA). We added a note to the report that explains that MDMA provided funding for the author to purchase the data used in this study. Some of the GPOs raised concerns about the sample size and selection of vendors and hospitals we interviewed and stated that a broader sample of vendors and hospitals is necessary to provide a more meaningful representation of their points of view. The information we obtained from hospitals and vendors was used to provide context and examples. We added a statement to the report to note that this information was not generalizable. Some of the GPOs commented that the description of FTC complaints is incomplete. We report FTC’s comments on this matter, and this report has been reviewed by FTC.
Some GPOs commented that the draft report did not include a description of the GPO governance process or advisory board decision making. We added this information to the report. We also received technical comments from the GPOs and HSCA, which we incorporated as appropriate. As agreed with your offices, unless you announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, the Attorney General, the Chairman of the Federal Trade Commission, and appropriate congressional committees. The report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Linda T. Kohn at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix III. Our objectives were to describe (1) Group Purchasing Organization (GPO) contracting practices and the reported effects of these practices and (2) how GPOs are funded and the reported effects of this funding structure. To address these objectives, we sent a questionnaire to the five largest national GPOs by purchasing volume: Amerinet, HealthTrust Purchasing Group, MedAssets, Novation, and Premier. We asked about their contracting practices and sources of revenue, including administrative fees collected from vendors. We fielded the questionnaire from December 2013 through March 2014. One GPO did not provide answers through our web-based questionnaire. Rather, the GPO provided a separate document with answers to some of the questions, sometimes in a different format than was requested. We clarified this GPO’s responses through follow-up questions.
Through our questionnaire, we asked about each GPO’s purchasing volume by fiscal year, from 2000 through 2012; purchasing volume by category of medical product for fiscal years 2000, 2004, 2008, and 2012; average, highest, lowest, and most frequent administrative fee percentages received in fiscal year 2012, by category of medical product; total dollar amount of administrative fees received in fiscal years 2000, 2004, 2008, and 2012, by category of medical product; average, highest, lowest, and most frequent licensing fee percentages received in fiscal year 2012, by category of medical product; total dollar amount of licensing fees received in fiscal years 2000, 2004, 2008, and 2012, by category of medical product; average, highest, lowest, and most frequent fee percentages for any fee that was based on a percentage of the purchasing price of a product in fiscal year 2012, by category of medical product; total dollar amount of total fees based on a percentage of the purchasing price of a product received in fiscal years 2000, 2004, 2008, and 2012, by category of medical product; total dollar amount of total fees based on the purchasing price of a product received in fiscal years 2000 through 2012; average, highest, lowest, and most frequent administrative fee percentages received in fiscal year 2012 for generic injectable drugs; average, highest, lowest, and most frequent licensing fee percentages received in fiscal year 2012 for generic injectable drugs; total dollar amount and percentage of administrative fees shared with customers and owners in fiscal year 2012; sources of revenue in fiscal year 2012; services provided to customers in fiscal year 2012 and how those services were funded; whether the GPO awarded, or had in effect, any sole-source, bundled, non-bid, or long-term contracts with vendors; and key ways that GPOs bring value to their customers. We reported only the information that was consistently reported by most of the GPOs in our review. There were several questions that some GPOs did not answer, or did not answer completely, including, for example, the following: For questions requesting information over time, only two of the GPOs reported information for the entire time period. One GPO was able to report information back to fiscal year 2003, and two other GPOs were only able to report information back to fiscal year 2008. Representatives from both of these GPOs stated that their records retention policies prevented them from obtaining data before fiscal year 2008. For questions requesting information to be broken into multiple product categories, one GPO was unable to separately report information for medical devices and commodities. As a result, this GPO reported information for both categories combined. Another GPO reported that, for the purposes of the questionnaire, the GPO only considered cardiac and orthopedic products to be “devices.” Other products considered to be devices by the Food and Drug Administration (FDA) were included in either the commodities or “other” categories. In addition, we interviewed representatives with knowledge of GPOs. We interviewed the five largest GPOs to clarify their questionnaire responses and to discuss their contracting practices, funding structure, and the GPO safe harbor provision in more depth. We also interviewed two regional GPOs—Greater New York Hospital Association Services, Inc., and APS Healthcare—about how they work with the larger, national GPOs.
We interviewed the purchasing departments of five hospitals and hospital systems—the Dana Farber Cancer Institute, Mt. Sinai Medical Center, the University of Pittsburgh Medical Center, BJC Healthcare, and Intermountain Healthcare—about how the hospitals purchase medical products, the extent of the hospitals’ use of GPOs, additional services and total value they receive from their GPOs, and potential impacts on hospitals if the GPO safe harbor provision were repealed. We selected hospitals based on variation in the number of hospital beds, the extent to which the hospital had an ownership interest in a GPO, and which GPOs they used. We interviewed eight vendors of medical products—3M; ICU Medical; Alcon; Teva Pharmaceutical Industries, Ltd.; Hospira; Fresenius Kabi USA; GlaxoSmithKline; and AADCO Medical, Inc.—about GPO contracting practices, funding structure, and the GPO safe harbor provision. We selected vendors based on variation in the types of products manufactured. We also interviewed trade associations representing GPOs and vendors of medical products—the Healthcare Supply Chain Association, the Health Industry Distributors Association, the Advanced Medical Technology Association, the Medical Device Manufacturers Association, and the Generic Pharmaceutical Association—about their members’ relationships with GPOs, GPO contracting practices and funding structure, and the GPO safe harbor provision. In addition, to determine the reported effects of the GPO funding structure, we interviewed thirteen experts in economics, the healthcare market, and purchasing cooperatives, whom we identified through our search of the relevant literature on GPOs, healthcare markets, purchasing cooperatives, and economics: David Balto, Roger Blair, Lawton Burns, Einer Elhauge, Adam Fein, Herbert Hovenkamp, Michael Lindsay, Diana Moss, Eugene Schneller, LeRoy Schwartz, Prakash Sethi, Hal Singer, and Dave Swanson. Finally, we interviewed Federal Trade Commission (FTC), Department of Justice (DOJ), and Department of Health and Human Services (HHS) officials about their oversight of GPOs, including complaints they had received about GPOs and any investigations they had opened or actions they had taken against GPOs since our 2012 report. To identify literature on the effect of the GPO funding structure, we conducted a literature review. To conduct this review, we searched 28 bibliographic databases, such as ProQuest and MEDLINE, for articles published between January 2004 and June 2014. In our search, we used a combination of search terms such as “group purchasing” and “health care.” We considered an article relevant to our review if it discussed the potential effects of the GPO funding structure. Using the articles we identified as relevant to our review, we then determined which of these articles included the results of empirical analyses. To confirm that our search captured all of the relevant literature that met our criteria, we reviewed the bibliographies of the relevant articles to identify other potentially relevant studies. We did not assess the methodologies of the studies we identified or review the reliability of the data used in these studies. In addition, we reviewed documentary evidence of the factors that GPOs consider when contracting for products and services, including scorecards, spreadsheets, and other templates provided by the GPOs. We reviewed published articles in economic and law journals, as well as analyses of the healthcare market. We also reviewed laws, legislative history, regulations, and guidance related to the GPO safe harbor.
In addition to the contact named above, Kristi Peterson, Assistant Director; Kelly DeMots; Leia Dickerson; Sandra George; and Yesook Merrill made key contributions to this report.
GPOs are purchasing intermediaries that negotiate contracts for medical products and services. GPOs contract with vendors and receive a fee from them when providers purchase from the vendor. These fees are a source of operating revenue for GPOs, and they are allowed to collect them if they meet the requirements of a safe harbor to the “anti-kickback” provision of the Social Security Act—known as the Anti-Kickback statute—which would otherwise prohibit such fees. You raised questions about GPOs' contracting practices and about the impact of the GPO funding structure. This report examines (1) GPO contracting practices and the reported effects of these practices and (2) how GPOs are funded and the reported effects of this funding structure. To do this work, GAO sent a questionnaire to representatives of the 5 largest national GPOs about their contracting practices and sources of revenue; reviewed the literature on the effects of the GPO funding structure; reviewed laws, regulations, and guidance on the GPO safe harbor; and interviewed representatives from HHS, FTC, the Department of Justice (DOJ), vendors, hospitals, trade associations, and economic and health care experts. According to representatives from the 5 large group purchasing organizations (GPO) in GAO's review, GPO contracting generally involves three phases: (1) issue requests for proposals or invitations for vendors to competitively bid for a contract, (2) review proposals, and (3) negotiate and award contracts. GPOs reported negotiating and awarding different types of contracts to vendors in different situations. All 5 GPOs reported that the majority of the contracts they negotiate are either dual-source or multi-source, meaning that the majority of the products sold through their contracts have more than one vendor available on the GPOs' contracts. In addition, all GPOs reported that they did not bundle unrelated products and awarded mostly contracts with 3-year terms in 2012. The views of experts and others GAO interviewed on the effects of GPO contracting practices varied on issues such as whether the practices affect product innovation. In addition, while officials from the Federal Trade Commission (FTC) stated that they continue to receive and review complaints each year about GPO contracting practices, in the last 10 years, the FTC has not initiated any enforcement actions directed at GPO conduct. The 5 GPOs in GAO's review reported being predominately funded by administrative fees collected from vendors, which were almost always based on a percentage of the purchase price of products obtained through GPO contracts. The 5 GPOs reported that these fees totaled about $2.3 billion in 2012, and nearly 70 percent of these fees were passed on to GPO customers or owners. The literature and the views of experts varied widely on the effects of this funding structure. Some suggested it creates misaligned incentives for GPOs to negotiate higher prices for medical products in order to increase the amount of vendor fees that they receive. Others suggested that competition between GPOs incentivizes them to negotiate the lowest possible prices and mitigates these concerns. There is little empirical evidence available to either support or refute these concerns.
However, to the extent that the vendor fee-based funding structure affects prices for medical products and services, Medicare payment rates may be affected over time through the annual update to hospital payment rates, which relies, in part, on information that hospitals report to the Centers for Medicare & Medicaid Services (CMS)—an agency in the Department of Health and Human Services (HHS). Moreover, Medicare payments also could be affected if hospitals do not account for revenue they receive from GPOs, which they are required to report as a reduction in costs on their cost reports. However, the extent to which hospitals are reporting this revenue is not known because this has not been reviewed by HHS since 2005, and CMS officials stated that the agency has not specifically identified this as information that should be routinely audited. Repealing the safe harbor—which allows administrative fees—could eliminate the potential effects of the GPO funding structure on Medicare payment rates, but experts and others stated that this could be disruptive to the health care supply chain at least in the near term. Over the longer term, GPOs and hospital systems are likely to adapt to the new market environment. While a repeal of the safe harbor provision would require a clearer understanding of the impact of the GPO funding structure, hospitals' potential underreporting of administrative fee revenue presents an immediate risk that can be addressed within the current GPO funding structure. GAO recommends that the Secretary of HHS determine whether hospitals are appropriately reporting administrative fee revenues on their Medicare cost reports and take steps to address any under-reporting that may be found. HHS agreed with the recommendation. GAO also incorporated technical comments from HHS, FTC, DOJ, and GPOs.
The UI program was established by Title III of the Social Security Act in 1935 and is a key component in ensuring the financial security of America’s workforce. The program, which is administered by the states with oversight from Labor’s Employment and Training Administration (ETA), provides temporary cash benefits to workers who lose their jobs through no fault of their own. Today, UI coverage is nearly universal, extending to almost all wage and salaried workers. To help claimants become reemployed, employment and training assistance is provided through a number of federal programs, including the Wagner-Peyser Employment Service, WIA Adult, WIA Dislocated Worker, and TAA programs. The UI program is funded by federal and state taxes levied on employers. The states collect the portion of the tax needed to pay UI benefits, while the federal tax finances state and federal administrative costs and other related federal costs. Labor holds these funds in trust on behalf of the states in the Unemployment Trust Fund. In fiscal year 2004, Congress authorized about $2.6 billion for states to administer their programs. Labor is responsible for overseeing the UI program to ensure that the states operate effective and efficient programs. Labor is also responsible for monitoring state operations and procedures, providing technical assistance and training, as well as analyzing UI program data to diagnose potential problems. Although Labor provides oversight and guidance to ensure that each state operates its program consistent with federal guidelines, the federal-state structure of UI places primary responsibility for administering the program on the states. The states have wide latitude to administer their UI programs in a manner that best suits their needs within these guidelines. States establish initial eligibility requirements to determine which unemployed workers are qualified to start collecting UI benefits. These requirements seek to ensure that an unemployed worker has had sufficient employment experience to qualify for UI benefits (known as the monetary eligibility requirements), and to determine whether the worker lost the job through no fault of his or her own (the nonmonetary eligibility requirements). State claims representatives are responsible for determining each claimant’s initial eligibility for UI benefits by gathering and (when possible) verifying important information, such as identity, employment history, why the claimant is no longer working, and other sources of income the claimant may have. Once the claim has been submitted for processing, the state sends forms to the claimant’s employer(s) requesting that they verify the claimant’s wages and the reason the claimant is no longer working. If the individual’s claim for UI is approved, the state then determines the amount of UI benefits, depending on the individual’s earnings during the period upon which the claim is based and other factors. In general, most states are expected to provide the first benefits to the claimant within 21 days of the date the state determined that the claimant was entitled to benefits. In order to remain eligible for benefits on a continuing basis, claimants must also demonstrate that they are able to work and available for work and are still unemployed. Claimants must submit this certification of continuing eligibility—by mail, telephone, or Internet, depending on the state—throughout the benefit period. This certification is usually done weekly or biweekly.
States may continue to monitor claimant eligibility through an eligibility review program, in which certain claimants are periodically contacted to review their eligibility for benefits, work search activities, and reemployment needs. Typically, the maximum duration of benefits is 26 weeks. In November 1993, Congress enacted legislation amending the Social Security Act to require that each state establish a Worker Profiling and Reemployment Services (WPRS) system and implement a process typically referred to as claimant profiling. The claimant profiling process uses a statistical model or characteristics screen to identify claimants who are likely to exhaust their UI benefits before finding work. Claimants identified through this process are then referred to reemployment services while they are still early in their claim. For profiled claimants, participation in designated reemployment services becomes an additional requirement for continuing eligibility for UI benefits. To assist states in implementing WPRS, Labor developed a prototype model for determining the probability that claimants will exhaust their benefits based on a set of five claimant characteristics: education, job tenure, industry, occupation, and the local unemployment rate. While some states have included only these five variables in their profiling models, others have used this prototype as a benchmark and have included additional variables, such as the claimant’s pre-unemployment earnings, weekly benefit amount, UI wage replacement rate, potential duration of UI benefits, delay in filing, and the ratio of quarterly earnings to earnings in the base year. (A simplified illustration of this type of scoring model appears below.) Reemployment services for UI claimants are usually delivered by a range of federally funded employment and training programs, often through consolidated service delivery structures called one-stop centers. When it was passed in 1998, WIA began requiring that about 17 federal employment and training programs, including UI, provide services through the one-stop system. WIA allows local areas considerable flexibility in how these programs provide services through the system, so the degree of connection throughout the one-stop system between UI and other workforce programs can vary widely by state and local area. Among the many federal workforce programs that may provide reemployment services to UI claimants, four programs funded by Labor are most likely to serve UI claimants: Employment Service, the WIA Adult program, the WIA Dislocated Worker program, and TAA. All four of these are required to be part of the one-stop system, and each has its own performance reporting requirements. Employment Service. The Employment Service was created in 1933 by the Wagner-Peyser Act, making labor exchange services—that link job seekers with job opportunities—universally available to employers and job seekers alike without charges or conditions. Historically, many states colocated local Employment Service and UI offices so that when UI claimants applied for benefits at Employment Service offices, they would be exposed to employment services. Today, states’ labor exchanges typically involve online databases where job seekers can look for work and apply for jobs, and where employers can post jobs and recruit employees. In addition, Employment Service offers a range of services to job seekers, including job search assistance, job referral, placement assistance, assessment, counseling, and testing.
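To illustrate the WPRS claimant-profiling approach described above, the following is a minimal sketch in Python. It is hypothetical: the weights, threshold, and claimant records are invented for illustration, it omits some prototype characteristics (such as occupation), and actual state models are statistical models estimated from each state's historical claims data.

```python
import math

# Hypothetical illustration of WPRS-style claimant profiling: score each
# claimant's probability of exhausting UI benefits before finding work,
# then refer the highest-scoring claimants to reemployment services.
# The weights below are invented; states estimate their own.

WEIGHTS = {
    "years_education": -0.15,        # more education -> lower exhaustion risk
    "job_tenure_years": 0.05,        # long tenure at one job -> harder transition
    "declining_industry": 0.80,      # 1 if the prior industry is shedding jobs
    "local_unemployment_rate": 0.20, # in percentage points
}
INTERCEPT = -1.0
REFERRAL_THRESHOLD = 0.5

def exhaustion_probability(claimant: dict) -> float:
    """Logistic model of the probability that a claimant exhausts benefits."""
    score = INTERCEPT + sum(WEIGHTS[k] * claimant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

claimants = [
    {"id": "A", "years_education": 16, "job_tenure_years": 3,
     "declining_industry": 0, "local_unemployment_rate": 4.5},
    {"id": "B", "years_education": 10, "job_tenure_years": 18,
     "declining_industry": 1, "local_unemployment_rate": 7.2},
]

# Rank claimants by predicted risk; those above the threshold would be
# referred to mandatory reemployment services early in their claim.
for c in sorted(claimants, key=exhaustion_probability, reverse=True):
    p = exhaustion_probability(c)
    flag = "  -> refer to services" if p >= REFERRAL_THRESHOLD else ""
    print(f"Claimant {c['id']}: exhaustion probability {p:.2f}{flag}")
```

Whatever the estimation method, the basic design is the same: score each new claimant, rank claimants by predicted probability of exhausting benefits, and refer the highest-risk claimants to mandatory reemployment services early in the claim.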
Beyond these job seeker services, Employment Service also offers a number of services to employers, including taking job orders, recruiting, screening and referring job seekers, assisting with job restructuring, and helping employers manage layoffs. WIA Adult and WIA Dislocated Worker Programs. When WIA was enacted in 1998, it replaced the Job Training Partnership Act (JTPA) programs for economically disadvantaged adults and youth and for dislocated workers with three new programs—WIA Adult, Dislocated Worker, and Youth—that provide a broader range of services to the general public, no longer using income to determine eligibility for all program services. WIA programs provide for three tiers, or levels, of service for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job searches and labor market information. These activities may be self-service or require some staff assistance. Intensive services include such activities as comprehensive assessment and case management—activities that require greater staff involvement. Training services include such activities as occupational skills or on-the-job training. Labor’s guidance provides for monitoring and tracking of performance for the adult and dislocated worker programs to begin when job seekers receive core services that require significant staff assistance. WIA currently excludes job seekers who receive core services that are self-service and informational in nature from the performance measures. Trade Adjustment Assistance. To assist workers who are laid off as a result of international trade, the Trade Expansion Act of 1962 created the Trade Adjustment Assistance program. Historically, the primary benefits available through the program have been extended income support and training. Participants are generally entitled to income support, but the amount of funds available for training is limited by statute. Congress has amended the TAA program a number of times since its inception. Amendments to the TAA program in the TAA Reform Act of 2002 extended income support to 78 weeks after exhausting UI benefits (plus 26 more weeks if participating in and completing remedial training) and added new health coverage assistance and wage insurance benefits for older workers. To promote reemployment through the one-stop system, Congress appropriated $35 million a year beginning in 2001 for Reemployment Services Grants specifically to provide reemployment services for UI claimants. Each year Labor has provided a minimum of $215,000 to each state, with the remainder of the $35 million distributed according to the share of each state’s first payments to UI claimants in the previous fiscal year. These funds are authorized under Wagner-Peyser, and services are generally delivered by state Employment Service staff. Labor issued guidance to the states to use the funds to enhance the quality and quantity of services that UI claimants receive within the one-stop system, encouraging states to use the funds to provide direct services to UI claimants. Nearly all states accept most initial UI claims remotely by telephone, Internet, or both. Even though claimants filing remotely no longer have face-to-face contact with UI staff at the time the claim is filed, all states told us they have found ways to provide information on eligibility requirements and reemployment services to individuals filing initial claims, often beginning at the time the claim is filed.
Most states told us that the shift to remote claims did not diminish their ability to provide information on reemployment services to claimants and, in some cases, had improved customer service and helped ensure that claimants received consistent information. Forty-four states accept initial claims for unemployment insurance by telephone or the Internet, based on our telephone interviews with state officials. Of these states, 29 use both remote filing methods, while 10 accept claims only over the telephone and 5 take them only by Internet (see fig. 1). In most states with telephone claims, claimants speak with customer service representatives, although in 6 states claimants may use an automated voice response system to complete their claim. In some states with such automated systems, it is possible for a claimant to file an initial claim without speaking to anyone. However, if problems occur during the process, callers can be transferred to remote claims centers where a service representative works with them to complete the claim. Of the 6 states that did not accept remote claims, 3 said they plan to begin accepting initial claims by Internet or telephone in the future. The remaining 3 states still require claimants to file for unemployment insurance in person. For example, Georgia officials told us that they do not currently allow claimants to file initial UI claims remotely, preferring instead to have claimants file in person at workforce centers, where they typically file their claims using the state’s private computer network. (See app. II for more information on which states take initial UI claims remotely by telephone or Internet.) In addition to accepting initial claims by telephone or Internet, some states also reported using other remote filing methods. Officials in several states told us that they accept claims submitted directly by employers, but the role of employer-filed claims differs from state to state. For example, Michigan officials told us they had established an employer-filed claim process in which employers with more than 1,000 layoffs in 3 consecutive years must file employee claims electronically. With an employer-filed claim, the claimant is not involved in filing the initial claim but still must certify and file for continuing claims. State officials told us that these claims simplify the filing process in cases of large layoffs and help workers receive their UI benefit checks more quickly. They said their goal was to have 20 to 25 percent of all initial claims filed through this method. In general, however, employer-filed claims are used for mass layoffs or seasonal shutdowns. Most states that accept remote claims also allow claimants to file their initial claims at a one-stop center, either by using available telephones—sometimes with a direct telephone link to a call center—or by using on-site computer resources to access the Internet. In Washington, for example, individuals who come to the state’s one-stop centers are directed to file by Internet or by phone at an on-site kiosk. These kiosks, which the state has placed in most of its one-stop centers, provide a direct connection to a call center and display UI program information to help claimants understand the process. In addition, at least 8 states told us that they have staff at one-stop centers who can take claims or assist claimants in filing them.
In all states that accept claims remotely, officials told us they have found ways to provide information during the claims filing process on requirements that claimants must meet to maintain their eligibility for unemployment benefits. At the same time, they also told us they provide information on how to access reemployment services to help claimants get back to work. Among the 39 states that allow filing by telephone, the methods they use to notify claimants of their work search requirements and available services vary. For eligibility requirements, most of the 39 states explain program rules over the telephone, most often during the initial call. For example, UI call center representatives in Washington give initial claimants information on their responsibility to search for work, the penalty for failure to do so, the location of the nearest one-stop center, and the types of services claimants could receive at the one-stop. In addition, a few states direct telephone claimants to a Web page where they can find information on work search requirements and how to certify and file for continuing claims. For reemployment services, all 39 states that accept initial claims by telephone reported that all or most of their telephone filers are provided information about these services, and approximately two-thirds of the states provide some of this information to their claimants during the initial call. Telephone filers in over 20 states are also directed to the one-stop system, through information that is either provided during the telephone call or sent to claimants later by mail. In Maryland, for example, officials told us that they inform claimants about reemployment services during the initial call and provide directions to the one-stop centers or Employment Service offices. Additionally, a few states have one-stop staff follow up with claimants to inform them of available services. The 34 states that allow remote filing by Internet also have a variety of methods for notifying claimants about their work search requirements and available services. For work search requirements, more than three-fourths of these states reported that such information was available on a Web page that claimants could access while filing their claims. In over 20 of those states, individuals were required to go through an Internet page on UI program rules in order to complete their claims. Some states provided this information through a link on the Web page but did not require claimants to access that page at the time they filed their initial claims. For reemployment services, all 34 states reported that all or most of their Internet filers are provided information about these services. Over three-fourths of those states provide some information to their claimants during the initial online filing process, although claimants may or may not be required to view this information to complete the claim. Almost half of the states that take initial claims by Internet told us they require claimants to access a document with reemployment services information before their claim is complete. Additionally, many told us that a link to this information is provided but claimants are not required to access the document in order to complete the claim. In some states, call center and one-stop staff may also contact claimants with information on how they may obtain services.
For example, Virginia officials said the state runs a daily report of Internet claims filed, and call center representatives then call or e-mail a majority of those claimants to tell them about job seeker services and work search requirements. In addition to the information that remote filers receive over the phone or on a Web page, most of the 44 states that accept remote claims also mail claimants information on their responsibilities and available services. In Maryland, for example, officials told us that, as part of the claims filing process, claims center staff inform telephone filers of the work search requirement and the implications of not meeting it as well as the location of the Employment Service offices. After the claim is filed, all claimants are sent mailings that address UI and work search requirements and that provide directions to the one-stops and Employment Service offices. In Washington, everyone who files a claim receives a copy of the state’s unemployment claims kit, which contains information on claimant responsibilities as well as on reemployment services, online resources, and one-stop and employment services center locations. Officials in 32 of the 44 states told us that in their opinion the shift to remote claims did not diminish their ability to provide information on reemployment services to claimants. Officials in at least 7 of the states that have established remote filing methods said they had faced challenges in maintaining the connections between UI claimants and the reemployment services available to them. For example, some states said that staff providing reemployment services had less initial contact with job seekers, who may wait several weeks before seeking out more information about services available to them. However, officials in almost three-quarters of the 44 states told us they thought the shift to remote claims either had no negative impact or had improved their ability to deliver reemployment services to UI claimants. Officials in some states reported that providing reemployment services in a remote claims environment proved more difficult at first. However, once they had completed the transition, they said they perceived no negative impact on the linkages between UI and reemployment services. Officials generally cited benefits that included improved customer service, more consistent information for claimants, and the ability of states to focus their resources on providing reemployment services to claimants. Several officials told us that they believed one benefit from the shift to remote claims was improved customer service. Claimants no longer needed to drive sometimes-great distances or wait for hours just to file a claim. In addition, some states reported that it was easier to get information about services to claimants. Additionally, some officials told us that they thought the use of remote claims had helped ensure that claimants received consistent information. Several states, for example, reported that using scripts for telephone customer service representatives or screens of information for Internet filers helps ensure that all claimants are told the same thing. Some states said the transition to remote claims had enabled them to shift their focus from filing claims to providing services, and had reduced claims processing time.
For example, officials in one state told us that some positive effects of using the Internet were that claims were processed more quickly, documentation was easily retrieved, and papers were not moving between offices. Across states, UI claimants have access to a variety of reemployment services, and states make use of UI program requirements to connect claimants with available services at various points in their claim. All federally approved state UI programs must include able-to-work and available-for-work requirements that claimants must meet in order to receive benefits. In many states, these requirements also serve to link claimants to reemployment opportunities and services. In addition, states provide targeted reemployment services to particular groups of UI claimants. The federal requirement of claimant profiling is typically the primary mechanism for targeting reemployment services to claimants. UI claimants have access to the range of reemployment services available to all job seekers through the one-stop system. Officials in all states, for example, told us that claimants can access job listings and information on their state’s labor market trends using the Internet, and many said that claimants have access to online labor exchange, or job matching, services as well as other self-assisted services such as resume writing assistance, career guidance, and self-assessment services. Officials in all states also told us that one-stop centers make computers available on-site, and most said that claimants have access to self-help software, such as aptitude tests, computer tutorials, or job search guidance, at the centers. Claimants also have access to a variety of staff-assisted reemployment services through the one-stop system. Officials most often mentioned that claimants were likely to be offered job search assistance; resume assistance; job matching, referral, and placement services; referral to WIA or other partners; initial or general needs assessment; and interview assistance. Some states have also undertaken special initiatives to expand the types of reemployment services available to claimants. Maryland, for example, responded to growth in white-collar unemployment in the early 1990s with the establishment of the Professional Outplacement Assistance Center. This program provides outplacement services for executive, professional, technical, and managerial workers who are unemployed and, if capacity allows, those who are underemployed. The program begins with an interactive three-day orientation targeted to the needs of professionals and then offers participants networking opportunities through occupational affinity groups that bring together job seekers from similar occupations. Former participants also forward information on job opportunities to the program and offer assistance to current participants—a concept the staff term Pay-It-Forward. UI program requirements often provide the context for states’ efforts to link claimants to reemployment services. In satisfying the requirement that claimants be able and available for work, officials in 44 states told us that claimants are required to register for work with the state’s labor exchange. In addition, officials in all but one state told us that claimants must meet a work search requirement in order to remain eligible for benefits. The work search requirement varies across states but is typically defined in terms of the number of contacts claimants are required to make with employers.
In about half of the states with a work search requirement, officials told us claimants subject to this requirement must make a specified or minimum number of job contacts, ranging from one to five contacts per week. In the rest, the required number of contacts is determined by what is considered reasonable for a particular area or occupation, or the requirement is stated in more general terms. Claimants document that they are meeting their state’s work search requirement in a number of ways, most commonly by keeping a log of work search activities that may be subject to review or by certifying they are able and available to work through the process of filing for a continuing claim. Washington, for example, has recently revised its work search requirement to be more specific, requiring that, each week, claimants make three job contacts, participate in three in-person reemployment services at a one-stop center, or complete some combination of the two (a simple illustration appears below). Claimants keep a log of these contacts and activities, which is subject to random review. In Michigan, as in many states, when claimants call in to the state’s automated telephone system each week to file for their continuing claims, they must also certify that they are available for and seeking full-time work. In all states with a work search requirement, officials told us that the primary consequence faced by claimants who fail to comply is that they could be denied benefits. However, the length of time for which benefits are denied, and the extent to which claimants receive a warning prior to being denied benefits, varies across states. These work registration and work search requirements often serve to link claimants to reemployment services. The process of registering for work with the state’s labor exchange, for example, may bring claimants into an Employment Service office or one-stop center where reemployment services are delivered. Officials in nearly two-thirds of the 44 states where claimants are required to register for work told us that coming into an Employment Service office or one-stop center is either a required part of the process or one of the options claimants have for completing their registration. Officials in close to a third of the states with this requirement told us claimants are automatically registered with the labor exchange when they file their initial UI claim. In Michigan, for example, most claimants file their initial claim remotely and may begin the work registration process remotely as well by placing their resume on the state’s public online labor exchange. They must come into a one-stop center, however, to have their resumes validated by one-stop staff in order to complete the work registration process. In Washington, on the other hand, claimants who are required to look for work are automatically registered for work at the same time they file an initial telephone or Internet claim. Under this system, claimant information is uploaded into the state’s workforce development management information system and becomes available to one-stop center staff. Some states also use their processes for monitoring compliance with the work search requirement to direct claimants to reemployment services. Officials in 39 of the 49 states that require claimants to actively seek employment told us that telephone or in-person interviews with claimants may be used to monitor compliance with this requirement.
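Washington's revised combined requirement described above lends itself to a simple illustration. The following is a minimal sketch in Python; the field names and weekly log format are invented for illustration (the actual process involves a claimant-kept log subject to random review).

```python
# Hypothetical check of a Washington-style weekly work search requirement:
# three employer contacts, three in-person reemployment service activities
# at a one-stop center, or any combination totaling three.

def meets_weekly_work_search(job_contacts: int, service_activities: int,
                             required_total: int = 3) -> bool:
    """Return True if a week's logged contacts and activities satisfy
    the combined requirement."""
    return job_contacts + service_activities >= required_total

# One claimant's logged weeks: (employer contacts, one-stop activities).
weekly_log = [(3, 0), (1, 2), (1, 1), (0, 3)]

for week, (contacts, activities) in enumerate(weekly_log, start=1):
    ok = meets_weekly_work_search(contacts, activities)
    status = ("meets requirement" if ok
              else "fails requirement (benefits may be denied)")
    print(f"Week {week}: {contacts} contacts + {activities} activities -> {status}")
```

In this hypothetical log, week 3 (one contact plus one activity) falls short of the three-activity total, which is the kind of shortfall a log review would flag.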
In over two-thirds of these states, officials told us that some information on job search strategies or reemployment services is provided during the interview. The level of information varies from suggestions offered on a case-by-case basis to a discussion of strategies and services that is a standard part of the interview. In Georgia, for example, the state's eligibility review program is used to determine whether a claimant faces particular problems in returning to work and whether a claimant is making use of available reemployment services, in addition to determining eligibility and compliance with state work search rules. States also engage some claimants in reemployment services directly through programs that identify certain groups for more targeted assistance. States primarily target reemployment services to claimants identified, through federally required claimant-profiling systems, as most likely to exhaust their UI benefits before finding work. While claimants identified and referred to services through profiling can access the services available to all job seekers through the one-stop system, participation in the services they are referred to is mandatory for profiled claimants. Specifically, state officials most often identified orientation and assessment as services profiled claimants were required to receive. In addition, many officials told us that the services profiled claimants received depended on their individual needs following an assessment, the development of an individual plan, or the guidance of staff at a one-stop center. While failure to report to required reemployment services can result in benefits being denied, states vary in the conditions that prompt denying benefits. Maryland, for example, targets reemployment services to profiled claimants through its Early Intervention program. This program, which began in 1994, offers an interactive, 2-day, 10-hour workshop addressing self-assessment, job search resources, resume writing and interviewing skills, and other community resources available to job seekers. Profiled claimants selected for the workshop who fail to attend are given one opportunity to reschedule; after that, their failure to participate is reported to the UI program and their benefits may be suspended. When claimants complete the workshop, they are registered with the Maryland Job Service, they receive an individual employment plan, and the workshop facilitator may refer them to additional services. Officials told us that although they currently do not have data to show the impact of this program, they have received very positive feedback about the quality and effectiveness of the workshops. From our site visits we also learned that some states have developed additional methods to target reemployment services to particular groups of UI claimants. For example, one-stop staff in Washington can identify various subgroups of claimants using a tracking device called the Claimant Progress Tool. Officials told us that one-stop staff typically use this tool to identify claimants who are about 100 days into their claim and then contact them for targeted job search assistance and job referrals. This process was developed to help the state achieve its goal of reducing the portion of their UI benefits that unemployed workers claim. Georgia's state-funded Claimant Assistance Program identifies claimants who are seen to be ready for employment and requires them to participate in the same services required of profiled claimants.
This program is designed to help the state achieve its goal of generating savings for the UI Trust Fund. Claimants meeting this program's eligibility criteria also have the option of participating in the Georgia Works program, a recent state initiative to promote on-the-job training opportunities for UI claimants. Through Georgia Works, claimants receive 20 hours of on-the-job training weekly for 8 weeks while continuing to receive their UI benefits. States often make use of Labor's Reemployment Services Grants—available since 2001 for direct services to UI claimants—to fund these services. Officials in the majority of the states we interviewed told us their states have been using the Reemployment Services Grant funds to hire staff to provide reemployment services. For example, Maryland state officials said they use their funds to hire staff for the Early Intervention program, which has enabled them to run more workshops in areas that need them and to make further improvements in the program. Some states have also used these grants to direct reemployment services to claimants beyond those who have been profiled and to support other enhancements in the provision of reemployment services to claimants. For example, Washington state officials told us they used funds from these grants to support the development of the Claimant Progress Tool. Despite states' efforts to design systems that link UI claimants to reemployment services, little data are available to gauge the extent to which claimants are receiving these services or the outcomes they achieve. While states must meet a number of federal reporting requirements for their UI programs and for their federally funded employment and training programs, none of these reports provides a complete picture of the services received or the outcomes obtained by all UI claimants. Furthermore, we found that few states currently go beyond the federal reporting requirements to monitor the extent to which claimants are receiving services from the range of federally funded programs that are designed to assist them, and even fewer monitor outcomes for these claimants, largely because of limited information systems capabilities. Labor has some initiatives that may begin to shed light on claimant services and outcomes, but some limitations remain. UI claimants may access reemployment assistance from a number of federally funded programs, most often the Wagner-Peyser Employment Service, WIA Dislocated Worker or WIA Adult, and Trade Adjustment Assistance (if they are dislocated because of trade). To monitor the performance of these programs, Labor requires states to meet a number of reporting requirements, but the reports are submitted on a program-by-program basis. None of the reports provides a complete picture of the services received or the outcomes obtained by all UI claimants. UI reporting requirements. States must track and report annually on several performance measures considered key indicators of UI program performance—a system named UI Performs—but as currently configured, the system does not contain any measures related to services or outcomes for claimants. Instead, the measures focus exclusively on benefit and tax accuracy, quality, and timeliness. States also must report monthly on their UI claims and payment activities through form ETA 5159. These reports provide summary information that can be used to calculate average benefit duration and exhaustion rates at an aggregate level by state.
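To illustrate the kinds of calculations these monthly aggregates support, one common formulation (shown here for illustration, not necessarily Labor's exact specification) uses 12-month statewide totals:

\[
\text{average benefit duration} = \frac{\text{weeks compensated}}{\text{first payments}}, \qquad
\text{exhaustion rate} = \frac{\text{final payments}}{\text{first payments}}
\]

Under this formulation, a state reporting 1.3 million weeks compensated, 85,000 first payments, and 32,000 final payments over 12 months (illustrative figures) would show an average duration of about 15.3 weeks and an exhaustion rate of about 38 percent. Because the inputs are statewide totals, neither measure can be broken out by whether claimants received reemployment services.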
Such aggregate data are useful in following trends over time, but they do not distinguish between claimants who received services and those who did not. In addition, states must also report to Labor on their claimant profiling process—termed Worker Profiling and Reemployment Services—but information in these reports represents only a portion of all UI claimants the state has served. The two profiling reports—ETA 9048 and 9049—require states to provide summary information on the number of claimants targeted for services through the profiling process and on the reemployment services and outcomes for this group of claimants. While the reports contain information on claimant services and outcomes, the data represent only the portion of claimants who were identified through profiling as likely to exhaust their benefits and who were also referred to services. This group can vary from place to place and from month to month depending on available resources but may be a small proportion of all of the state's claimants. Wagner-Peyser Employment Service reporting requirements. States must provide quarterly reports for the Employment Service program, but these reports do not provide a complete picture of all claimants receiving reemployment services. The reports consist of summary information on the numbers of Employment Service participants who received specified services or who obtained certain outcomes. The reports track service and outcome data by several categories, including age group, gender, and whether or not the participant was a UI claimant. However, the reports contain information on only those individuals who are registered with the Employment Service, and while all who receive services funded by Wagner-Peyser must be registered with the Employment Service, not all UI claimants receive Wagner-Peyser-funded services. WIA and TAA reporting requirements. WIA and TAA reporting requirements are similarly limited and do not provide a complete picture of claimant services and outcomes. WIA tracks several performance measures directly related to outcomes for Adults and Dislocated Workers, including job placement, job retention, and wage gain or wage replacement. Labor requires states to report their performance on these measures in both quarterly and annual reports. In addition, once each year states submit a file to Labor, the WIA Standardized Record Data (WIASRD) file, containing a complete record of demographic, services, and outcome information on each WIA registrant who has exited the program. While these records contain information on whether or not the WIA registrant is also a UI claimant, they do not contain information for those claimants who are not registered under WIA. We and others have noted that many individuals served under WIA—particularly those who receive only self-directed services—are not registered or tracked for performance and are, therefore, not reflected in any of the WIA data. Similarly, for the TAA program Labor requires states to submit participant data files on all who exit the program each quarter, but the reports are limited to those claimants served by TAA. Table 1 summarizes these reporting requirements and their limitations for measuring overall claimant services and outcomes. Having data that show the degree to which reemployment services are reaching UI claimants is key to good program management and provides a first step toward understanding the impact of these programs.
However, knowing how many claimants may be accessing reemployment services and the types of outcomes they may be achieving has proven difficult for state and local officials. Only 14 states reported that they go beyond the federal reporting requirements to routinely track the extent to which claimants receive services from the broad array of federally funded programs that are designed to assist them. Of the states that reported that they did not routinely track claimant services, 4 told us it would not be possible to do so. Overall, 37 states told us doing so was somewhat or very difficult, while 6 states said it was not at all difficult (see fig. 2). States most often told us that tracking claimant services across multiple programs was made difficult by the fact that reemployment services and UI claimant data were maintained in separate data systems—systems that were either incompatible or difficult to link. (See fig. 3.) While relatively few states routinely track claimants' services, even fewer track outcomes. Only 6 states told us that they go beyond the federal reporting requirements to routinely monitor any outcomes for the subset of UI claimants that receives reemployment services—outcomes such as entered employment rate, average benefit duration, and UI exhaustion rate. Eleven states told us it would not be possible to calculate any of these outcomes for such claimants. More states reported difficulty tracking the entered employment rate than the average benefit duration or UI exhaustion rate. (See fig. 4.) The issues states cited in tracking outcomes across programs for UI claimants were similar to those for tracking use of services. Most states (35) told us that tracking one or more outcome measures was made difficult by the fact that reemployment services and UI claimant data were maintained in different systems that were either incompatible or difficult to link. Four states said in written comments that our definition of claimants—that they received a first payment—contributed to the difficulty in performing the calculations. Labor has some initiatives that may begin to shed light on claimant services and outcomes, but the efforts still fall short of providing a nationwide understanding of services and outcomes for UI claimants. UI performance measures. Labor is modifying its UI performance measures, and some of the changes will begin to focus attention on claimant outcomes. Beginning in summer 2005, in addition to reporting on benefit timeliness and accuracy, states will be required to track a reemployment rate for their UI claimants—defined as the percentage of UI claimants who are reemployed within the quarter following their first UI payment. This change will improve the understanding of how many UI claimants are quickly reemployed nationwide, but it will not provide information on claimants who become reemployed after the first quarter. Further, it will not allow for an assessment of how many claimants access reemployment services, nor will it allow the outcomes claimants achieve to be attributed to services. Employment Service, WIA, and TAA reporting changes. Labor is also modifying its reporting requirements for the Employment Service, WIA, and TAA programs. With the transition beginning in July 2005, states' Employment Service, WIA, and TAA programs will be required to report on their performance using a new set of common measures—measures that use the same data definitions and data coding across all included programs.
The new measures, focused on job placement, employment retention, and earnings increase, will help eliminate some of the definitional difficulties states faced as they tried to measure performance across multiple programs. In addition, the new requirements will direct states to begin counting all job seekers who use the one-stop system, including those who receive only informational or self-service assistance. However, because the Unemployment Insurance program is not included in these measures, this change will not allow for a complete assessment of UI claimants' use of services. Future plans for reporting on performance for Labor's Employment and Training Administration (ETA) programs include the development of a system to consolidate reporting. This system—ETA's Management Information and Longitudinal Evaluation (EMILE) system—would consolidate performance reporting across a range of Labor programs, including WIA, Employment Service, and TAA. Current plans do not include incorporating UI reporting into EMILE. We recently reported that implementing a comprehensive reporting system across workforce programs could provide a better picture of the one-stop system, but we recommended that Labor consider greater ongoing consultation with key stakeholders, including states, to enhance its implementation efforts. Labor is currently conducting a feasibility study on implementation issues associated with EMILE, and, at present, it is unclear how soon such a system could be implemented. Administrative Data Research and Evaluation (ADARE). Because Labor lacked the capacity to evaluate services across the broad array of employment and training programs, it commissioned ADARE to begin to fill the gap. ADARE is an alliance of 9 state partners—Florida, Georgia, Maryland, Missouri, Texas, Illinois, Washington, California, and Ohio—that together cover 43 percent of the country's civilian workforce. ADARE provides third-party researchers with detailed, longitudinal administrative data from the 9 states on participants in several programs, including Employment Service, WIA, Temporary Assistance for Needy Families (TANF), and Perkins Vocational Education, as well as UI wage and benefit records and education records. ADARE efforts so far have focused largely on evaluating welfare-to-work programs and WIA. Currently under way is an effort to examine three facets of UI claimant behavior—repeat claims, benefit exhaustion, and reemployment profiles. However, planned expansions of the data collection have been slower to implement than originally anticipated, and some of the data used in ADARE, such as the WIA performance data, are limited. Having the capacity to link data across multiple programs within a state is a major step forward in understanding UI claimants' participation in a broad array of programs and in measuring some of their outcomes. But while the participating states represent a relatively large proportion of the workforce, they do not provide a nationwide perspective. In addition, until WIA's new reporting requirements go into effect, the WIA data will be limited to those claimants who are registered under WIA. Five-Year Evaluation. Labor has also begun a 5-year national study of the UI benefits program. The evaluation is intended to provide detailed information on the effectiveness of the UI program in light of its goals and underlying program design.
Researchers hope to enlist up to 25 states willing to share their data, and the study seeks to identify, in part, changes in the labor market, population, and economy relative to the UI program, as well as detailed characteristics of who does and does not receive UI benefits. As part of the study, researchers hope to learn more about the extent to which UI claimants in those states are receiving reemployment services and about the outcomes they are achieving, including how long claimants receive benefits. However, at this point, it is too soon to know how successful the researchers will be in obtaining information on claimants' use of the broad array of programs designed to serve them. And because the study is limited to states that are willing to participate, it, too, falls short of providing a nationwide perspective. States have increasingly shifted to requiring that most UI claimants file their claims remotely, and they have often designed their claims processes to help link claimants to the reemployment services they need. However, knowing how many claimants are actually accessing reemployment services has proven difficult for state and local officials. Most states lack this information, arguably critical for good program management, often because data reside in separate systems that cannot be easily linked. In the new environment created by WIA, where claimants may be served by a range of programs beyond Unemployment Insurance and the Employment Service, it becomes increasingly important to find new ways to link program data across a broader range of programs. Current reporting requirements are not enough to provide a complete picture. Labor has some initiatives under way to help fill this gap, but the issue of collecting complete information on those individuals served by the nation's workforce development system—mainly through the one-stops—needs to be viewed in a broader context, not program by program. The nine-state effort under ADARE to link administrative data on participants in a range of programs is a step in the right direction, but it does not include information on all services claimants receive. The common measures and the EMILE initiative are steps toward providing more comprehensive and complete information on those served by the one-stops, including unemployment insurance claimants who come in to the one-stops for services. However, the present EMILE proposal does not include a link to Unemployment Insurance administrative data, so it will not be able to provide information on all UI claimants, only those who receive services through a one-stop. As such, EMILE cannot be used as a source of information on benefit duration. Taken together, these efforts will not be able to provide all states with an understanding of services and outcomes for all UI claimants, an understanding that is critical for assessing the performance of the program or the potential need for future reforms. We recommend that, as Labor develops EMILE, the Secretary of Labor work with states to develop a plan for considering the feasibility of requiring states to collect more comprehensive information on UI claimants' use of reemployment services and the outcomes achieved by claimants, including the length of time claimants receive UI before they are reemployed. We provided a draft of this report to Labor officials for their review and comment.
Labor generally agreed with our findings but took issue with our recommendation that it work with states to consider the feasibility of collecting more comprehensive information on UI claimants' services and outcomes, saying that its current and planned data gathering and research efforts would provide adequate information to guide policy making. Labor noted that, in addition to the efforts acknowledged in our report, a new initiative will provide additional data on some UI claimants and their reemployment services in the future. Labor also said that, given the burden placed on states to collect and report data, it is important to show a clear benefit to the system from additional data collection. Labor requested that GAO provide additional guidance on how collection of the data is expected to improve services to UI claimants and hasten their reemployment. We continue to believe that comprehensive data on the extent to which UI claimants receive reemployment services, and on the outcomes claimants achieve, are important for program management in an environment where claimants may receive services from a number of different programs. While Labor's new initiatives, in combination with current reporting requirements, will provide valuable information on the reemployment activities of some UI claimants, this information is generally collected on a program-by-program basis or is focused on a single category of claimants. Consequently, these efforts will not allow for a comprehensive, nationwide understanding of claimants' participation in the broad range of reemployment services provided through federal programs, nor do they move states in the direction of having the data they need to better manage their systems. In recommending that Labor study the feasibility of a more comprehensive data collection effort, we acknowledge the challenges states face in collecting and tracking these data and understand that acquiring a comprehensive picture of UI claimants' participation in reemployment services will have a cost. However, having information on which UI claimants are and are not receiving services is an important step in the development of reemployment efforts that hasten workers' reemployment and minimize UI benefit costs. Labor also provided technical comments, which we have incorporated in our report as appropriate. A copy of Labor's comments is in appendix III. We will send copies of this report to relevant congressional committees, the Secretary of Labor, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-7215. Major contributors to this report are listed in appendix IV. We were asked to provide information on (1) the extent to which states have shifted to remote methods for filing initial claims and how they are making claimants aware of their responsibilities to look for work and the services available to assist them, (2) what states are doing to facilitate the reemployment of unemployment insurance (UI) claimants, and (3) what is known about the extent to which UI claimants receive reemployment services and about the outcomes of claimants who receive these services. To address these questions, we conducted telephone interviews with unemployment insurance and workforce development officials in all 50 states.
We then used a separate, brief e-mail instrument to gather more specific information on the strategies states use to collect data on unemployment insurance claimants who receive reemployment services. Additionally, we conducted site visits to 4 states—Georgia, Maryland, Michigan, and Washington—and interviewed state and local officials in these states. We reviewed data and documents from the U.S. Department of Labor (Labor) and other sources. We also interviewed officials from Labor, the National Association of State Workforce Agencies, and the UI Information Technology Support Center, as well as researchers from the University of Texas at Austin, the Upjohn Institute, and the Urban Institute. For this review, we defined reemployment services to mean all reemployment activities funded through Wagner-Peyser; Workforce Investment Act (WIA) Adult and Dislocated Worker services; any other training or job search assistance provided using federal funds, such as Trade Adjustment Assistance; and any state-funded reemployment or training services. We defined UI claimants as individuals who have filed an initial UI claim, been found eligible according to both monetary and nonmonetary criteria, and received a first payment of UI benefits. We provided a draft of this report to officials at the Department of Labor for their review and incorporated their comments where appropriate. We conducted our work from February 2004 through May 2005 in accordance with generally accepted government auditing standards. To collect broad information on unemployment insurance claimants' use of reemployment services, we conducted telephone interviews with officials in all 50 states from agencies that oversee the unemployment insurance and workforce development programs. We designed a structured computer-assisted telephone interview (CATI) instrument that consisted of closed- and open-ended questions on a range of topics, including the methods used in each state to file initial claims, both on site at one-stop centers, employment security offices, and unemployment insurance offices and away from those locations; work search requirements and available reemployment services and how states notify claimants of them; worker profiling; and states' data collection efforts related to remote filing, work search requirements, receipt of reemployment services, and performance outcomes the states may track. For a majority of the telephone interview questions, we asked state officials to consider the present status of a topic in their state. We asked them to consider either a particular program or fiscal year for only a few questions. Telephone interviews were conducted during October and November 2004. To better understand states' issues associated with tracking performance data, and using results from our CATI as a guide, we supplemented our telephone interviews with a brief data collection instrument that asked state officials for greater detail about what states tracked for UI claimants receiving reemployment services. We also asked them about the specific challenges they faced in tracking data on reemployment services and outcomes for all UI claimants. We completed this effort in March 2005. Officials from all 50 states provided responses about their states' data concerns. Because we surveyed officials from all 50 states, no sampling error is associated with our work.
However, nonsampling error can arise in any data collection effort and involves a range of issues that could affect data quality and introduce unwanted variability into the results. We took several steps to minimize nonsampling errors. For example, GAO survey specialists and staff with subject matter expertise collaboratively designed both instruments. Also, the draft telephone interview instrument was pretested with officials in 3 states to ensure that the questions were relevant, clear, and easy to comprehend and that states would have the capacity to readily respond to them. Similarly, the draft data collection instrument was pretested with officials from 2 states. During the telephone interviews, responses were read back to state officials to ensure the data were being accurately captured. To further minimize errors, the programs used to analyze data collected through both instruments were independently verified to ensure the accuracy of this work. We selected 4 states for site visits according to several criteria that gave us a range of state unemployment rates (as of March 2004), amounts of program year 2004 WIA Dislocated Worker funding, acceptance of initial UI claims by telephone or Internet, and whether the state had an employer tax-funded state training or job placement program. States selected for site visits are shown in table 2. We also sought recommendations from Labor officials and other experts and considered geographic diversity in our state selections. In each state, we interviewed officials in the workforce development system and UI programs on issues such as labor market information, UI claims filing, worker profiling, the work search requirement, reemployment services offered, and data collection and management. In coordination with state officials, we selected two local areas in each state, visiting a mix of urban and rural areas that had been identified by the state as having taken innovative approaches to providing reemployment services to UI claimants. Local areas selected for site visits are shown in table 3. In the local areas, we met with local workforce officials at one-stop or career centers to collect information on UI claims filing procedures, reemployment services offered, how these services are targeted to UI claimants, how UI claimants are linked to services, enforcement of work search requirements, and data collection and use. We also talked with officials at state telephone call centers in Maryland, Michigan, and Washington; a problem resolution office in Lansing, Michigan; and the Professional Outplacement Assistance Center in Columbia, Maryland. We attempted to corroborate the responses collected through the telephone interview and supplemental data collection instruments. To the extent possible, for the states we visited, we compared responses gathered through our instruments with information we collected during those visits. Other sources that could have served as comparisons for some items related to unemployment insurance claimants, such as Administrative Data Research and Evaluation (ADARE), were not yet available at the time of our work. Based on the comparisons we made, and on discussions and interviews with agency staff, officials, and outside experts, we believe the data are sufficiently reliable for providing information on UI claims and claimants and reemployment services.
At the time of our survey, 39 states reported that they accepted telephone initial claims, and 34 said they took Internet initial claims (table 4). Additionally, 29 states reported that they used both remote filing methods, and 6 states said they did not currently accept initial claims remotely by either telephone or Internet. Several states that currently use a single remote filing method—Internet or telephone—indicated to us that they have plans to begin accepting claims by both methods in the future.

Dianne Blank, Assistant Director
Janice Peterson, Analyst-in-Charge

In addition, the following staff made major contributions to this report: Karyn Angulo and Andrew Bauck served as team members and assisted with all phases of the effort; Jennifer Miller, Alison Pan, and Leslie Sarapu assisted with data collection; Kevin Jackson advised on design and methodology issues; Erin Daugherty, Theresa Chen, R. Jerry Aiken, and Catherine Hurley assisted with data analysis; Susan Bernstein and Stan Stenersen advised on report preparation; Jessica Botsford advised on legal issues; and Lise Levie and Regina Santucci verified our findings.

Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005.
Unemployment Insurance: Information on Benefit Receipt. GAO-05-291. Washington, D.C.: March 17, 2005.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
Workforce Training: Almost Half of States Fund Employment Placement and Training through Employer Taxes and Most Coordinate with Federally Funded Programs. GAO-04-282. Washington, D.C.: February 13, 2004.
Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.
Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003.
Unemployment Insurance: States' Use of the 2002 Reed Act Distribution. GAO-03-496. Washington, D.C.: March 6, 2003.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Unemployment Insurance: Role as Safety Net for Low-Wage Workers Is Limited. GAO-01-181. Washington, D.C.: December 29, 2000.
With unemployed workers at a greater risk of long-term unemployment than in the past, it is increasingly important to quickly connect Unemployment Insurance (UI) claimants with reemployment activities. However, the shift to remote claims filing in many states has raised concerns about maintaining a connection between the UI program and reemployment services. This report examines (1) the extent to which states have shifted to remote claims filing and how they are making claimants aware of program requirements and services, (2) what states are doing to facilitate reemployment of UI claimants, and (3) what is known about the extent to which UI claimants receive reemployment services and about their outcomes. Nearly all states accept most initial UI claims remotely by telephone, the Internet, or both. Even though claimants filing remotely no longer have face-to-face contact with UI staff at the time the claim is filed, all states told us they have found ways to provide information on eligibility requirements and reemployment services to individuals filing claims, such as by including this information in the scripts used by claims takers at UI call centers or as documents on Web pages. Officials from most states told us the shift to remote claims has not diminished their ability to provide information or deliver services to claimants. In fact, some report that this shift may have improved their ability to serve their customers. Across states, claimants have access to a variety of reemployment services, and states make use of UI program requirements to connect claimants with available services at various points in their claim. All federally approved state UI programs require that claimants be able and available to work, and in many states these requirements also serve to link claimants to reemployment services. States also engage some claimants in reemployment services through programs that identify certain groups for more targeted assistance. States primarily target reemployment services to claimants identified, through federally required claimant profiling programs, as most likely to exhaust their UI benefits before finding work. Little is known about the extent to which claimants receive services from the broad array of programs designed to assist them or about the outcomes they achieve. States must meet a number of federal reporting requirements for their UI and employment and training programs, but none of these reports provides a complete picture of the services received or the outcomes obtained by UI claimants. GAO also found that few states monitor the extent to which claimants are receiving these services, and even fewer monitor outcomes for these claimants, largely due to limited information systems capabilities. Labor has some initiatives that may begin to shed light on claimant services and outcomes, but none will provide a complete picture.
From World War II to the end of the Cold War, the United States and the former Soviet Union produced large quantities of plutonium to build nuclear weapons. With the lessening of tensions between the United States and Russia, efforts began to reduce both countries' inventories of excess plutonium. In early 1994, Presidents Clinton and Yeltsin endorsed the goal of nuclear arms reduction and directed experts to begin studying options for the long-term disposition of plutonium and other nuclear materials. In 1995, the United States declared that 38.2 metric tons of weapons-grade plutonium was no longer needed for national security and was, therefore, excess. DOE also designated 14.3 metric tons of non-weapons-grade plutonium as excess. Because a portion of the plutonium declared excess is scrap or residue with a low plutonium content, it is unsuitable for fabrication into mixed oxide (MOX) fuel and is better suited for immobilization. According to DOE, plutonium scheduled for disposition will come primarily from (1) metal that may have been in a retired nuclear weapon, (2) oxides, (3) unirradiated fuel, and (4) irradiated fuel. Securing plutonium derived from these sources will require conversion into forms that meet the "spent fuel standard." This standard, which was introduced by the National Academy of Sciences and endorsed by DOE, requires that plutonium be made roughly as unattractive and difficult to retrieve and use in nuclear weapons as the plutonium that exists in spent fuel from commercial nuclear power reactors. DOE plans to convert about 50 metric tons of excess plutonium into forms suitable for eventual disposal. Of the total, DOE plans to immobilize about 17 tons and could process the remainder as MOX fuel, although a final decision on whether to burn or immobilize this plutonium has not been made. As figure 1 shows, it is estimated that Russia has about twice as much weapons-usable plutonium (consisting of weapons-grade and other grades) as the United States. At the April 1996 Summit on Nuclear Safety and Security held in Moscow, the leaders of the G-7 countries plus Russia called for further study of ways to manage excess nuclear materials, including plutonium. In October 1996, representatives from many countries, including the United States and Russia, as well as representatives from private industry, met in Paris and concluded that (1) the safe and effective management of excess nuclear materials is technically feasible; (2) no solution is rapid, simple, and inexpensive; and (3) two existing technologies—burning the plutonium as a fuel in nuclear reactors and immobilizing the plutonium in glass or ceramics—are viable, complementary disposition options. An interagency group has been established in the United States under the joint chairmanship of the White House Office of Science and Technology Policy and the National Security Council to oversee plutonium disposition. DOE, as the agency with primary responsibility for managing the disposition of plutonium, established the Office of Fissile Materials Disposition, which is responsible for implementing nuclear materials storage and disposition. This office has the technical lead for disposition-related technological activities with Russia, which are coordinated by the Office of Science and Technology Policy. U.S. executive branch officials told us that the United States and Russia should ultimately reduce their plutonium stockpiles to equivalent levels.
However, achieving these reductions is a formidable challenge because DOE's immobilization and MOX technologies have not been demonstrated on an industrial scale in the United States, and licensing, regulatory, environmental, economic, and transparency (assurance that plutonium to be dispositioned comes from weapons) issues need to be addressed for both disposition options. Furthermore, Russia may not have the financial resources to implement its program in a time frame comparable to the U.S. disposition schedule. In January 1997, DOE formally announced that it would pursue two technologies to convert excess plutonium to safer, more proliferation-resistant forms. For planning and analysis purposes, DOE anticipates converting about 50 metric tons of excess plutonium over the next 25 years. The total U.S. plutonium inventory is approximately 99.5 metric tons. On the basis of preconceptual design data and preliminary plans, DOE estimates that implementing its plutonium disposition program—excluding long-term storage—will cost approximately $2.2 billion. This amount includes DOE's costs to immobilize plutonium as well as to burn MOX fuel. By pursuing a disposition strategy that uses both technologies, DOE hopes to maximize the likelihood that the U.S. program will be completed successfully. DOE also hopes that the U.S. plan for MOX fuel will provide additional encouragement for Russia to undertake a reciprocal disposition program. According to U.S. government officials, it is ultimately important that both countries agree to reduce their remaining plutonium stockpiles to equivalent levels. The Deputy Minister of Russia's Ministry of Atomic Energy (MINATOM) told us that Russia's only acceptable disposition option for the bulk of its excess plutonium is burning it in nuclear power reactors because Russia considers the plutonium a valuable source of energy. The Deputy Minister also noted that Russia favors burning MOX fuel because this process—unlike immobilization—changes the content of the plutonium, thereby making it difficult to use in a nuclear weapon. However, according to State Department officials, MINATOM's Minister has also stated that immobilization may be acceptable for scrap and low-grade residues. According to DOE officials, the United States will not fully implement its plutonium disposition program unless Russia implements a comparable plutonium disposition program. DOE's Acting Director of the Office of Fissile Materials Disposition told us that it would be unacceptable for DOE to request full funding to convert approximately 50 metric tons of U.S. plutonium into more proliferation-resistant forms without Russia taking corresponding actions. DOE officials told us that, in their opinion, a U.S.-only plutonium disposition program would not be supported by the Congress because it could put the United States at a strategic disadvantage. Furthermore, by acting unilaterally, the United States would lose leverage in future negotiations with Russia on plutonium disposition. A Department of State official told us that other nations would be concerned that a program involving only the United States would have a marginal impact on reducing the worldwide risks of nuclear proliferation. Officials from the U.S. Arms Control and Disarmament Agency (ACDA) noted that there would be risks and costs if the United States did not pursue plutonium disposition, even if Russia does not implement a similar program.
DOE’s plutonium disposition program is expected to be completed in about 25 years but faces technological uncertainties that could increase program costs and time frames because neither disposition technology has been demonstrated on an industrial scale in the United States. Although immobilization has been used for other purposes, it has never been used on a large scale for plutonium disposition. Unresolved questions include how the plutonium will react in the immobilization processing, how stable and durable the immobilized material will be, and how difficult it will be to recover the plutonium from the immobilized forms and use it in nuclear weapons. MOX fuel derived from reactor-grade plutonium has been used extensively in nuclear power reactors throughout Europe, and the technology is well established. Although the technology is well known, the United States has no nuclear power reactors licensed by the Nuclear Regulatory Commission to burn MOX fuel. Furthermore, MOX fuel derived from weapons-grade plutonium has not been burned in commercial nuclear power reactors except on a test basis in Russia. The United States has no facilities to make MOX fuel and DOE has not determined the number or locations of the commercial nuclear power reactors that will be needed to burn MOX fuel. Resolving these issues will depend not only on the development of the disposition technologies but also on contract negotiations with nuclear reactor owners, licensing requirements, and environmental reviews. However, according to DOE, the overall technical risk of either disposition option is relatively low. Uncertainties also exist with the underground repository where DOE plans to permanently dispose of excess plutonium. While DOE assumes that a permanent repository at Yucca Mountain, Nevada, will be ready to accept the plutonium in 2010 (12 years later than originally planned), it can not be certain that the repository will open. DOE is currently assessing the Yucca Mountain site to determine its viability. According to U.S. executive branch officials, Russia’s plutonium disposition efforts are not as advanced as U.S. activities and face impediments, including Russia’s ongoing production of weapons-grade plutonium. Russia produces about 1.5 metric tons of plutonium each year at nuclear reactors at Tomsk and Krasnoyarsk. The plutonium is produced by Russian reactors that also provide heat and electricity to nearby cities. In 1994, Russia agreed to shut down those reactors by 2000. However, in 1997, the United States and Russia signed an agreement to modify the reactors rather than permanently shut them down, as a means of stopping the production of weapons-grade plutonium. The United States has been providing assistance to complete the modifications, although progress in implementing the agreement has been slow. U.S. officials believe, however, that Russia is making some progress toward establishing a framework for a plutonium disposition program. For example, in July 1997, Russia’s President Yeltsin established a committee under his Defense Council to oversee Russia’s plutonium disposition, including developing a plan. Furthermore, in September 1997, President Yeltsin declared that Russia would remove up to 50 tons of plutonium from its stockpile over time—roughly the same amount that the United States declared excess. According to DOE, the costs for the disposition of about 50 metric tons of plutonium in Russia could range from $1 billion to $2 billion. 
In developing a plutonium disposition program, Russia faces the same technological issues as the United States. Furthermore, Russia's ability to undertake a successful program depends upon international financial assistance. According to the Deputy Minister of MINATOM, the pace of Russia's program will depend on the financial support it receives from the international community, including the United States. France and Germany are considering financing—with some Russian support—a pilot facility in Russia to convert plutonium into MOX fuel. French government officials told us, however, that although the donor governments can be expected to provide some of the financing, most of it will have to come from European investors. They noted that private investment is uncertain because potential investors may not be willing to accept the financial risk without some assurance that the MOX fuel fabrication enterprise in Russia will be commercially viable. Officials from DOE, the State Department, and the White House Office of Science and Technology Policy, as well as representatives from some nations with a commercial and/or security interest in supporting Russia's disposition efforts (e.g., France, Germany, Canada, and Belgium), told us that insufficient funding is a major obstacle to implementing a disposition program in Russia. As is the case in the United States, major capital expenditures are needed in Russia to build a plutonium conversion plant, construct a MOX fuel fabrication facility, and modify and license nuclear power reactors to burn the MOX fuel. Russia's limited number of nuclear power reactors capable of burning MOX fuel could affect its ability to disposition its excess plutonium in a time frame comparable to that of the United States. Although Russia has seven operational VVER-1000 pressurized water reactors, which are capable of burning MOX fuel, DOE officials and other experts said that Russia might be able to use up to six of these reactors. In addition, another type of reactor, a BN-600 at Beloyarsk, could be used. According to Canadian officials, if four of Russia's VVER-1000 reactors and the BN-600 reactor were used to burn the MOX fuel, it would take at least 40 years to burn about 50 metric tons of Russia's plutonium. According to DOE, if Russia also used two other VVER-1000 reactors, the plutonium could be burned in 28 years (these two estimates are roughly consistent, as the calculation at the end of this discussion illustrates). A 1996 State Department analysis noted that if Russia's VVER-1000 reactors were used, their planned 30-year operating lives would have to be extended. This extended usage could have an impact on the overall cost of the Russian program because modifications to the reactors may be required. Figure 2 shows the location of Russia's VVER-1000 reactors, the BN-600 reactor, and the sites where weapons-grade plutonium has been or continues to be produced; the numbers within the map symbols show the number of reactors at each site. DOE officials said that 11 additional VVER-1000 reactors operating in Ukraine could be used to burn plutonium, thereby accelerating the rate of disposition. According to DOE, an additional VVER-1000 reactor, if completed, could also be used. Russia's Deputy Minister for Atomic Energy told us that there have been some preliminary discussions with Ukraine's government officials about using their reactors to burn MOX fuel and that the Ukrainian officials did not have serious concerns about doing so.
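The two burn-time estimates above are mutually consistent under a simplifying assumption, made here purely for illustration, that each reactor consumes plutonium at roughly the same rate (in practice, the BN-600's throughput would differ from a VVER-1000's):

\[
\frac{50\ \text{metric tons}}{5\ \text{reactors} \times 40\ \text{years}} \approx 0.25\ \text{metric tons per reactor-year}, \qquad
\frac{50\ \text{metric tons}}{7\ \text{reactors} \times 0.25\ \text{metric tons per reactor-year}} \approx 29\ \text{years},
\]

which is close to DOE's 28-year estimate for the seven-reactor case. Under the same assumption, adding reactors in Ukraine would shorten the schedule roughly in proportion to the total number of reactors burning MOX fuel.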
Although concerns exist about the number of VVER-1000 reactors that Russia may use to burn MOX fuel, experts believe these reactors can burn fuel derived from weapons-grade plutonium safely. Officials from DOE, Oak Ridge National Laboratory, and the International Atomic Energy Agency, as well as representatives from France, Belgium, and Germany, told us that it is technically feasible to use MOX fuel derived from weapons-grade plutonium in these reactors. While some of these officials recognize that additional testing and analysis are required, they told us that there are no major technical impediments to burning MOX fuel safely. According to a September 1996 U.S.-Russian plutonium disposition study, preliminary analyses indicate that the VVER-1000s could safely burn MOX fuel, though some modifications to the reactors might be necessary. The study (1) estimated that the cost to modify the seven VVER-1000 reactors would total $77 million and (2) noted that Russia could complete construction of three partially built VVER-1000 reactors, which could help increase the consumption of MOX fuel. The cost to complete the reactors could range from $500 million to $750 million. According to officials from DOE, the Department of State, and the White House Office of Science and Technology Policy, an agreement between the United States and Russia on plutonium disposition should be negotiated before large-scale expenditures are made for U.S. disposition facilities. These officials said that a bilateral agreement should address such major issues as the following:

- the quantities of plutonium to be dispositioned by both countries and the amounts of plutonium that will remain in their respective military stockpiles;
- the dates when both sides plan to complete the dispositioning of their excess plutonium;
- the methods to ensure that plutonium and disposition facilities are properly safeguarded to reduce the risks of diversion and/or theft;
- assurances that the plutonium to be dispositioned will be subject to verification and inspection measures;
- assurances that the facilities to fabricate MOX fuel will be used only for plutonium disposition until all declared excess weapons plutonium is processed through them and that spent nuclear fuel will not be reprocessed and recycled for continued use in civilian nuclear power reactors as long as Russia has surplus stocks of weapons plutonium; and
- the funding arrangements.

Obtaining agreement with Russia on the procedures to ensure U.S. access to nuclear materials from dismantled weapons may prove difficult. As we reported in September 1996, the United States and Russia were unable to conclude an agreement specifying exactly how prior Russian assurances of access would be implemented at an interim storage facility at Mayak. This facility, which is being constructed partially with U.S. funds, is expected to store 50,000 containers of material from dismantled nuclear weapons in Russia. The lack of progress in agreeing on inspection rights at Mayak is due largely to a U.S.-Russian impasse on completing a broader agreement on reciprocal access measures. Currently, there are no formal negotiations between the United States and Russia on implementing a plutonium disposition program. U.S. government officials told us, however, that such an agreement should be signed within the next 2 to 3 years or else the future of the U.S. disposition program could be jeopardized.
In their view, an agreement should be in place—and Russia needs to begin a parallel program—before the United States begins to spend significant funds to construct U.S. facilities, such as the immobilization facility and associated processing facilities and the MOX fuel fabrication plant. DOE and MINATOM are negotiating a more narrowly focused agreement to address the technical arrangements related to joint testing of disposition technology and pilot-scale demonstrations. However, DOE officials said this agreement does not replace the need for a broader bilateral agreement. DOE has not yet made large capital expenditures for its plutonium disposition program. As figure 3 shows, DOE plans to spend about $550 million during fiscal years 1998 through 2007 on design, construction, and equipment projects for disposition-related activities in the United States, including the construction of a facility to fabricate MOX fuel. DOE officials estimated that the United States will provide between $40 million and $80 million over the next 5 to 7 years to assist Russia's disposition program. Most of this funding is designated to construct a pilot-scale facility in Russia to convert the plutonium metal removed from nuclear warheads into plutonium oxide, a fine powdery substance of plutonium combined with oxygen. Once in this form, the plutonium would be subject to international inspection and could either be immobilized in glass or ceramics or be used in MOX fuel. According to DOE officials, the pilot facility should begin operations in 2005. They also told us that, due to funding uncertainties, the U.S. cost to support Russia's program could increase over time if assistance from other countries is not forthcoming and the United States decides to absorb those costs. During fiscal years 1995 through 1997, DOE budgeted $13.9 million for Russian activities related to plutonium disposition. Of that total, $8.5 million was budgeted for six joint demonstration technology projects, and $5.4 million was budgeted for studies, travel, weapons dismantlement, and support provided by DOE's national laboratories and the Amarillo National Resource Center for Plutonium. The demonstration projects include (1) burning a modified type of MOX fuel in a Canadian reactor, (2) fabricating MOX fuel pellets, (3) validating computer codes for analyzing VVER-1000 reactors, (4) studying the feasibility of converting a Russian reactor so it can burn MOX fuel, (5) studying ways to change plutonium from dismantled nuclear warheads into safer forms and store them, and (6) developing immobilization technologies. Appendix II discusses the status of these demonstration projects. Figure 4 shows the distribution of the $8.5 million among these projects, including $4.1 million for MOX fuel-related projects (verifying safety data, fabricating MOX pellets, and fabricating VVER-1000 MOX fuel) and $2.1 million for plutonium conversion; according to the figure's notes, the component amounts do not sum to $8.5 million due to rounding, and DOE reported that about $4 million had been spent on these projects as of July 31, 1997. Representatives of the U.S. government, private industry, and nongovernmental groups have differing views about the potential effects of DOE's plutonium disposition program on nuclear proliferation. Some representatives contend that DOE's decision to consider burning plutonium in the form of MOX fuel in commercial nuclear power reactors may pave the way for the future use of plutonium in the U.S.
nuclear industry through plutonium reprocessing. Furthermore, there is a concern that Western assistance would help create a MOX fuel industry in Russia that does not now exist and would increase the risk of the diversion or the theft of nuclear material. DOE's decision to burn plutonium in the form of MOX fuel in commercial nuclear reactors has focused attention on plutonium's value as an energy source but also has raised concerns about nuclear proliferation. The United States does not encourage the civilian use of plutonium and does not engage in plutonium reprocessing to generate nuclear power. However, many countries, including France, Belgium, Germany, the United Kingdom, Russia, and Japan, believe that plutonium is a valuable fuel and have programs to reprocess and recycle it. DOE officials and representatives from the U.S. nuclear industry told us that the disposition program does not conflict with or reverse established U.S. policy—as some critics contend—because it does not include reprocessing and recycling and is limited to plutonium that has been separated from nuclear weapons. They have maintained that by burning MOX fuel without reprocessing, the United States is focusing on ultimately eliminating plutonium, not creating more. According to DOE, controls will be placed on the program for fabricating MOX fuel. For example, the U.S. government would own and control the MOX fuel fabrication facility, which would be located at a DOE site. Furthermore, the facility would only be used for the disposition program, and no spent fuel would be reprocessed or recycled. DOE and White House Office of Science and Technology Policy officials stated that DOE's MOX fuel program will not provide the United States any plutonium reprocessing capability that is not now readily available on the commercial market. In contrast, other government officials, a member of Congress, and representatives from nongovernmental organizations, such as the Institute for Energy and Environmental Research, have indicated that DOE's decision to pursue the MOX fuel option may pave the way for the future civilian use of plutonium in the United States. For example, they believe that the disposition program will provide experience in making and using MOX fuel that the United States does not now have. Others maintain that burning MOX fuel will establish a precedent that would serve to justify the future commercial use of plutonium. They also contend that the activities of the civilian nuclear industry have been kept separate from military activities to reduce the risk of nuclear proliferation and to encourage the rest of the world to maintain a similar standard. A November 1996 memorandum from the Director of the U.S. Arms Control and Disarmament Agency (ACDA) highlighted many of these proliferation concerns. According to the Director, (1) using MOX fuel would establish an infrastructure, at least in part, for the domestic civil use of plutonium; (2) employing both disposition technologies would undermine U.S. efforts to discourage the reprocessing of plutonium in other countries, such as South Korea and Russia; and (3) placing the two options on equal footing would be contrary to U.S. nonproliferation policy. Subsequently, ACDA's Director acknowledged that reserving the right to use both the MOX fuel and immobilization options was consistent with U.S. policy. 
ACDA officials told us that their agency's concerns had been significantly tempered because DOE's final disposition plan, announced in January 1997, did not favor one disposition strategy over another. The officials noted, however, that ACDA still favored immobilizing the plutonium rather than burning MOX fuel for the United States because officials believed immobilization appeared to be less costly and quicker to implement and would leave the plutonium no more vulnerable to theft or diversion than would the MOX fuel option. A 1996 analysis prepared by an official from the State Department's Office of Nuclear Energy Affairs concluded that the use of weapons-grade plutonium in Russian nuclear reactors posed certain proliferation risks. The document noted that Western assistance would help create a MOX fuel industry that does not now exist and that Russia might otherwise be unable to build. The use of MOX fuel could provide Russia with the infrastructure to reprocess plutonium for both civilian and military purposes and thereby encourage a plutonium economy. According to DOE officials, however, Russia already has a significant reprocessing capability. The Administration's position has been that (1) a MOX fuel fabrication facility constructed with international assistance in Russia should be used only for the disposition of weapons plutonium and (2) no spent MOX fuel should be reprocessed and recycled at least until all excess weapons plutonium has been processed. State Department officials said they want to preclude Russia's increasing its stockpiles of plutonium as a by-product of converting military plutonium into more proliferation-resistant forms. They also said that Russia has not yet accepted the provision related to the future use of the MOX fuel facility and the reprocessing of spent nuclear fuel. Representatives from France, Belgium, and Canada told us their governments support the U.S. position. DOE's plutonium disposition program faces uncertainties related to costs, licensing, regulatory and environmental issues, and the further development of disposition technologies. Furthermore, the U.S. program depends heavily on Russia's adoption of a similar program that also faces many impediments. Given these uncertainties, DOE is pursuing its own plutonium disposition program, on a modest scale at this time, without Russia's commitment to implement a similar program that proceeds along similar time frames. While the United States ultimately wants to reduce both countries' stockpiles of plutonium to equivalent levels, it is unclear if the Russian government endorses this objective. Furthermore, it is uncertain if Russia—and the international community, including the United States—is willing to make the financial commitment to achieve these reductions in Russia over time. Because of the uncertainties about Russia's commitment to implement a program similar to the U.S. program, the Congress may wish to consider linking DOE's future funding requests for large-scale projects to design and construct plutonium disposition facilities in the United States and Russia to the progress being made in negotiating and signing a bilateral agreement. Furthermore, the Congress may wish to consider requesting that the Department of State, and other appropriate agencies, report periodically on efforts to conclude a plutonium disposition agreement between the United States and Russia. 
We provided copies of a draft of this report to the White House Office of Science and Technology Policy, the departments of Energy and State, and ACDA for review and comment. The Office of Science and Technology Policy provided its own comments and also obtained and consolidated comments from the other agencies. On December 17, 1997, we met with the office's Assistant Director for National Security and DOE's Assistant to the Director for International Programs, Office of Fissile Materials Disposition, to discuss their comments. In general, the agencies agreed with the facts and analysis presented and noted that our report correctly observed that there are uncertainties associated with both the U.S. and Russian plutonium disposition programs. The agencies also noted that MOX fuel technology is well established in Europe. We have expanded our discussion on MOX fuel technology to make it clear that while the technology is widely used in Europe, it has not yet been demonstrated on an industrial scale in the United States. The agencies reiterated that the U.S. government will not begin to commit large amounts of funds to either the U.S. or Russian plutonium disposition programs until Russia commits to a comparable program. Furthermore, they emphasized that both programs should be implemented in roughly parallel time frames. The agencies also provided us with additional clarifying information that we incorporated as appropriate. To address our objectives, we interviewed officials and obtained documents from the departments of State and Energy (and several national laboratories), ACDA, and the White House Office of Science and Technology Policy. We also obtained information from various foreign governments, commercial institutions, and international organizations, including the International Atomic Energy Agency and Russia's Ministry of Atomic Energy. Our scope and methodology are discussed in detail in appendix III. We performed our review from February 1997 through December 1997 in accordance with generally accepted government auditing standards. Unless you publicly announce its content earlier, we plan no further distribution of this report until 5 days from the date of this letter. At that time, we will send copies of this report to other interested congressional committees, the Secretaries of State and Energy, the Assistant to the President for Science and Technology Policy (Office of Science and Technology Policy), the Director of ACDA, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8021 if you have any questions. Major contributors to this report are listed in appendix IV. DOE's programmatic environmental impact statement for plutonium disposition analyzes the disposition of about 50 metric tons of excess weapons-usable plutonium over the next 25 years. Included in that amount is 21.3 metric tons that can be traced to nuclear warheads that have been retired. Retirement refers to an administrative decision to remove the warheads from the nuclear weapons stockpile and to dismantle them. DOE and the Department of Defense conducted a joint review and determined that 21.3 metric tons of plutonium, most of which came from classes of warheads fully retired between 1970 and 1993, was excess to national needs. 
The other 28.7 metric tons in DOE's analysis came from such other plutonium-bearing sources as components, metals, and oxides that were by-products from the production of nuclear weapons. Retirements of warheads have occurred for several reasons, including treaties and weapons modernization efforts that supplant the need for some older or less reliable warheads. For example, a 1991 report of the Committee on Armed Services, U.S. House of Representatives, identified concerns about the W69, a warhead for air-launched missiles on bomber aircraft. The warhead did not have such modern design features as fire-resistant plutonium. The concern was that an accident involving the warhead could scatter plutonium over a wide area or, in the very worst and far less likely case, result in a nuclear explosion. Table 1.1 lists the fully retired classes of warheads that are sources of plutonium scheduled for disposition. In addition, a small portion of the 21.3 metric tons of excess plutonium comes from individual retired warheads among the current classes of warheads. Current warhead classes are listed in table 1.2. This appendix discusses six U.S.-Russian plutonium disposition demonstration projects. These projects include burning MOX fuel in a Canadian reactor, fabricating MOX fuel pellets, validating computer codes for analyzing VVER-1000 reactors, studying the feasibility of converting a Russian reactor so that it can burn MOX fuel, studying ways to change plutonium from dismantled nuclear warheads into safer forms and to store it, and developing immobilization technologies. The purpose of this demonstration project is to examine the technical feasibility of burning weapons-grade plutonium in existing Canadian Deuterium Uranium (CANDU) reactors. The United States and Russia are studying the possibility of using these reactors for this purpose, but a substantial amount of analysis is required. These reactors, which use uranium fuel, may provide a technically attractive option because their design allows them to handle MOX fuel with fewer changes than would be expected with light water reactors. Studies have indicated that CANDU reactors could burn MOX fuel at a greater rate than U.S. reactors. Oak Ridge National Laboratory is coordinating the effort to test MOX fuel from the United States and Russia in a Canadian test reactor—the National Research Universal Reactor. The scope of the project involves fabrication, irradiation, and post-irradiation examination of a small number of MOX fuel rods over 18 months. Fuel rods are hollow metal tubes that contain fuel pellets. Los Alamos National Laboratory has fabricated seven fuel rods for use in the demonstration. Russia's A. A. Bochvar All-Russia Scientific Research Institute is expected to fabricate another 8 to 10 fuel rods to combine with the U.S. fuel rods. As originally conceived in 1995, a total of 92 fuel rods—46 manufactured in the United States by Los Alamos National Laboratory and 46 fabricated in Russia—would be made for assembly in four fuel bundles. The test irradiations and post-irradiation examinations will be conducted at the Chalk River Laboratory in Canada. This trilateral effort will permit evaluation of such technical issues as possible differences between U.S. and Russian MOX fuel performance. DOE had planned to facilitate the signing of a contract between Atomic Energy of Canada Limited, the designer of the CANDU reactor, and the Bochvar Institute in July 1996. 
As part of that effort, DOE would pay for manufacturing the Russian fuel, transporting it to a Russian port, and providing licensing oversight in Russia. The contract, however, was not signed then because of disagreements about the amount of money that would be provided to the Bochvar Institute to fabricate the MOX fuel, the intellectual ownership of the fabrication rights, the legal implications of transporting plutonium outside of Russia, and the possible imposition of Russian taxes on U.S.-funded assistance. The U.S. fuel has not been delivered to Canada because the United States was awaiting resolution of the disagreements concerning the Russian contract. In July 1997, Bochvar Institute officials indicated their agreement to the proposed contract. The signing occurred in September 1997, and the shipment is expected to be made sometime in calendar year 1998. DOE reported expenditures totaling $402,000 for this project as of July 31, 1997, and has planned $455,000 for continued work in fiscal year 1998. The purpose of this demonstration project is to assist and encourage Russia to (1) develop a MOX fuel fabrication process that is compatible with surplus weapons-grade plutonium, (2) test the resulting fuel, and (3) qualify it for use in a VVER-1000 reactor. The data and information collected in this task will be provided to Gosatomnadzor, Russia's nuclear regulatory authority, and Rosenergoatom, the Russian utility that operates the nuclear power reactors, to facilitate the eventual licensing of MOX fuel in Russia. Oak Ridge National Laboratory is responsible for performing the work on behalf of DOE. In January 1997, a contract was signed by the University of Texas at Austin and the A. A. Bochvar All-Russia Scientific Research Institute, which established the statement of work, budget, schedule, and list of deliverables for the initial phase of work. Under the terms of this contract, the Bochvar Institute will receive $210,000 for various technical reports and for manufacturing a limited amount of test fuel related to the use of MOX fuel in VVER-1000 reactors. According to laboratory officials, the program to develop and test MOX fuel will be continued under separate contracts that will be signed with the appropriate Russian organizations. According to the Oak Ridge project manager, the project has made little progress because the Bochvar Institute has not prepared an acceptable plan to test the MOX fuel, has not provided a MOX fuel specification, and has limited ability to handle plutonium on site. The project manager said that the original Russian test plan did not contain the level of detail required to plan and execute the MOX fuel development program. The test plan is critical to the project because it outlines the goals, the time frames, and the estimated costs for manufacturing and testing MOX fuel in Russia. Laboratory officials noted that a contract has been placed with another Russian institute, the Research Institute of Atomic Reactors, to complement the current work and to perform the follow-on work that will require larger plutonium inventories. According to DOE, this institute should be capable of performing the required manufacturing work with limited equipment modifications and upgrades. Because the Bochvar Institute has been designated as the lead technical institute in Russia for all reactor fuel development, it will remain involved with the development program. The delay in the program and the reasons for it have been raised to higher levels within MINATOM without resolution. 
According to DOE, $443,000 had been spent on this project as of July 31, 1997, and DOE has planned $600,000 for continued work in fiscal year 1998. Verified and validated computer codes that can predict the behavior of MOX fuel derived from weapons-grade plutonium are essential for nuclear regulatory organizations to complete their evaluations. This joint U.S.-Russian project is designed to begin the process of verifying and updating these computer codes that both U.S. and Russian regulators will need to license reactors to use MOX fuel. The verification process uses safety data that have been compiled by various international organizations and commercial organizations. Using the results of these verifications in Russia must have the concurrence of the original designer of the VVER-1000 reactor and the Russian institute responsible for the initial calculations of the reactor core's physics. The United States will take similar verification actions once the type of U.S. reactor has been selected. In 1996, the University of Texas at Austin and the Russian Institute of Physics and Power Engineering entered into a $205,000 contract for which Russian authorities were required to provide various deliverables, including verification and validation studies in a form suitable for presentation to the Russian nuclear regulatory agency for licensing approval. Oak Ridge National Laboratory is responsible for coordinating this Russian work on behalf of DOE. Oak Ridge is also working with Russia's Kurchatov Institute, Russia's leading research and development institution in the field of nuclear energy, and the Institute of Physics and Power Engineering. This work is designed to assess the ability of Russian and U.S. computer codes to produce calculations on reactor physics that are consistent with experimental data and with the results produced by computer codes that are available in the international nuclear community. The results of the U.S. and Russian calculations will be evaluated with respect to how well the experimental results were predicted, and the U.S. and Russian results will be compared. This process will provide an independent and parallel validation of the Russian models that may be acceptable to Russia's nuclear regulatory authority. The initial phase of the work has been completed, and Oak Ridge officials indicated that they were pleased with the results. Follow-on work will be started in fiscal year 1998 and will be expanded to validate codes for rapidly changing and accident conditions. DOE reported expenditures totaling $912,000 for this project as of July 31, 1997, and has planned $700,000 for continued work in fiscal year 1998. DOE has agreed to help Russia assess the feasibility of converting Russia's BN-600 reactor, a fast-neutron reactor, into a reactor suitable for burning weapons-grade plutonium. The BN-600 is a demonstration fast breeder reactor (one that produces more plutonium than it consumes) but operates on a fuel cycle that consumes uranium. When converted, the reactor may be used as a net consumer of weapons-grade plutonium. Studies indicated that the reactor would be capable, with modifications to the reactor core, of burning 100 percent MOX fuel. The BN-600 currently uses uranium oxide fuel. To proceed with the conversion plan, significant safety analyses are required. Oak Ridge National Laboratory is responsible for managing the project for DOE and providing technical support. 
Oak Ridge has enlisted the support of the Argonne National Laboratory and the Hanford Site to provide training and computer codes to selected Russian organizations, including the Institute of Physics and Power Engineering. Under the terms of a $100,000 contract between the University of Texas at Austin and the Institute, Russia is responsible for providing several deliverables, including design studies, safety analyses, and an economic analysis. According to DOE, $527,000 had been spent on the project as of July 31, 1997, and DOE has planned $800,000 for this project in fiscal year 1998. One of the critical objectives of the DOE-funded test and demonstration projects is selecting a technology to convert the plutonium weapons components from dismantled nuclear warheads into an oxide form that is suitable for temporary storage, international inspection, and disposition. Once this "front-end" process has been completed, the material can be used in MOX fuel and burned in a nuclear reactor to generate electricity. DOE, working with Los Alamos National Laboratory, is studying plutonium conversion technology as part of its own disposition plan. Los Alamos has also been tasked by DOE to lead a concurrent effort with Russia on plutonium conversion. Neither the United States nor Russia has selected the final conversion process. The goal of the project is to find areas where the United States and Russia can cooperate. In fiscal year 1997, Los Alamos received $2 million to begin a cooperative effort with Russia. DOE is placing significant resources in this program and plans to contribute $40 million to $80 million over the next 5 to 7 years for research and development and for the design and construction of a pilot-scale plutonium conversion facility in Russia. According to DOE and Los Alamos officials, the project with Russia has been delayed. The Bochvar Institute, which will be leading and coordinating research on the project, declined for several months to sign any contracts until an agreement between DOE and MINATOM was signed. One of the Los Alamos officials told us that the Institute wanted to have the internal political protection of this agreement before starting any work. In July 1997, however, the Deputy Minister of MINATOM instructed the Institute to proceed without the agreement in place. According to the Los Alamos official, another difficulty has been that the Bochvar Institute has requested extremely high labor rates, which have been unacceptable to DOE and have also delayed progress. The official, who described these matters as "growing pains" that are to be expected with such a program, believed that the pace of the project was beginning to accelerate as all of the different Russian organizations gained a better understanding of their roles and responsibilities. As of late August 1997, Los Alamos National Laboratory had signed two task orders with the Bochvar Institute totaling $200,000. The first task order, for $78,000, is to develop a master plan for the joint plutonium conversion and disposition project. The plan is expected to outline the steps for determining the optimum process for converting plutonium metal into an oxide. In July 1997, the Institute submitted the draft plan for review; it is being revised and is expected to be finalized in March 1998. As of August 1997, the first deliverable of the task order had been completed, and payments totaling $23,200 had been made to the Institute. 
In late July 1997, the second task order, for $122,000, was signed to initiate tests and analyses that will lead to the design and development of a nondestructive system to disassemble Russia's nuclear weapons. Under this task, Russia is responsible for preparing a design report and a technical demonstration report. According to a Los Alamos official, several additional task orders are being negotiated with the Bochvar Institute to initiate research on various conversion technologies. In addition, a broad feasibility study and design for the pilot demonstration conversion plant are also being developed as a near-term effort. According to DOE, $874,000 had been spent on the project as of July 31, 1997. DOE planned an additional $3 million for this project in fiscal year 1998. DOE, working primarily through Lawrence Livermore National Laboratory—with support from the Savannah River Site and other laboratories—is engaged in projects with Russia to explore various immobilization technologies. As part of its dual-track approach to plutonium disposition, DOE is studying several options, including immobilization in glass or ceramics. DOE is funding small-scale demonstration projects to encourage Russia to consider the technical merits of immobilization as a disposition option and to gain insight into Russia's immobilization technology. The Lawrence Livermore project manager told us that Russian views toward immobilization have generally not been very positive because Russian officials view plutonium as a valuable energy source. As a result, it has been difficult to obtain concurrence on some projects' goals and requirements. He noted, however, that attitudes appear to have changed somewhat over the past several months as dialogues between U.S. and Russian scientists have increased. For example, the July 1997 meeting of the U.S.-Russian Steering Committee in Moscow resulted in a protocol agreement to increase the dialogue by holding a focused U.S.-Russian experts workshop on plutonium stabilization and immobilization. The University of Texas at Austin is funding projects valued at $360,000 with two Russian institutes to perform immobilization tasks related to (1) establishing the migration of plutonium in hard rock formations in order to prepare for eventual siting, designing, and licensing of a geological repository and (2) providing tests and demonstrations to incorporate plutonium in glass using Russian technologies. One task, valued at $100,000, includes a technical exchange meeting at Lawrence Livermore National Laboratory, the purchase of equipment used to obtain samples of rock cores from a site in the Krasnoyarsk region of Siberia, and elevated pressure and temperature tests with plutonium in Russia. The second task, valued at $260,000, which began in January 1997, has been delayed. Under the terms of its contract, the United States is obligated to provide sample glass-fused material to the Bochvar Institute for testing. However, the release of the material was significantly delayed due to export control requirements. In the interim, U.S. requirements changed, and the information pertaining to unique Russian melter technology and the Russian data on U.S. glass compositions will no longer be needed. Lawrence Livermore is currently working with the University of Texas to modify the contract for no extra cost and to extend the time frames. 
The proposed modification would be for studying Russian-selected glass compositions capable of containing high concentrations of plutonium using Russian technology. According to DOE, $863,000 had been spent on this project as of July 31, 1997. DOE has budgeted $1.1 million for continued work on this project in fiscal year 1998. To obtain information about plutonium disposition issues, we interviewed and obtained pertinent documents from officials at the Department of State, the U.S. Arms Control and Disarmament Agency, DOE, and the White House Office of Science and Technology Policy. We also met with the Deputy Minister of Russia's Ministry of Atomic Energy (MINATOM), who is responsible for matters relating to plutonium disposition. In the course of our review, we also attended several forums that focused on plutonium disposition issues. We attended the Fourth International Policy Forum on the Management and Disposition of Nuclear Weapons Material (Lansdowne, Virginia) and two sessions sponsored by the Nuclear Energy Institute and the Nuclear Regulatory Commission on licensing issues related to the fabrication of MOX fuel. We also met with the chairman of the U.S. delegation to the U.S.-Russia Independent Scientific Commission on the Disposition of Excess Weapons Plutonium. Cost information was obtained primarily from DOE's Office of Fissile Materials Disposition. We did not independently verify the accuracy of the cost data they provided. We obtained information on the status of various joint demonstration projects from DOE, Lawrence Livermore National Laboratory, Livermore, California; Oak Ridge National Laboratory, Oak Ridge, Tennessee; Los Alamos National Laboratory, Los Alamos, New Mexico; and the Amarillo National Resource Center for Plutonium. We also met with representatives from Sandia National Laboratories (Rosslyn, Virginia office). To obtain information about the nonproliferation implications of DOE's plutonium disposition program, we obtained the views of numerous governmental and nongovernmental organizations. Representatives from nongovernmental organizations included the Nuclear Energy Institute, the Natural Resources Defense Council, the Nuclear Control Institute, the Institute for Energy and Environmental Research, the Union of Concerned Scientists, Greenpeace, and the Nuclear Information Resource Service. We also obtained information from the International Atomic Energy Agency (Vienna, Austria), BNFL Inc., and COGEMA, Inc. We obtained the views of foreign governments on matters pertaining to plutonium disposition. We met with officials from the government of France, including the Ministry of Foreign Affairs and Atomic Energy Commission. We also obtained information from the governments of Belgium, Canada, and Germany. We attempted to obtain information from the governments of the United Kingdom and Ukraine via inquiries made through their embassies in Washington, D.C. Neither the United Kingdom nor Ukraine responded to our inquiries. To obtain information on U.S. nuclear weapons that are sources of plutonium for DOE's disposition plan, we interviewed DOE officials who provided documents and discussed the types of plutonium for disposition and the amounts that would come from retired nuclear weapons. We also obtained additional information about particular types of weapons from two documents: Nuclear Weapons Databook: U.S. Nuclear Forces and Capabilities and U.S. Nuclear Weapons: The Secret History. 
These documents are considered to be authoritative, publicly available sources on the topic. The National Security Council declined to meet with us and stated that it did not possess any information that could not be obtained from other U.S. government agencies. Jackie A. Goff, Senior Attorney
Pursuant to a congressional request, GAO provided information on: (1) the goals of the Department of Energy's (DOE) plutonium disposition program and the impediments facing its implementation; (2) U.S. government officials' views on the importance of a U.S.-Russian agreement on plutonium disposition and the status of efforts to negotiate an agreement; (3) the costs to implement plutonium disposition programs in the United States and Russia; (4) experts' views about the potential nonproliferation impacts of the U.S. plutonium disposition program; and (5) surplus nuclear weapons that are among the sources of plutonium for DOE's disposition plan. GAO noted that: (1) DOE's plutonium disposition program seeks to decrease the risk of nuclear proliferation by reducing U.S. plutonium stockpiles by about half over the next 25 years and by influencing Russia to take reciprocal actions, with the goal of reducing Russia's stockpiles to U.S. levels; (2) achieving these mutual reductions is a challenge because DOE's immobilization and mixed oxide fuel technologies have not yet been demonstrated on an industrial scale in the United States, and licensing, regulatory, and environmental issues will need to be addressed for both options; (3) the Russian plutonium stockpile is estimated to be about twice as large as the U.S. stockpile, and Russia may not have the financial resources to implement its program in a time frame comparable to that of the United States; (4) according to some U.S. executive branch officials, the success of the U.S. plutonium disposition program depends on Russia's implementing a similar program because a U.S.-only program could be seen as putting the United States at a strategic disadvantage and would not be supported by Congress or the international community; (5) executive branch officials told GAO that a plutonium disposition agreement between the United States and Russia should be negotiated before large-scale expenditures are made for U.S. plutonium disposition facilities; (6) no formal negotiations have begun to implement such an agreement; (7) DOE's preliminary estimates indicate that implementing the U.S. disposition program, which focuses on two technologies to convert plutonium to safer, more proliferation-resistant forms, could cost approximately $2.2 billion over the next 25 years; (8) the cost for a similar program in Russia could range between $1 billion and $2 billion, according to DOE's estimates; (9) U.S. assistance to Russia's program is expected to total between $40 million and $80 million over the next 5 to 7 years and includes plans to construct a pilot-scale plutonium conversion facility; (10) differing views exist about the potential nuclear nonproliferation impacts of DOE's plutonium disposition program and include: (a) a contention that DOE's consideration of burning plutonium in commercial nuclear reactors may pave the way for plutonium recycling and reverse a long-standing policy; and (b) a concern that an industry for mixed oxide fuel would be created in Russia that would increase opportunities for diversion and theft of nuclear materials; and (11) Department of State officials state that these and other issues will have to be addressed in a future binding agreement with Russia.
Within the past 6 months, millions of Medicare beneficiaries have been making important decisions about their prescription drug coverage and have needed access to information about the new Part D benefit to make appropriate choices. CMS faced a tremendous challenge in responding to this need and, within short time frames, developed a range of outreach and educational materials to inform beneficiaries and their advisers about Part D. To disseminate these materials, CMS largely added information to existing resources, including written documents, such as Medicare & You; the 1-800-MEDICARE help line; the Medicare Web site; and support for SHIPs. However, CMS has not ensured that its communications to beneficiaries and their advisers are provided in a manner that is consistently clear, complete, accurate, and usable. Six months have passed since these materials were first made available to beneficiaries, and their limitations could result in confusion among those seeking to make coverage decisions. Although the initial enrollment period for Part D will end on May 15, 2006, CMS will continue to play a pivotal role in providing beneficiaries with information about the drug benefit during the year and in subsequent enrollment periods. CMS has an opportunity to enhance its communications on the Part D benefit. This would allow beneficiaries and their advisers to be better prepared when deciding whether to enroll in the benefit, and if enrolling, which drug plan to choose. In order to improve the Part D benefit education and outreach materials that CMS provides to Medicare beneficiaries, we are recommending that the CMS Administrator take the following four actions:

Ensure that CMS's written documents describe the Part D benefit in a manner that is consistent with commonly recognized communications guidelines and that is responsive to the intended audience's needs.

Determine why CSRs frequently do not search for available drug plans if the caller does not provide personal identifying information.

Monitor the accuracy and completeness of CSRs' responses to callers' inquiries and identify tools targeted to improve their performance in responding to questions concerning the Part D benefit, such as additional scripts and training.

Improve the usability of the Part D portion of the Medicare Web site by refining Web-based tools, providing workable site navigation features and links, and making Web-based forms easier to use and correct.

We received written comments on a draft of this report from CMS (see app. III). CMS said that it did not believe our findings presented a complete and accurate picture of its Part D communications activities. CMS discussed several concerns regarding our findings on its written documents and the 1-800-MEDICARE help line. However, CMS did not disagree with our findings regarding the Medicare Web site or the role of SHIPs. CMS also said that it supports the goals of our recommendations and is already taking steps to implement them, such as continually enhancing and refining its Web-based tools. CMS discussed concerns regarding the completeness and accuracy of our findings in terms of activities we did not examine, as well as those we did. 
CMS stated that our findings were not complete because our report did not examine all of the agency's efforts to educate Medicare beneficiaries and specifically mentioned that we did not examine the broad array of communication tools it has made available, including the development of its network of grassroots partners throughout the country. We recognize that CMS has taken advantage of many vehicles to communicate with beneficiaries and their advisers. However, we focused our work on the four specific mechanisms that we believed would have the greatest impact on beneficiaries—written materials, the 1-800-MEDICARE help line, the Medicare Web site, and the SHIPs. In addition, CMS stated that our report is based on information from January and February 2006, and that it has undertaken a number of activities since then to address the problems we identified. Although we appreciate CMS's efforts to improve its Part D communications to beneficiaries on an ongoing basis, we believe it is unlikely that the problems we identified in this report could have been corrected yet, given their nature and scope. CMS raised two concerns with our examination of a sample of written materials. First, it criticized our use of readability tests to assess the clarity of the six sample documents we reviewed. For example, CMS said that common multisyllabic words would inappropriately inflate the reading level. However, we found that reading levels remained high after adjusting for 26 multisyllabic words a Medicare beneficiary would encounter, such as Social Security Administration. CMS also pointed out that some experts find such assessments to be misleading. Because we recognize that there is some controversy surrounding the use of reading levels, we included two additional assessments to supplement this readability analysis—the assessment of design and organization of the sample documents based on 60 commonly recognized communications guidelines and an examination of the usability of six sample documents, involving 11 beneficiaries and 5 advisers. Second, CMS expressed concern about our examination of the usability of the six sample documents. The participating beneficiaries and advisers were called on to perform 18 specified tasks after reading the selected materials, including a section of the Medicare & You handbook. CMS suggested that the task asking beneficiaries and advisers to calculate their out-of-pocket drug costs was inappropriate because there are many other tools that can be used to more effectively compare costs. We do not disagree with CMS that there are a number of ways beneficiaries may complete this calculation; however, we nonetheless believe that it is important that beneficiaries be able to complete this task on the basis of reading Medicare & You, which, as CMS points out, is widely disseminated to beneficiaries, reaching all beneficiary households each year. In addition, CMS noted that it was not able to examine our detailed methodology regarding the clarity of written materials—including assessments performed by one of our contractors concerning readability and document design and organization. We plan to share this information with CMS once our report has become public. Finally, CMS took issue with one aspect of our evaluation of the 1-800-MEDICARE help line. Specifically, CMS said the 41 percent accuracy rate associated with one of the five questions we asked was misleading because, according to CMS, we failed to analyze 35 of the 100 responses. However, we disagree. 
This question addressed which drug plan would cost the least for a beneficiary with certain specified prescription drug needs. We analyzed these 35 responses to this question and found the responses to be inappropriate. The CSRs would not provide us with the information we were seeking because we did not supply personal identifying information, such as the beneficiary's Medicare number or date of birth. We considered such responses inappropriate because the CSRs could have answered this question without personal identifying information by using CMS's Web-based prescription drug plan finder tool. Although CMS said that it has emphasized to CSRs, through training and broadcast messages, that it is permissible to provide the information we requested without requiring information that would personally identify a beneficiary, in these 35 instances, the CSR simply told us that our question could not be answered. CMS also said that the bulk of these inappropriate responses were related to our request that the CSR use only brand-name drugs. This is incorrect—none of these 35 responses were considered incorrect or inappropriate because of a request that the CSR use only brand-name drugs—as that was not part of our question. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Among the sample documents we reviewed was Medicare & You (Section 6: Medicare Prescription Drug Coverage). Dual-eligible beneficiaries are Medicare beneficiaries who receive full Medicaid benefits for services not covered by Medicare. Medicare Advantage replaces the Medicare+Choice managed care program and expands the availability of private health plan options to Medicare beneficiaries. Another sample document was Do You Have a Medigap Policy with Prescription Drug Coverage? Medigap policies are sold by private insurers to help pay for Medicare cost-sharing requirements, as well as for some services not provided by Medicare. We tested the usability of sample documents with 16 participants—11 Medicare beneficiaries, including 1 disabled beneficiary who was under 65, and 5 advisers to beneficiaries. Everyone was asked to perform 18 specified tasks related to enrollment, coverage, costs, penalty, and informational resources. They were also asked to provide feedback about their experiences. Although the size of the group was small, research shows that as few as 5 individuals can provide meaningful insights into common problems. We also reviewed the sample documents for consistency with laws, regulations, and CMS guidance. We found that documents describing the Part D benefit are written at a reading level that is difficult for many seniors. Reading levels for the sample documents were challenging for at least the 40 percent of seniors who read at or below the 5th grade level. Reading level estimates for the sample texts ranged from 7th grade to postcollege level. 
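The report does not identify which three standard readability tests were applied; the Flesch-Kincaid grade level is one widely used example. The following minimal Python sketch is a rough illustration rather than the contractor's actual method: the syllable counter is a crude heuristic, and the sample sentence is hypothetical. It shows how such a grade estimate is computed and how collapsing a familiar multisyllabic program term into a single short token, one way to approximate the multisyllabic-word adjustment discussed in this report, lowers the estimated grade.

```python
import re

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical sample sentence containing one multisyllabic program term.
text = ("Contact the Social Security Administration to find out whether "
        "you qualify for extra help with your prescription drug costs.")
print(round(fk_grade(text), 1))

# Treating the familiar term as one short token approximates the kind of
# adjustment described in the report and lowers the estimated grade level.
print(round(fk_grade(text.replace("Social Security Administration", "SSA")), 1))
```

On this hypothetical sentence, the substitution drops the estimate by several grades, which is consistent with the report's observation that reading levels remained challenging even after the adjustment.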
Reading levels remain challenging for at least 40 percent of seniors even after adjusting for 26 multisyllabic words, such as Medicare, Medicare Advantage, and Social Security Administration. After the adjustment, the estimated reading level ranged from 8th to 12th grade. Estimates have a likely margin of error of ± two grades. The sample documents demonstrated adherence to about half of the 60 commonly recognized written communications guidelines, on average. Desirable features: The documents were written with a respectful and polite tone, were free of clichés and slang, contained useful contact information, included concise and descriptive headings, and generally followed graphic and formatting guidelines. Undesirable features: The documents used too much technical jargon, often did not define difficult terms, included sentences and some paragraphs that were too long, and did not use sufficient summaries to assist the reader in identifying key points. Participants were frustrated by the documents' lack of clarity and often could not complete the 18 assigned tasks. One of the 18 assigned tasks was completed by all beneficiaries and advisers. Eleven of the 18 assigned tasks were completed by at least half of the beneficiaries and advisers. Four of the 18 assigned tasks were completed by 2 or fewer of the 11 beneficiaries. Nine of the 18 assigned tasks were completed by 2 or fewer of the 5 advisers. Tasks that participants found especially difficult included computing projected total out-of-pocket costs for a plan that provided Part D's standard coverage (successfully completed by none of the 11 beneficiaries and 2 of the 5 advisers; a simplified sketch of this calculation appears below), evaluating whether it was possible to enroll in Medicare Part D and keep drug coverage from a retiree health plan (successfully completed by 2 beneficiaries and 2 advisers), and determining the course of action for dual-eligibles who are automatically enrolled in a plan that does not cover all drugs used (successfully completed by 4 beneficiaries and 1 adviser). Participants also found the documents difficult to follow. Participants struggled with technical terms, such as "classes of commonly prescribed drugs" and "formulary," which is a list of drugs covered by a plan. Even when most participants were able to complete the tasks, they expressed confusion and frustration. The documents generally provided relevant contact information, which could aid in identifying next steps for coverage decisions. All documents reviewed provided the dates of the start of initial program enrollment and coverage. However, the documents did not clearly explain the cumulative effect of the penalty for missing the initial enrollment deadline. We found that the information in the sample documents was generally accurate and that the text was consistent with MMA, implementing regulations, and agency guidance. However, we noted a few misleading statements in Medicare & You. The document implied that if a beneficiary's doctor applied for an exception it would be granted, whereas exceptions to the formulary are granted at each plan sponsor's discretion. The document outlined the minimum requirements for standard coverage by Part D plans. However, it did not indicate that few plans offer this exact coverage and that beneficiaries should be prepared to compare plans with varying premiums, co-payments, and covered drugs to choose plans that best suit them. To evaluate the accuracy of CSRs' responses to our five questions, we used three resources: the prescription drug finder tool on the Medicare Web site, the 1-800-MEDICARE scripts prepared by CMS and contractors for CSRs to use in responding to callers' questions, and input from CMS officials on the criteria we used to evaluate responses. 
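The out-of-pocket calculation noted above involves multistep arithmetic, which helps explain why so few participants completed it. The sketch below illustrates the computation, excluding premiums, using the 2006 standard benefit parameters ($250 deductible, $2,250 initial coverage limit, $3,600 out-of-pocket threshold, and roughly 5 percent catastrophic coinsurance); these parameter values are assumptions based on the published 2006 standard benefit design, not figures taken from this report.

```python
def standard_oop_2006(total_drug_cost):
    """Projected beneficiary out-of-pocket drug costs (excluding premiums)
    under an assumed 2006 Part D standard benefit design."""
    DEDUCTIBLE, LIMIT, TROOP, CATASTROPHIC = 250.0, 2250.0, 3600.0, 0.05
    if total_drug_cost <= DEDUCTIBLE:
        return total_drug_cost
    # 25% coinsurance between the deductible and the initial coverage limit.
    oop = DEDUCTIBLE + 0.25 * (min(total_drug_cost, LIMIT) - DEDUCTIBLE)
    if total_drug_cost <= LIMIT:
        return oop
    # Coverage gap: the beneficiary pays all costs until reaching the cap.
    oop += min(total_drug_cost - LIMIT, TROOP - oop)
    if oop < TROOP:
        return oop
    # Catastrophic coverage begins once out-of-pocket spending hits $3,600,
    # which under these parameters occurs at $5,100 in total drug costs.
    catastrophic_start = LIMIT + (TROOP - (DEDUCTIBLE + 0.25 * (LIMIT - DEDUCTIBLE)))
    return TROOP + CATASTROPHIC * (total_drug_cost - catastrophic_start)

for cost in (200, 2000, 5100, 10000):
    print(cost, round(standard_oop_2006(cost), 2))
```

Even in this simplified form, the calculation requires tracking four spending phases, which is consistent with the finding that beneficiaries could not complete it from the written materials alone.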
To evaluate the promptness of the help line in answering calls, we recorded the length of time it took to connect to a CSR for each call. The five questions we asked and the criteria we used were as follows:

1. Which prescription drug plan would cost the least for a beneficiary with certain specified prescription drug needs? An accurate and complete response would identify the prescription drug plan that has the lowest estimated annual cost for the drugs the beneficiary uses.

2. Can a beneficiary who is in a nursing home and not on Medicaid sign up for a prescription drug plan? An accurate and complete response would indicate that such a beneficiary can choose whether to enroll in a Medicare prescription drug plan.

3. Can a beneficiary who has a Medigap policy with prescription drug coverage enroll in the drug benefit? An accurate and complete response would inform the caller that enrolling for the prescription drug benefit would depend on whether the beneficiary's Medigap plan was creditable—that is, whether the coverage it provided was at least as good as Medicare's standard prescription drug coverage—or noncreditable. The CSR response would also mention that the beneficiary's Medigap plan should have sent him/her information that outlines options.

4. What options does a beneficiary with retiree health insurance have for obtaining drug coverage? An accurate and complete response would indicate that a beneficiary has two options: (1) keep current health plan and join the prescription drug plan later with a penalty; or (2) drop current coverage and join a Medicare drug plan.

5. How do I know if a beneficiary qualifies for extra help? An accurate and complete response would refer the beneficiary to the Social Security Administration.

The accuracy and completeness of responses to our five questions varied significantly, from 41 percent to 90 percent. CSRs accurately and completely answered question 5 (whether a beneficiary qualifies for extra help), which had a specific script, 90 percent of the time. CSRs accurately and completely answered question 2 (whether a beneficiary in a nursing home, who was not on Medicaid, could sign up for the drug benefit) 79 percent of the time—even though there was no specific script for the question. CSRs' responses for question 3 (whether a beneficiary with a Medigap policy could enroll in the drug benefit) were accurate and complete 66 percent of the time. Many of the responses were inaccurate because they did not provide adequate information about creditable and noncreditable coverage. The accuracy and completeness rate for question 4 (about retiree health insurance) was 58 percent. Many of the responses were inaccurate because the CSRs did not follow the available script or provide sufficient information about the implications of the beneficiary's decision. CSRs' responses to question 1 (which requires CSRs to use the prescription drug plan finder Web tool) were accurate and complete less than 50 percent of the time. This rate was largely caused by CSRs' inappropriate responses—35 out of 100 times—that they were unable to answer the question without personal identifying information, such as the beneficiary's Medicare number or date of birth. In some cases, we could not evaluate responses because the CSR inadvertently disconnected the call (19 calls), intentional disconnections were programmed by the telephone company when wait times were projected to exceed 20 minutes (3 calls), or the prescription drug plan finder Web tool used by CSRs was not operative at the time of our call (1 call). Wait times varied significantly, ranging from no wait to more than 55 minutes. About 75 percent of calls were connected in less than 5 minutes. For calls where we waited more than 5 minutes to speak to a CSR, the wait time ranged from 5 minutes to over 55 minutes. Sixty-two calls were on hold from 5 to 14 minutes, 59 seconds. Thirty-nine calls were on hold from 15 to 24 minutes, 59 seconds. Twenty-five calls were on hold 25 minutes or more. 
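The figures above are simple tabulations over the 500 call records. As a rough illustration (the call records below are hypothetical, and the field layout is ours, not GAO's actual data), the following Python sketch computes per-question accuracy rates and bins wait times into the intervals used above.

```python
from collections import Counter

# Hypothetical call records: (question number, outcome, wait in minutes).
# Outcome categories follow the report: "accurate" (accurate and complete),
# "inaccurate", and "inappropriate" (CSR declined to answer without
# personal identifying information).
calls = [
    (1, "accurate", 3.0), (1, "inappropriate", 12.5), (2, "accurate", 16.1),
    (3, "inaccurate", 7.2), (4, "accurate", 26.0), (5, "accurate", 0.5),
]

asked = Counter(q for q, _, _ in calls)
accurate = Counter(q for q, outcome, _ in calls if outcome == "accurate")
for q in sorted(asked):
    print(f"Question {q}: {100 * accurate[q] / asked[q]:.0f}% accurate and complete")

# Wait-time distribution using the report's intervals.
bins = Counter()
for _, _, wait in calls:
    if wait < 5:
        bins["under 5 minutes"] += 1
    elif wait < 15:
        bins["5 to 14:59"] += 1
    elif wait < 25:
        bins["15 to 24:59"] += 1
    else:
        bins["25 minutes or more"] += 1
print(dict(bins))
```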
For both intentional and unintentional disconnections, we often waited more than 5 minutes before the disconnection occurred. In one case, we were placed on hold for 54 minutes before being disconnected. Section 508 of the Rehabilitation Act of 1973, as amended, requires federal agencies to make information and services fully available to individuals with disabilities. Our review included an examination of CMS's March 2006 report assessing the compliance of its Medicare Web site with this federal requirement and discussions with CMS officials. NN/g performed the following three separate evaluations: Evaluation one: NN/g calculated an overall score of the site's usability, to reflect the ease of finding necessary information and performing various tasks. For this calculation, NN/g considered various factors, such as site navigation, customer support, and presentation of online forms. Evaluation two: NN/g evaluated in detail the usability of 137 detailed aspects of the Part D benefit portion of the Web site. Topics included Web design (e.g., home page, navigation, search function, graphics, and organization); tools (e.g., plan finder); writing style (e.g., tone, content, legibility, and readability); accessibility (e.g., availability of site version for the blind); and languages (e.g., links for users who have difficulty reading English). Evaluation three: NN/g conducted a total of 34 user tests to determine the ease of performing a variety of Web-related tasks, such as browsing the site, making a change in address, finding plan information under certain scenarios, comparing Medigap and Part D drug coverage, and determining how to join a plan. NN/g asked five Medicare beneficiaries—who were not disabled—and two advisers to beneficiaries to perform one or more user tests each using the Web site. At the end of the user tests, the seven participants were asked to provide feedback about their experiences. For evaluation two, NN/g found that information to assist navigation was often not helpful—for example, text labels associated with links were not always functioning; and that the writing style presented some challenges—for example, material was written at the 11th grade level. For evaluation three, the 34 user tests showed that the site was a challenge for the seven participants to use. For example: For 12 of the 34 tests, participants' initial reactions were that they would not be able to complete the tests and wanted to quit trying. On average, participants were able to proceed slightly more than halfway through each of the 34 tests. When asked for feedback on their experience with using the site, the seven participants, on average, indicated high frustration levels and low satisfaction. CMS's March 2006 compliance review showed that two requirements were not met: The plan finder did not provide alternative text for all images—that is, there was no text for the screen reader to read. Therefore, images could not be translated into spoken words for the visually impaired. The plan finder did not allow screen readers to recognize form fields and translate forms into spoken words. As a result, visually impaired users would not have been able to complete Web-based forms. A CMS official told us that the agency made the necessary corrections on April 20, 2006, but we did not verify that these corrections were made. A SHIP grant year begins on April 1 of the year the funds become available. The number of calls referred from the 1-800-MEDICARE help line to SHIPs has increased significantly. 
The monthly average number of calls referred to SHIPs increased from 16,000 referrals for May through September 2005 to approximately 43,000 for October and November 2005, the months around the time when enrollment in the Part D benefit began. According to CMS officials, this increased demand was influenced by callers seeking advice on choosing a drug plan. Unlike CSRs on the help line, SHIP counselors can offer individualized guidance to callers. California reported increased demand for SHIP services, compared to about 35,000 clients served in all of 2005. Florida, mostly during November and December of 2005, held at least six "phone bank" events—where SHIP counselors were available to take calls on the Part D benefit during live newscasts. Florida plans to hold two additional phone banks as the May 15 enrollment deadline approaches. New York reported nearly doubling its formal training sessions for SHIP counselors in 2005, to prepare them for the demand for services related to the Part D benefit. Texas counseled 45,719 clients and conducted 523 outreach events from November 15, 2005—the official start of the enrollment period—to March 22, 2006. Pennsylvania held over 3,000 enrollment events, which were attended by more than 130,000 people, from May 2005 to February 28, 2006. CMS also holds meetings with its regional offices, which interact directly with SHIP offices, to gauge SHIPs' ability to meet the demands of beneficiaries. In this report, we assessed (1) the extent to which the Centers for Medicare & Medicaid Services' (CMS) written documents describe the Medicare Part D prescription drug benefit in a clear, complete, and accurate manner; (2) the effectiveness of CMS's 1-800-MEDICARE help line in providing accurate, complete, and prompt responses to callers inquiring about the Part D benefit; (3) whether CMS's Medicare Web site presents information on the Part D benefit in a usable manner; and (4) how CMS has used State Health Insurance Assistance Programs (SHIP) to respond to the needs of Medicare beneficiaries for information on the Part D benefit. To obtain information on CMS's efforts to educate beneficiaries about Part D, we interviewed agency officials responsible for Part D written documents, the 1-800-MEDICARE help line, the Medicare Web site, and SHIPs. Following our briefing of congressional staff on April 19, 2006, the briefing slides were updated to reflect CMS's reported correction to the Medicare Web site to comply with section 508 of the Rehabilitation Act of 1973. We determined that the data used were sufficiently reliable for the purposes of this report. To assess the clarity, completeness, and accuracy of written documents, we compiled a list of all available CMS-issued Part D benefit publications intended to inform beneficiaries and their advisers and selected a sample of 6 from the 70 CMS documents available, as of December 7, 2005, for in-depth review, as shown in table 1. The sample Part D documents were chosen to represent a variety of publication types, such as frequently asked questions and fact sheets available to beneficiaries about the Part D drug benefit. We selected documents that targeted all beneficiaries or those with unique drug coverage concerns, such as dual-eligibles and beneficiaries with Medigap. To evaluate clarity, we contracted with the American Institutes for Research (AIR)—a firm with experience in evaluating written material. AIR evaluated the texts of the six sample documents using three methodologies: 1. three standard readability tests; 2. 
(1) three standard readability tests; (2) 60 commonly recognized written communications guidelines, including practices to aid senior readers; and (3) user testing with 11 Medicare beneficiaries and 5 advisers to beneficiaries, who performed 18 specified tasks related to enrollment, coverage, cost, penalty, and information resources and provided feedback about their experiences. We reviewed the sample documents for completeness to determine whether they contained sufficient information to allow the beneficiaries to identify (1) their next steps in determining whether to enroll and what plan to choose and (2) important factors, such as penalty provisions, that could affect their coverage decisions. To identify those important factors associated with the Part D benefit, we reviewed relevant laws, regulations, and 1-800-MEDICARE scripts prepared for customer service representatives (CSR) to read to callers and obtained information from advocacy groups. To evaluate the accuracy of information, we reviewed the sample materials for compliance with laws, regulations, and CMS guidance. To determine the accuracy and completeness of information provided regarding the Part D benefit, we placed a total of 500 calls to the 1-800-MEDICARE help line. We posed one of five questions about Part D in each call, so that each question was asked 100 times. Each question was pretested before we finalized its wording. We randomly placed calls at different times of the day and different days of the week from January 17 to February 7, 2006. Our calling times were chosen to match the daily and hourly pattern of calls reported by 1-800-MEDICARE in October 2005. We informed CMS officials that we would be placing calls; however, we did not tell them the questions we would ask or the specific dates and times that we would be placing our calls. To select the five questions, we considered topics identified in the Medicare Web site’s frequently asked questions. In addition, we considered topics most frequently addressed by 1-800-MEDICARE CSRs based on help line reports. To evaluate the accuracy of CSRs’ responses to our five questions, we used three resources: (1) the prescription drug plan finder tool on the Medicare Web site, (2) 1-800-MEDICARE scripts, and (3) input obtained from CMS officials on the criteria we used for evaluating CSR responses. Table 2 lists the questions we asked and the criteria we used to evaluate the accuracy of responses. When placing our calls, we identified ourselves as a beneficiary’s relative, but did not provide CSRs with specific identifying information, such as a Medicare beneficiary number or date of birth. During our calls, CSRs were not aware that their responses would be included in a research study. We recorded the length of each call, including wait times, and the time it took before being connected to a CSR. We evaluated the accuracy and completeness of the responses by CSRs to the 500 calls by determining whether key information was provided. The results from our 500 calls are limited to those calls and are not generalizable to the universe of calls made to the help line. The questions we asked were limited to matters concerning the Part D benefit and do not encompass all of the questions callers might ask. We contracted with the Nielsen Norman Group (NN/g)—a firm with expertise in Web design—to assess the usability of the Part D information available on the Medicare Web site. This study consisted of three separate evaluations.
First, NN/g assessed the site’s compliance with established usability guidelines to determine a usability score reflecting the ease of finding necessary information and performing various tasks. Specifically, to determine the usability scores, NN/g evaluated various aspects of the Web site using industry-recognized “good” Web design practices, as indicated by the contractor, and the collective body of knowledge from NN/g internal reports and experts, or NN/g usability guidelines. Second, NN/g determined the degree of difficulty associated with 137 detailed aspects of Web site design for the Part D portion of the site. The 137 aspects fall into the following general categories: overall Web design (e.g., home page, navigation, search function, graphics, and overall organization); tools (e.g., plan finder); writing style (e.g., content, tone, legibility, and readability); accessibility (e.g., availability of a version of the Web site for the blind); and languages (e.g., availability of languages other than English). NN/g determined the difficulty level in using each of the 137 aspects. NN/g noted aspects that had good design and would not be expected to cause confusion. For those aspects with a design that would be expected to cause confusion, NN/g ranked the associated difficulty level as high, medium, or low. Third, NN/g performed a qualitative evaluation on January 20 and 23, 2006, to test the ability of five Medicare beneficiaries and two beneficiary advisers to perform specified Medicare-related tasks using the Web site and to obtain feedback about participants’ experiences. While the results are not statistically valid, these users provided important insights into the usability of the Medicare Web site. Participants were asked to “think out loud” as they worked through their tasks, while an NN/g facilitator observed their behavior and took notes. NN/g gave each task a score. At the end of their sessions, NN/g asked participants for input regarding their confidence in the answers they obtained from the Web site, and their overall satisfaction and frustration levels associated with using the site. Finally, we obtained the results of CMS’s March 2006 review of its Web site’s compliance with section 508 of the Rehabilitation Act of 1973, as amended. This law requires federal agencies to make the information on their Web sites accessible to people with disabilities. We also discussed the results of this review with agency officials and followed up with them to determine the status of CMS’s corrective actions. To determine the role of SHIPs in helping Medicare beneficiaries understand Part D, we interviewed CMS officials who monitor SHIPs’ activities. We also reviewed information that we obtained from CMS officials and other sources on the program, its funding, changes made in response to the introduction of Part D, and the impact of Part D on the demand for SHIP services. In addition, we interviewed SHIP officials in California, Florida, New York, Texas, and Pennsylvania—the five states with the largest Medicare populations—to obtain information on the experience of their SHIPs with Part D. We conducted our work from November 2005 through May 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Susan T. Anthony and Geraldine Redican-Bigott, Assistant Directors; Ramsey L. Asaly; Enchelle Bolden; Laura Brogan; Shaunessye D. Curry; Chir-Jen Huang; M. Peter Juang; Ba Lin; Michaela M.
Monaghan; Roseanne Price; Pauline Seretakis; Margaret J. Weber; and Craig H. Winslow made contributions to this report.
On January 1, 2006, Medicare began providing coverage for outpatient prescription drugs through its new Part D benefit. Beneficiaries who enroll in Part D may choose a drug plan from those offered by private plan sponsors under contract to the Centers for Medicare & Medicaid Services (CMS), which administers the Part D benefit. Beneficiaries have until May 15, 2006, to enroll in the Part D benefit and select a plan without the risk of penalties. GAO was asked to review the quality of CMS's communications on the Part D benefit. GAO examined 70 CMS publications to select 6 documents for review and contracted with the American Institutes for Research to evaluate the clarity of these texts; made 500 calls to the 1-800-MEDICARE help line; and contracted with the Nielsen Norman Group to evaluate the usability of the Medicare Web site. The information given in the six sample documents that GAO reviewed describing the Part D benefit was largely complete and accurate, although this information lacked clarity. The documents were unclear in two ways. First, although about 40 percent of seniors read at or below the fifth-grade level, the reading levels of these documents ranged from seventh grade to postcollege. Second, on average, the six documents did not comply with about half of 60 common guidelines for good communication. For example, the documents used too much technical jargon and often did not define difficult terms, such as formulary. Moreover, the 16 beneficiaries and advisers whom GAO tested reported frustration with the documents' lack of clarity and had difficulty completing the tasks assigned to them. Although the documents lacked clarity, they informed readers of enrollment steps and factors affecting coverage decisions and were consistent with laws, regulations, and agency guidance. Customer service representatives (CSR) responded to the 500 calls GAO placed to CMS's 1-800-MEDICARE help line accurately and completely about two-thirds of the time. Of the remainder, 18 percent of the calls received inaccurate responses, 8 percent received inappropriate responses given the question asked, and about 3 percent received incomplete responses. In addition, about 5 percent of GAO's calls were not answered, primarily because of disconnections. Accuracy and completeness rates of CSRs' responses varied significantly across the five questions GAO asked. For example, while CSRs provided accurate and complete responses to calls about beneficiaries' eligibility for extra help 90 percent of the time, the accuracy rate for calls concerning the drug plan that would cost the least for a specified beneficiary was 41 percent. For this question, the CSRs responded inappropriately for 35 percent of the calls by explaining that they could not identify the least costly plan without the beneficiary's personal information--even though CSRs had the information needed to answer the question. The time GAO callers waited to speak with CSRs also varied, ranging from no wait time to over 55 minutes. For 75 percent of the calls--374 of the 500--the wait was less than 5 minutes. The Part D benefit portion of the Medicare Web site can be difficult to use. GAO's test of the site's overall usability--the ease of finding needed information and performing various tasks--resulted in scores of 47 percent for seniors and 53 percent for younger adults, out of a possible 100 percent. While there is no widely accepted benchmark for usability, these scores indicate that using the site can be difficult.
For example, the prescription drug plan finder was complicated to use and some of its key functions, such as "continue" and "choose a drug plan," were often not visible on the page without scrolling down.
FFRDCs were first established during World War II to meet specialized or unique research and development needs that could not be readily satisfied by government personnel or private contractors. Additional and expanded requirements for specialized services led to increases not only in the size of the FFRDCs but also in their number, which peaked at 74 in 1969. Today, 8 agencies, including DOD, fund 39 FFRDCs that are operated by universities, nonprofit organizations, or private firms under long-term contracts. Federal policy allows agencies to award these contracts noncompetitively. The Office of Federal Procurement Policy within the Office of Management and Budget (OMB) establishes governmentwide policy on the use and management of FFRDCs. Within DOD, the Director of Defense Research and Engineering is responsible for developing overall policy for DOD’s 11 FFRDCs. The Director communicates DOD policy and detailed implementing guidance to FFRDC sponsors through a periodically updated management plan, and determines the funding level for each FFRDC based on the overall congressional ceiling on FFRDC funding and FFRDC requirements. Total funding for DOD’s FFRDCs was $1.25 billion in fiscal year 1995. DOD categorizes each of its FFRDCs as a systems engineering and integration center, a studies and analyses center, or a research and development laboratory. Appendix II provides information on each FFRDC, including its parent organization, primary sponsor, DOD funding, and staffing levels for fiscal year 1995. The military services and defense agencies sponsor individual FFRDCs and award and administer the 5-year contracts, typically negotiated noncompetitively, after reviewing the continued need for the FFRDC. Unlike a private contractor, an FFRDC accepts restrictions on its ability to manufacture products and compete for other government or commercial business. These restrictions are intended to (1) limit the potential for conflicts of interest when FFRDC staff have access to sensitive government or contractor data and (2) allow the center to form a special or strategic relationship with its DOD sponsor. Management fees are discretionary funds provided to FFRDCs in addition to reimbursement for incurred costs; these fees are similar to the profits private contractors earn. Two issues have remained unresolved for many years: what the management fee should be provided for and how FFRDCs should use it. As far back as 1969, we concluded that nonprofit organizations such as FFRDCs incur some necessary costs that may not be reimbursed under the procurement regulations, and we recommended that the Bureau of the Budget (now known as OMB) develop guidance that specified the costs contracting officers should provide fees to cover. In 1993, the Office of Federal Procurement Policy agreed that governmentwide guidance on management fees for nonprofit organizations was needed, but it has not yet issued detailed guidance. In the absence of such governmentwide guidance, recurring questions continue to be raised about how FFRDCs use their fees. In its 1994 report, for example, the DOD Inspector General concluded that FFRDCs used $43 million of the $46.9 million in fiscal year 1992 DOD fees for items that should not have been funded from fees. The bulk of this $43 million funded independent research projects that should have been charged to overhead, according to the report.
The remainder funded otherwise unallowable costs and future requirements, which the report concluded were not necessary for FFRDC operations. Similarly, as we recently reported, the Defense Contract Audit Agency (DCAA) reviewed fiscal year 1993 fee expenditures at the MITRE Corporation and concluded that just 11 percent of the expenditures reviewed were ordinary and necessary to the operation of the FFRDC. DCAA reported that MITRE used fees to pay for items such as lavish entertainment, personal expenses for company officers, and generous employee benefits. In our recent work at The Aerospace Corporation, we found that the corporation used about $11.5 million of its $15.5 million management fee for sponsored research. Aerospace used the remainder of its fee and other corporate resources for capital equipment purchases; real and leasehold property improvements; and other unreimbursed expenditures, such as contributions, personal use of company cars, conference meals, trustee expenses, and new business development expenses. DOD’s action plan recommended implementation of revised guidelines for management fee. Specifically, it recommended (1) moving allowable costs out of fee and reducing fee accordingly, and (2) establishing consistent policies on the ordinary and necessary costs to be funded through fee. If effectively implemented, these actions should help to resolve many of the long-standing concerns over FFRDC use of management fee. Moving FFRDC-sponsored research out of fee would result in a substantial reduction in the fee amount and should provide for more effective DOD oversight of FFRDC expenditures. This action would also subject all research to the Federal Acquisition Regulation cost principles applicable to cost-reimbursable items. Defining the ordinary and necessary expenses that may be covered by fee is a more challenging issue, which may explain why the issue has gone unresolved for so long. However, until DOD issues specific guidance regarding ordinary and necessary expenses, debate will likely continue on whether fee can be used for such things as personal expenses for company officers, entertainment, and new business development. Although DOD’s action plan identifies the need for clarifying guidance, our understanding is that such guidance has not been issued. As a robust private-sector professional services industry grew to meet the demand for technical services, it became apparent that industry had the capability to perform some tasks assigned to FFRDCs. As early as 1962, the Bell Report noted criticism that nonprofit systems engineering contractors had undertaken work traditionally done by private firms. A 1971 DOD report stated, “It is pointless to say that the [systems engineering FFRDCs’] function could not be provided by another instrumentality....” According to this report, private contractors could also do the same type of work as the studies and analyses FFRDCs. The report pointed to the flexibility of using the centers and their broad experience with sponsors’ problems as reasons for continuing their use. More recently, the DOD Inspector General concluded that FFRDC mission statements did not identify unique capabilities or expertise, resulting in FFRDCs being assigned work without adequate justification. In a 1988 report, we pointed out that governmentwide policy did not require that FFRDCs be limited to work that industry could not do; FFRDCs could also undertake tasks they could perform more effectively than industry.
FFRDCs are effective, we observed, partly because of their special relationship with their sponsoring agency. This special relationship embodies elements of access and privilege as well as constraints to limit their activities to those DOD deems appropriate. In 1995, the DSB and DOD’s Action Plan elaborated on and refined the concept of the FFRDC special relationship. According to DOD, FFRDCs perform tasks that require a special or strategic relationship to exist between the task sponsor and the organization performing the task. Table 1 shows DOD’s description of the characteristics of this special relationship. According to the DSB, this special relationship allows an FFRDC to perform research, development, and analytical tasks that are integral to the mission and operation of the DOD sponsor. The DSB and an internal DOD advisory group concluded that there is a continuing need for certain core work that requires the special relationship previously described. DOD concluded that giving such tasks to private contractors would raise numerous concerns, including questions about potential conflicts of interest. Accordingly, DOD has defined an FFRDC’s core work as tasks that (1) are consistent with the FFRDC’s purpose, mission, capabilities, and core competencies and (2) require the FFRDC’s special relationship with its sponsor. The DOD advisory group estimated that this core work represented about 6 percent of DOD’s research, development, and analytic effort. The DSB and the DOD advisory group also concluded that FFRDCs performed some noncore work that did not require a special relationship and that this work should be transitioned out of the FFRDCs and acquired competitively. On the basis of these conclusions, DOD directed each sponsor to review its FFRDC’s core competencies, identify and prioritize the FFRDC’s core work, and identify the noncore work that should be transitioned out of the FFRDC. The core competencies the DOD sponsors identified appear to differ little from the scope of work descriptions that were in place previously. In several cases, sponsors seem to have simply restated the functions listed in an FFRDC’s scope of work description. In other cases, the core competencies summarized the scope of work functions into more generic categories. In February 1996, the Under Secretary of Defense (Acquisition and Technology) reported that DOD sponsors had identified $43 million, or about 4 percent of FFRDC funding, in noncore work being performed at the FFRDCs. According to the Under Secretary, ongoing noncore work is currently being transferred out of the FFRDCs. Even though DOD states that it is important to ensure that tasks assigned to an FFRDC meet the core work criteria, we believe it will continue to be difficult to determine whether a task meets these criteria. FFRDC mission statements remain broad, and core competencies appear to differ little from the previous scope of work descriptions. As we stated in our 1988 report, the special relationship is the key to determining whether work is appropriate for an FFRDC. However, determining whether one or more of the characteristics of the special relationship is required for a task may be difficult, since the need for an element of the special relationship is normally relative rather than absolute. For example, we believe DOD would expect objectivity in any research effort, but it may be difficult to demonstrate that a particular task requires the special degree of objectivity an FFRDC is believed to provide.
Uncertainty about whether an FFRDC’s special relationship allows it to perform a task more effectively than other organizations also accompanies decisions to assign work to an FFRDC. In our 1988 report, we stated that full and open competition between all relevant organizations (FFRDCs and non-FFRDCs) could provide DOD assurance that it has selected the most effective source for the work. However, exposing FFRDCs to marketplace competition would fundamentally alter the character of the special relationship. While DOD has initiated a department-wide effort to define more clearly the work its FFRDCs will perform, the criteria DOD has developed remain somewhat general. Applying these criteria requires making judgments about the relative effectiveness of various sources for work in the absence of the full information on capabilities that open competition would provide. It is doubtful that DOD’s criteria will be satisfactory to those critics who are seeking a simple and unambiguous definition of work appropriate for FFRDCs. The question of whether accepting work from organizations other than its sponsor impairs an FFRDC’s ability to provide objective advice has long been discussed. As early as 1962, the Bell Report raised this question but noted that no clear consensus had developed as to whether concerns about diversification were well founded. The report recognized that studies and analyses FFRDCs could effectively serve multiple clients but concluded that systems engineering organizations were primarily of value when they served a single client. During the early 1970s, DOD encouraged its FFRDCs to diversify into nonsponsor work. According to a 1976 DOD report, FFRDCs that did not diversify suffered efficiency and morale problems as their organizations shrank in the face of declining DOD research and development budgets. Nonetheless, this report recommended that the systems engineering FFRDCs limit themselves to DOD work and adjust their work forces in line with changes in the DOD budget. Regarding the MITRE Corporation, the report recommended that MITRE create a separate affiliate organization to carry out its non-DOD work. In 1994, Congress raised the issue that non-FFRDC affiliate organizations resulted in “...an ambiguous legal, regulatory, organizational, and financial situation,” and directed that DOD prepare a report on non-FFRDC activities. More recently, however, the DSB concluded that FFRDCs and their parent companies should be allowed to accept work outside the core domain only when doing so was in the best interests of the country; the DSB did not propose criteria for determining when accepting nonsponsor work was in the country’s best interests. Acceptance of nonsponsor work is now common at DOD’s FFRDCs. Except for the Institute for Defense Analyses, each parent organization performs some non-DOD work either within the FFRDC or through an affiliate organization created to pursue non-FFRDC work. Currently, six of the eight parent organizations that operate FFRDCs also operate one or more non-FFRDC affiliates. Some of these affiliates are quite small: the Center for Naval Analyses Corporation’s Institute for Public Research accounts for about 3 percent of the center’s total effort. Other affiliates are more significant: the MITRE Corporation’s two non-FFRDC affiliates account for about 11 percent of MITRE’s total effort, and the RAND Corporation’s five non-FFRDC divisions account for about 32 percent of its total effort.
The Massachusetts Institute of Technology and Carnegie-Mellon University—parent organizations of the MIT Lincoln Laboratory and the Software Engineering Institute, respectively—each pursue a diverse range of non-FFRDC activities. DOD has recently become more active in seeking to oversee work its FFRDCs perform through non-FFRDC divisions. DOD sponsors have historically had the opportunity to oversee nonsponsor work performed within the FFRDC because the work is carried out under the FFRDC contracts that sponsors administer. This contract oversight mechanism is not available for non-FFRDC divisions. During 1995, for example, the Air Force expressed great reluctance to support The Aerospace Corporation’s proposal to establish a non-FFRDC affiliate, indicating that the Air Force was concerned that it could not avoid the perception of a conflict of interest. Similarly, the MITRE Corporation sought permission to create a separate corporate division to perform non-FFRDC work. Recognizing that this arrangement could create a potential for conflicts of interest, DOD required MITRE to spin off a separate corporation to carry out its non-FFRDC activities. DOD required this new corporation to have a separate board of trustees and its own corporate officers. Further, DOD required that no work be subcontracted between the two entities to preclude the sharing of employees involved in DOD work—and knowledge developed in the course of DOD work—with the new corporation. DOD’s recent update of its action plan stated that a new policy requires the use of stringent criteria for the acceptance of work outside the core by the FFRDC’s parent corporation. According to DOD, this new policy will ensure focus on FFRDC operations by the parent and eliminate concerns regarding “unfair advantage” in acquiring such work. Currently, DOD plans to revise its FFRDC management plan, which would provide for greater oversight of non-FFRDC affiliates at all centers. These changes would require FFRDCs to agree to conduct non-FFRDC activities only if the activities (1) are subject to sponsor review and approval, (2) are in the national interest, and (3) do not give rise to real or potential conflicts of interest. Even though it endorsed the need for organizations such as FFRDCs, a DSB study recently concluded that the public mistrusted DOD’s use and oversight of FFRDCs. A principal concern, according to the study, is that DOD assigns work to FFRDCs that can be performed as effectively by private industry and acquired using competitive procurement procedures. Further, DSB found that the lack of opportunities for public review and comment on DOD’s process for managing and assigning work to FFRDCs—available in the competitive contracting process—invites mistrust. To address public skepticism about DOD’s use and management of FFRDCs, DSB recommended the creation of an independent advisory committee of highly respected personnel from outside DOD. The committee would review the continuing need for FFRDCs, FFRDC missions, and DOD’s management and oversight mechanisms for FFRDCs. DOD’s action plan also recommended the establishment of an independent advisory committee to review and advise on FFRDC management. In late 1995, an independent advisory committee was established. The six committee members, who are either DSB members or consultants, represent both industry and government.
The committee is responsible for reviewing and advising DOD on the management of its FFRDCs by providing guidelines on the appropriate scope of work, customers, organizational structure, and size of the FFRDCs; overseeing compliance with DOD’s FFRDC Management Plan; reviewing sponsors’ management of FFRDCs; reviewing the level and appropriateness of non-DOD and nonsponsor work performed by the FFRDCs; overseeing the comprehensive review process; and performing selected FFRDC program reviews. In January 1996, the advisory committee began a series of panel discussions at several FFRDCs, which were attended by DOD sponsor personnel and FFRDC officials. Representatives of our office attended the initial fact-finding meetings and observed that the panel members appeared to approach their task with the utmost seriousness and challenged conventional wisdom by asking tough questions of both DOD and FFRDC officials. The advisory group plans to produce its first report in March 1996. Mr. Chairman, this completes my statement for the record. Defense Research and Development: Fiscal Year 1993 Trustee and Advisor Costs at Federally Funded Centers (GAO/NSIAD-96-27, Dec. 26, 1995). Federal Research: Information on Fees for Selected Federally Funded Research and Development Centers (GAO/RCED-96-31FS, Dec. 8, 1995). Federally Funded R&D Centers: Use of Fee by the MITRE Corporation (GAO/NSIAD-96-26, Nov. 27, 1995). Federally Funded R&D Centers: Use of Contract Fee by The Aerospace Corporation (GAO/NSIAD-95-174, Sept. 28, 1995). Defense Research and Development: Affiliations of Fiscal Year 1993 Trustees for Federally Funded Centers (GAO/NSIAD-95-135, July 26, 1995). Department of Defense Federally Funded Research and Development Centers, Office of Technology Assessment (OTA-BP-ISS-157, June 1995). Compensation to Presidents, Senior Executives, and Technical Staff at Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-182, May 1, 1995). Comprehensive Review of the Department of Defense’s Fee-Granting Process for Federally Funded Research and Development Centers, Director for Defense Research and Engineering, May 1, 1995. The Role of Federally Funded Research and Development Centers in the Mission of the Department of Defense, Defense Science Board Task Force, April 25, 1995. Addendum to Final Audit Report on Contracting Practices for the Use and Operations of DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-048A, Apr. 19, 1995). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (95-489 SPR, Apr. 13, 1995). Report on Department of Defense Federally Funded Research and Development Centers and Affiliated Organizations, Director for Defense Research and Engineering, April 3, 1995. Federally Funded R&D Centers: Executive Compensation at The Aerospace Corporation (GAO/NSIAD-95-75, Feb. 7, 1995). Contracting Practices for the Use and Operations of DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-048, Dec. 2, 1994). Sole Source Justifications for DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (94-012, Nov. 4, 1993). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (93-549 SPR, June 3, 1993).
Inadequate Federal Oversight of Federally Funded Research and Development Centers, Subcommittee on Oversight of Government Operations, Senate Governmental Affairs Committee (102-98, July 1992). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (91-378 SPR, Apr. 29, 1991). Competition: Issues on Establishing and Using Federally Funded Research and Development Centers (GAO/NSIAD-88-22, Mar. 7, 1988).
GAO discussed the Department of Defense's (DOD) efforts to improve the management of its federally funded research and development centers (FFRDC), focusing on: (1) the guidelines to ensure that management fees paid to FFRDCs are justified; (2) the core work appropriate for FFRDCs; (3) the criteria for the acceptance of work outside of the core by FFRDC parent corporations; and (4) the establishment of an independent advisory committee to review DOD management, use, and oversight of FFRDCs. GAO noted that: (1) the DOD action plan recommended that management fees be revised to move allowable costs out of fee, reduce fees, and establish policies on ordinary and necessary costs; (2) it is difficult to determine whether tasks assigned to FFRDCs meet core work criteria because the mission statements are broad and the core competencies differ little from previous work descriptions; (3) six of the eight parent organizations that operate FFRDCs also operate one or more non-FFRDC affiliates; and (4) DOD established an independent advisory committee to review FFRDC work, customers, and organizational structure and size; oversee FFRDC compliance with the DOD FFRDC management plan; review sponsors' management of FFRDCs; determine the level and appropriateness of non-DOD and non-sponsor work performed by FFRDCs; monitor the comprehensive review process; and perform selected FFRDC program reviews.
The Coast Guard has a wide variety of missions, related both to homeland security and to its other responsibilities. Table 1 shows a breakout of these missions—both security and non-security related—as delineated under the Homeland Security Act of 2002. The Coast Guard has overall federal responsibility for many aspects of port security and is involved in a wide variety of activities. Using its cutters, boats, and aircraft, the Coast Guard conducts security patrols in and around U.S. harbors, escorts large passenger vessels in ports, and provides protection in U.S. waterways for DOD mobilization efforts. It also gathers and disseminates intelligence information, including gathering information on all large commercial vessels calling at U.S. ports; the agency monitors the movement of many of these vessels in U.S. territorial waters. It conducts port vulnerability assessments; helps state and local port authorities to develop security plans for protecting port infrastructure; and actively participates with state, local, and federal port stakeholders in a variety of efforts to protect port infrastructure and ensure a smooth flow of commerce. In international maritime matters, the Coast Guard is also active in working through the International Maritime Organization to improve maritime security worldwide. It has spearheaded proposals before this organization to implement electronic identification systems and ship and facility security plans and to undertake port security assessments. The Coast Guard’s homeland security role is still evolving; however, its resource commitments to this area are substantial and will likely grow. For example, under the recently enacted Maritime Transportation Security Act, the Coast Guard will likely perform numerous security tasks, such as approving security plans for vessels and waterside facilities, serving on area maritime security advisory committees, assessing antiterrorism measures at foreign ports, and maintaining harbor patrols. The Coast Guard has not yet estimated its costs for these activities; however, the President’s fiscal year 2004 budget request includes over $200 million for new homeland security initiatives, including new patrol boats, additional port security teams, and increased intelligence capabilities. To provide for the orderly transition of the Coast Guard to DHS on March 1, 2003, the Coast Guard established a transition team last year that identified and began addressing issues that needed attention. Coast Guard officials told us that they patterned their transition process after key practices that we identified as important to successful mergers, acquisitions, and transformations. The agency’s transition team consists of top management, led by the Chief of Staff, and enlists the assistance of staff with relevant expertise throughout the agency through matrixed assignments. According to Coast Guard officials, the scope of transition issues spans a wide variety of topics, including administrative and support functions, strategy, outreach and communication issues, legal considerations, and information management. The transition team focuses on both DHS-related issues and on issues related to maintaining an enduring relationship with the Department of Transportation (DOT). In addition to its own transition team, senior Coast Guard officials participated with OMB in developing the DHS reorganization plan late last year.
Also, key Coast Guard officials participate on joint DHS and DOT transition teams that have been established to deal with transition issues in each department. We have testified that, despite the complexity and enormity of the implementation and transformation of DHS, there is likely to be considerable benefit over time from restructuring homeland security functions. These benefits include reducing risk and improving the economy, efficiency, and effectiveness of these consolidated agencies and programs. In the short term, however, there are numerous complicated challenges that will need to be resolved, making implementation a process that will take considerable time and effort. Reorganizations frequently encounter start-up problems and unanticipated consequences, and it is not uncommon for management challenges to remain for some time. Our past work on government restructuring and reorganization has identified a number of factors that are critical to success in these efforts. Coast Guard officials now involved in transition efforts told us that they are aware of these factors and are addressing many of them as they prepare to move to DHS. Our testimony today focuses on six of these factors—strategic planning, communication and partnership-building, performance management, human capital strategy, information management and technology, and acquisition management—and, based on past work, some of the key challenges the Coast Guard faces in addressing and resolving them. The strategic planning process involves assessing internal and external environments, working with stakeholders, and aligning activities, processes, and resources in support of mission-related outcomes. Strategic planning is important within the Coast Guard, which now faces a challenge in merging past planning efforts with the new realities of homeland security. The events of September 11th produced a dramatic shift in resources used for certain missions. Cutters and patrol boats that were normally used offshore were quickly shifted to coastal and harbor security patrols. While some resources have been returned to their more traditional activities, others have not. For example, Coast Guard patrol boats in the nation’s Northeast were still conducting security patrols many months later, reducing the number of fisheries patrols by 40 to 50 percent from previous years. Even now, the Coast Guard continues to face new security-related demands on its resources. Most notably, as part of the current military build-up in the Middle East, the Coast Guard has sent nine cutters to assist DOD in the event of war with Iraq. While its greatly expanded homeland security role has already been merged into its day-to-day operations, the Coast Guard faces the need to develop a strategic plan that reflects this new reality over the long term. Where homeland security once played a relatively small part in the Coast Guard’s missions, a new plan must now delineate the goals, objectives, strategies, resource requirements, and implementation timetables for achieving this vastly expanded role while still balancing resources among its various other missions. The agency is now developing a strategic deployment plan for its homeland security mission and plans to finish it sometime this year. However, the Coast Guard has not begun developing a long-term strategy that outlines how its resources—cutters, boats, aircraft, and personnel—should be distributed across all of its various missions, as well as a time frame for achieving the desired balance among missions.
We recommended in a recent report to this Subcommittee that the Coast Guard develop such a strategy to provide a focal point for all planning efforts and serve as a basis for spending and other decisions. The Coast Guard has taken this recommendation under advisement but has not yet acted on it. There is a growing realization that any meaningful results that agencies hope to achieve are likely to be accomplished through matrixed relationships or networks of governmental and nongovernmental organizations working together. These relationships exist on at least three levels. First, they exist within and support the various internal units of an agency. Second, they include the relationships among the components of a parent department, such as DHS. Third, they are also developed externally, to include relationships with other federal, state, and local agencies, as well as private entities and domestic and international organizations. Our work has shown that agencies encounter a range of barriers when they attempt coordination across organizational boundaries. Such barriers include agencies’ concerns about protecting jurisdictions over missions and control of resources, differences in procedures and processes, data systems that lack interoperability, and organizational cultures that may make agencies reluctant to share sensitive information. Specifically, our work has shown that the Coast Guard faces formidable challenges with respect to establishing effective communication links and building partnerships both within DHS and with external organizations. While most of the 22 agencies moving to DHS will report to under secretaries for the department’s various directorates, the Coast Guard will remain a separate entity reporting directly to the Secretary of DHS. According to Coast Guard officials, the Coast Guard has important functions that will require coordination and communication with all of these directorates, particularly the Border and Transportation Security Directorate. For example, the Coast Guard plays a vital role with the Customs Service, the Immigration and Naturalization Service, the Transportation Security Administration, and other agencies that are organized in the Border and Transportation Security Directorate. Because the Coast Guard’s homeland security activities require interface with these and a diverse set of other agencies organized within several DHS directorates, communication, coordination, and collaboration with these agencies are paramount to achieving department-wide results. Effective communication and coordination with agencies outside the department are also critical to achieving homeland security objectives, and the Coast Guard must maintain numerous relationships with other public and private sector organizations outside DHS. For example, according to Coast Guard officials, the Coast Guard will remain an important participant in DOT’s strategic planning process, since the Coast Guard is a key agency in helping to maintain the maritime transportation system. Also, the Coast Guard maintains navigation systems used by DOT agencies such as the Federal Aviation Administration. In the homeland security area, coordination efforts will extend well beyond our borders to include international agencies of various kinds. For example, the Coast Guard, through its former parent agency, DOT, has been spearheading U.S. involvement in the International Maritime Organization.
This is the organization that, following the September 11th attacks, began determining new international regulations needed to enhance ship and port security. Also, our work assessing efforts to enhance our nation’s port security has underscored the formidable challenges that exist in forging partnerships and coordination among the myriad public and private sector and international stakeholders. A performance management system that promotes the alignment of institutional, unit, and individual accountability to achieve results is an essential component for organizational success. Our work has shown that performance management is a key component of success for high-performing, results-oriented organizations. High-performing organizations have recognized that a key element of a fully successful performance management system is aligning individual employees’ performance expectations with agency goals so that employees can see how their responsibilities contribute to organizational goals. These organizations (1) define clear missions and desired outcomes, (2) measure performance as a way of gauging progress toward these outcomes, and (3) use performance information as a basis for decision-making. In stressing these actions, a good performance management system fosters accountability. The changed landscape of national security work presents a challenge for the Coast Guard’s own performance management system. The Coast Guard has applied the principles of performance management for most of its missions, but not yet for homeland security. However, the Coast Guard has work under way to define its homeland security mission and the desired outcomes stemming from that mission. The Coast Guard expects to have such measures this year and to begin collecting data to gauge progress in achieving them. Progress in this area will be key in the Coast Guard’s ability to make sound decisions regarding its strategy for accomplishing its security mission as well as its various other missions. People are any organization’s most important asset. One of the major challenges agencies face is creating a common organizational culture to support a unified mission, a common set of core values, and organization-wide strategic goals. The Coast Guard, like the 21 other agencies moving to DHS, will have to adjust its own culture to work effectively within the department. The Coast Guard also faces other important new human capital challenges. For example, to deal with its expanded homeland security role and meet all of its other responsibilities, the Coast Guard expects to add thousands of new positions over the next 3 years. The Coast Guard acknowledges that such a large increase could well strain the agency’s ability to hire, develop, and retain talent. Coast Guard officials acknowledge that providing timely training for the 2,200 new personnel it plans to bring on by the end of fiscal year 2003 and the additional 1,976 staff it plans to add by the end of fiscal year 2004 will likely strain its training capabilities. Compounding this challenge, over the next decade the Coast Guard is modernizing its entire fleet of cutters and aircraft with more modern, high-technology assets that require a higher skill level to operate and maintain. One factor that often contributes to an organization’s ineffectiveness or failure is the lack of accurate, complete, and timely information. Sometimes this lack of information contributes to the failure of a system or to cumbersome systems that cannot be effectively coordinated.
In other instances, however, the problem stems from a lack of institutional willingness to share information across organizational boundaries. Concerns about information management have been well chronicled in the discussions about establishing DHS. Programs and agencies will be brought together from throughout the government, each bringing its own systems. Integrating these diverse systems will be a substantial undertaking. The Coast Guard is among several agencies moving to DHS that will bring existing information technology problems with them. For example, 14 years after legislation was passed requiring the Coast Guard to develop a vessel identification system to share vessel information, no such system exists, and future plans for developing the system are uncertain. Given today’s heightened state of homeland security, such a system has even more potential usefulness. Coast Guard officials stated that law enforcement officials could use a vessel identification system to review all vessels that have been lost or stolen and verify ownership and law enforcement history. Sound acquisition management is central to accomplishing the department’s mission. DHS is expected to spend billions annually to acquire a broad range of products, technologies, and services. Getting the most from this investment will depend on how well DHS manages its acquisition activities. Our reports have shown that some of the government’s largest procurement operations need improvement. The Coast Guard has major acquisitions that pose significant challenges. The agency is involved in two of the most costly procurement programs in its history—the $17 billion Integrated Deepwater Project to modernize its entire fleet of cutters and aircraft, and the $500 million national response and distress system, called Rescue 21, to increase mariner safety. We have been reviewing the planning effort for the Deepwater Project for a number of years, and the agency’s management during the planning phase was among the best of the federal agencies we have evaluated, providing a solid foundation for the project. While we believe the Coast Guard is in a good position to manage this acquisition effectively, the current phase of the project represents considerably tougher management challenges. The major challenges are: Controlling costs. Under the project’s contracting approach, the responsibility for the project’s success lies with a single systems integrator and its contractors for a period of 20 years or more. This approach starts the Coast Guard on a course potentially expensive to alter once funding has been committed and contracts have been signed. Moreover, this approach has never been used on a procurement of this size or complexity, and, as a result, there are no models in the federal government to guide the Coast Guard in developing its acquisition strategy. In response to the concerns we and others have raised about this approach, the Coast Guard developed cost-related processes and policies, including establishing prices for deliverables, negotiating change order terms, and developing incentives. Stable, sustained funding. The project’s unique contracting approach is based on having a steady, predictable funding stream of $500 million in 1998 dollars ($544.4 million in 2003 dollars) over the next 2 to 3 decades. Significant reductions in funding levels from planned amounts could result in reduced operations, increased costs, and/or schedule delays, according to the Coast Guard.
Already the funding stream is not materializing as the Coast Guard planned. The fiscal year 2002 appropriation for the project was about $18 million below the planned level. The fiscal year 2003 transportation appropriations have not yet been signed into law; however, the Senate appropriations committee has proposed $480 million for the Deepwater Project, and the House appropriations committee proposed $500 million. Contractor oversight. Because the contracting approach is unique and untried, the challenges in managing and overseeing the project will become more difficult. To address these challenges, the Coast Guard’s plans require the systems integrator to implement many management processes and procedures according to best practices. While these practices are not yet fully in place, in May 2002, the Coast Guard released its Phase 2 Program Management Plan, which establishes processes to successfully manage, administer, monitor, evaluate, and report contract performance. Unproven technology. Our reviews of other acquisitions have shown that reliance on unproven technology is a frequent contributor to escalated costs, schedule delays, and compromised performance standards. While the Coast Guard has successfully identified technologies that are sufficiently mature, commercially available, and proven in similar applications for use in the first 7 years of the project, it has no structured process to assess and monitor the potential risk of technologies proposed for use in later years. Specifically, the Coast Guard has lacked uniform and systematic criteria for judging the level of a technology’s readiness, maturity, and risk, although such criteria are currently available. However, in response to our 2001 recommendation, the Coast Guard is incorporating a technology readiness assessment in the project’s risk management process. Technology readiness level assessments are to be performed for technologies identified in the design and proposal preparation and procurement stages of the project. For these and other reasons, our most recent series of Performance and Accountability Reports continues to list the Deepwater Project as a project meriting close management attention. We will continue to assess the department’s actions in these areas. The Coast Guard’s move to DHS may complicate these challenges further. For example, central to the acquisition strategy for the Deepwater Project is a clear definition of goals, needs, and performance capabilities, so that a contractor can design a system and a series of acquisitions that can be carried out over 2 to 3 decades, while meeting the Coast Guard’s needs throughout this time. These system goals and needs were all developed prior to September 11th. Whether the Coast Guard’s evolving homeland security mission will affect these requirements remains to be seen. Properly aligning this program within the overall capital needs of DHS is critical to ensuring the success of the Deepwater Project. Also, the Homeland Security Act of 2002 requires the Secretary of DHS to submit a report to the Congress on the feasibility of accelerating the rate of procurement of the Deepwater Project. If the project is accelerated, even greater care would need to be exercised in managing a project that already carries numerous risks. In conclusion, these challenges are daunting but not insurmountable.
The Coast Guard continues to do an admirable job of adapting to its new homeland security role through the hard work and dedication of its people, and it has the management capability to address the implementation issues discussed here as well. However, reorganizations frequently encounter start-up problems and unanticipated consequences, and even in the best of circumstances, implementation is a lengthy process that requires a keen focus, the application of sound management principles, and continuous reexamination of challenges and issues associated with achieving desired outcomes. As the Coast Guard addresses these and other challenges in the future, we will continue to monitor its efforts as part of our ongoing work on homeland security issues, and we will be prepared to report to you on this work as you deem appropriate. Madam Chair, this concludes my testimony today. I would be pleased to respond to any questions that you or members of the Subcommittee may have at this time. For information about this testimony, please contact JayEtta Z. Hecker, Director, Physical Infrastructure, at (202) 512-2834, or [email protected]. Individuals making key contributions to this testimony include Christopher Jones, Sharon Silas, Stan Stenersen, and Randall Williamson. Major Management Challenges and Program Risks: Department of Transportation. GAO-03-108. Washington, D.C.: January 30, 2003. Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 30, 2003. Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002. Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. New York, NY: November 18, 2002. Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002. Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions. GAO-03-155. Washington, D.C.: November 12, 2002. National Preparedness: Technology and Information Sharing Challenges. GAO-02-1048R. Washington, D.C.: August 30, 2002. Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1011T. Washington, D.C.: August 20, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002. Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002. Homeland Security: Intergovernmental Coordination and Partnerships Will Be Critical to Success. GAO-02-899T. Washington, D.C.: July 1, 2002.
The Coast Guard is one of 22 agencies being placed in the new Department of Homeland Security. With its key roles in the nation's ports, waterways, and coastlines, the Coast Guard is an important part of enhanced homeland security efforts. But it also has non-security missions, such as search and rescue, fisheries and environmental protection, and boating safety. GAO has conducted a number of reviews of the Coast Guard's missions and was asked to testify about the Coast Guard's implementation challenges in moving to this newly created Department. The Coast Guard faces major challenges in effectively implementing its operations within the Department of Homeland Security. GAO has identified critical success factors for reorganizing and restructuring agencies, and its recent work in reviewing the Coast Guard has focused on challenges related to six of these factors--strategic planning, communications and partnership-building, performance management, human capital strategy, information management and technology, and acquisition management. The Coast Guard faces challenges in all of these areas. The difficulty of meeting these challenges is compounded because the Coast Guard is not just moving to a new parent agency: it is also substantially reinventing itself because of its new security role. Fundamentally, the agency faces a tension in balancing its many missions. It must still do the work it has been doing for years in such areas as fisheries management and search and rescue, but its resources are now deployed as well in homeland security and even in the military buildup in the Middle East. The Coast Guard's expanded role in homeland security, along with its relocation to a new agency, has changed many of its working parameters, and its adjustment to this role remains a work in progress. Much work remains. Some of it is strategic in nature, such as the need to define new missions and redistribute resources to meet the wide range of missions. Other tasks include accommodating a sudden surge of new positions and ensuring that the agency's most ambitious acquisition project--the Deepwater Project--remains viable.
The U.S. railroad industry consists mostly of freight railroads but also serves passengers. Freight railroads are divided into classes based on revenue. Class I freight railroads earn the most revenue and generally provide long-haul freight service. Freight railroads operate over approximately 160,000 miles of track and own most of the track in the United States; a notable exception is the Northeast Corridor, between Washington, D.C., and Boston, Massachusetts, which Amtrak predominantly owns. Amtrak provides intercity passenger rail service in 46 states and the District of Columbia and operates on 21,000 miles of track. Commuter railroads serve passengers traveling within large metropolitan areas, and most operate over track infrastructure owned by Amtrak or freight railroads for at least some portion of their operations. Specifically, nine commuter railroads operate over Amtrak-owned infrastructure, and 16 commuter railroads operate over infrastructure owned by freight railroads. U.S. freight and passenger trains often share track, dispatchers, and signals that control train movement. Some railroads also use additional technologies to improve efficiency and achieve business benefits. Currently, dispatchers in centralized offices issue train movement authorities that allow trains to enter specific track segments, or blocks. These authorities are communicated to train operators through signals alongside the track or, in non-signaled territory, through track warrants generally issued by verbal radio communication (see fig. 1). The additional technologies railroads use to maximize operational efficiencies include:

- Computer-assisted dispatching, so dispatchers can, among other things, optimally synchronize schedules, allowing trains on single track to "meet and pass" one another safely and efficiently, thereby minimizing delays and improving on-time performance.
- Energy management systems, which analyze train location and track grade and curvature information to calculate the train's most fuel-efficient speed throughout the trip.

These technologies can lead to business benefits for the railroad as well as benefits for society at large. As we have reported in the past, diversion of freight traffic from highways to rail potentially increases highway safety and reduces highway congestion and energy consumption. Although train accidents have generally been on the decline in recent years, human factors--such as train operators missing a red signal or exceeding allowable speeds, or train crews leaving a switch in the wrong position--can lead to significant damage and loss of life. Overall, rail safety--measured by the train accident rate per million train miles--has improved markedly since 1980. According to FRA data, 2012 was the safest year in railroad history. Even with the significant reduction in accident rates, on average almost 300 people were reported injured and about 10 people were reported killed in train accidents each year from 2003 through 2012.
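As a quick illustration of the safety measure used above, the accident rate normalizes raw accident counts by traffic volume, so years with different levels of rail activity can be compared. The sketch below uses hypothetical counts purely for illustration; FRA publishes the actual figures.

```python
# Train accident rate per million train-miles: normalizes accident counts
# by traffic volume so years with different activity levels are comparable.
# The counts below are hypothetical; FRA publishes the actual data.

def accident_rate(accidents: int, train_miles: float) -> float:
    """Accidents per million train-miles."""
    return accidents / (train_miles / 1_000_000)

print(accident_rate(1_700, 700_000_000))  # ~2.43 accidents per million train-miles
```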
Amtrak worked with suppliers to develop a form of PTC--known as the Advanced Civil Speed Enforcement System (ACSES)--and deployed this system on the Northeast Corridor. In the wake of the Chatsworth rail accident in September 2008 and other high-profile rail accidents, RSIA was enacted. Among other things, RSIA required railroads to install PTC by December 31, 2015, on main lines used to transport intercity rail passengers, commuters, or any amount of toxic-by-inhalation materials. RSIA requires railroads to install PTC systems, which are designed to prevent train-to-train collisions and derailments caused by exceeding safe speeds. PTC must also be designed to protect rail workers by preventing trains from entering work zones, as well as to prevent the movement of trains through switches left in the wrong position. PTC's communications-based system links various components, namely locomotive computers, wayside units along the side of the track, and dispatch systems in centralized office locations (see fig. 2). Through these components, PTC is able to communicate a train's location, speed restrictions, and movement authorities, and can slow or stop a train that is not being operated safely. For example, a PTC system could have prevented the 2008 Chatsworth accident by first alerting the operator that the train was approaching a red signal and then stopping the train before it passed the red signal. However, it should be noted that there are types of accidents, such as highway-railroad crossing accidents and trespasser deaths, that PTC technology is not designed to prevent. According to FRA, highway-railroad crossing and trespasser deaths account for 95 percent of all rail-related fatalities. RSIA does not require railroads to implement the same PTC system; however, the various PTC systems must meet the PTC system functionality requirements.
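To make the enforcement concept concrete, below is a minimal sketch of the kind of check an onboard PTC computer performs: compare the train's worst-case stopping distance against the end of its movement authority, alert the operator first, and apply a penalty brake if the operator does not act. The braking rate, warning margin, and constant-deceleration physics here are illustrative assumptions, not the actual logic of any deployed PTC system.

```python
# Minimal sketch of PTC-style enforcement against a movement authority limit.
# Hypothetical parameters and simplified physics (constant deceleration).

BRAKE_DECEL_MPS2 = 0.4    # assumed guaranteed braking rate, m/s^2
WARNING_MARGIN_M = 500.0  # assumed operator-alert buffer, meters

def braking_distance(speed_mps: float) -> float:
    """Distance needed to stop from the current speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * BRAKE_DECEL_MPS2)

def enforce(speed_mps: float, dist_to_limit_m: float) -> str:
    """Return the enforcement action for the current train state."""
    stop_dist = braking_distance(speed_mps)
    if stop_dist >= dist_to_limit_m:
        return "PENALTY_BRAKE"   # the system stops the train itself
    if stop_dist + WARNING_MARGIN_M >= dist_to_limit_m:
        return "ALERT_OPERATOR"  # warn before enforcement becomes necessary
    return "NORMAL"

# A train at 30 m/s (about 67 mph) approaching a red signal 1,200 m ahead
# needs roughly 1,125 m to stop, so the operator is alerted:
print(enforce(30.0, 1200.0))  # -> ALERT_OPERATOR
```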
After FRA finalized the 2010 rule, the Association of American Railroads (AAR) challenged the two qualifying tests in a lawsuit, and FRA and AAR entered into a settlement agreement in which FRA agreed to propose elimination of the tests. The two qualifying tests were eliminated in the 2012 final rule; as a result, railroads do not have to implement PTC on rail segments that will not transport toxic-by-inhalation materials or passengers as of December 31, 2015. The FRA rulemaking that is currently under way addresses how railroads will handle en-route failures of PTC equipment, among other things. In accordance with Executive Order 12866, FRA prepared economic analyses--also known as regulatory impact analyses--to assess the benefits and costs of PTC before promulgating regulations. Specifically, FRA issued two regulatory impact analyses evaluating final rules--one dated December 2009 evaluating the 2010 final rule and one dated January 2012 evaluating the 2012 final rule. Executive orders and OMB guidance direct agencies to assess the benefits and costs of regulatory alternatives. Agencies should generally select the regulatory approach that maximizes net benefits to society, unless a statute requires otherwise. OMB developed guidelines to encourage good regulatory impact analysis and to standardize the way that benefits and costs of federal regulations are measured and reported. OMB guidelines generally direct agencies, in analyzing the impacts of rules, to, among other things:

- measure the potential social benefits and costs of regulatory alternatives incremental to a "baseline" (i.e., the conditions that would exist in the absence of the proposed regulation);
- analyze a range of alternatives;
- identify and quantitatively analyze key uncertainties associated with the estimates of benefits and costs; and
- provide documentation that the analysis is based on the best reasonably obtainable scientific, technical, and economic information available.

OMB guidelines further state that a good regulatory analysis includes identifying the regulatory alternative with the largest net benefits to society. They also state that such information is useful for decision makers and the public, even when economic efficiency is not the only or the overriding public policy objective. As part of overseeing railroads' progress with PTC implementation, FRA is also responsible for reviewing railroads' PTC-related plans. Railroads must submit, and FRA must review and approve, three plans: a PTC development plan, a PTC implementation plan, and a PTC safety plan. The PTC development plan describes, among other things, the PTC system a railroad intends to implement to satisfy the PTC regulatory requirements. According to its August 2012 report, FRA's approval of the development plans took nearly 18 months to complete. The PTC implementation plan describes a railroad's plan for installation of its planned PTC system. RSIA required railroads to submit these plans within 18 months (by April 16, 2010) and FRA to review and approve or disapprove them within 90 days. The PTC safety plan includes a railroad's plans for testing the system, as well as information about safety hazards and risks the system will address, among other things. By approving a safety plan, FRA certifies a railroad's PTC system, a precondition for operating the PTC system in revenue service.
Although FRA set no specific deadline for railroads to submit the safety plans, according to FRA, railroads must submit their safety plans with sufficient time for approval before the December 31, 2015, PTC implementation deadline. In its August 2012 report, FRA reported that it would need about 6 to 9 months to review each safety plan. Although there are two primary types of PTC systems--overlay and standalone--that functionally meet the PTC requirements in RSIA, almost all railroads required to install PTC are installing overlay systems. Railroad representatives told us they chose to install PTC as an overlay system because doing so made it more feasible to meet the PTC implementation deadline than installing a standalone system. An overlay system allows railroads to install PTC components over existing rail infrastructure and to operate trains in accordance with the existing signals and operations in the event of a PTC system failure. Of the various PTC overlay systems that have been developed, the Interoperable Electronic Train Management System (I-ETMS) is the one all seven major freight railroads in the United States plan to implement; it will account for most of the approximately 60,000 miles of track on which PTC is required. Amtrak is implementing ACSES on the Northeast Corridor. Although ACSES and I-ETMS are functionally similar, they differ technologically. To determine train location, ACSES relies on track-embedded transponders, while I-ETMS uses Global Positioning System (GPS) information (see fig. 4). Since most commuter railroads run over tracks owned by freight railroads or Amtrak, they are largely implementing the same systems developed by the freight railroads or Amtrak. For example, eight commuter rail systems that operate over Amtrak infrastructure on the Northeast Corridor--including major commuter systems in the New York City, Philadelphia, and Boston areas--are installing ACSES. FRA has reported that in order to implement PTC, railroads must design, produce, and install more than 20 major components, such as data radios for locomotive communication, locomotive management computers, and back office servers. Once these components are developed and integrated, PTC must then be installed on rail lines throughout the country, which involves upgrading and installing thousands of items, as well as replacing approximately 12,000 signals (see table 1). Adding to the complexity of PTC installation is the need to ensure that individual railroad systems are fully interoperable, which requires that potential problems across railroads be identified, isolated, and corrected through testing in labs and in the field. Railroads have invested billions in PTC implementation to date but anticipate spending billions more. In May 2013, AAR reported that by the end of 2012, railroads had spent about $2.8 billion on PTC implementation. According to AAR, the total cost to freight railroads for PTC implementation is estimated to be approximately $8 billion. Despite the billions railroads have invested, much of the work to implement PTC remains to be done. For example, AAR reported that as of the end of 2012, about a third of wayside interface units--which are needed to communicate data--had been installed. In addition, AAR reported that as of the end of 2012, less than 1 percent of locomotives needing upgrades had been fully equipped. Most railroads report they will not complete PTC implementation by the 2015 deadline due to numerous interrelated challenges caused by the breadth and complexity of PTC.
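The functionally-similar-but-technologically-different point can be pictured as a common interface with different position sources behind it. The sketch below is purely illustrative (the class names, numbers, and dead-reckoning simplification are assumptions, not how ACSES or I-ETMS is actually built); it suggests why interoperability can be specified at the functional level even when the underlying technologies differ.

```python
# Illustrative only: two ways a PTC system might answer "where is the train?"
# behind one interface: transponder-based (ACSES-style) vs. GPS-based
# (I-ETMS-style). Names and values are hypothetical.

from abc import ABC, abstractmethod

class LocationSource(ABC):
    @abstractmethod
    def position_m(self) -> float:
        """Train position along the route, in meters from a reference point."""

class TransponderLocation(LocationSource):
    """Position fixed at each track-embedded transponder, dead-reckoned between."""
    def __init__(self, last_transponder_m: float, odometer_since_m: float):
        self.last_transponder_m = last_transponder_m
        self.odometer_since_m = odometer_since_m
    def position_m(self) -> float:
        return self.last_transponder_m + self.odometer_since_m

class GpsLocation(LocationSource):
    """GPS fix snapped to a position along the route via a track database."""
    def __init__(self, route_offset_m: float):
        self.route_offset_m = route_offset_m
    def position_m(self) -> float:
        return self.route_offset_m

def distance_to_authority_end(src: LocationSource, authority_end_m: float) -> float:
    # The enforcement logic is indifferent to how position was obtained.
    return authority_end_m - src.position_m()

print(distance_to_authority_end(TransponderLocation(12_000.0, 350.0), 15_000.0))  # 2650.0
print(distance_to_authority_end(GpsLocation(12_350.0), 15_000.0))                 # 2650.0
```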
Both AAR and FRA have reported that most railroads will not have PTC fully implemented by the deadline. Of the four major freight railroads we included in our review, BNSF is the only railroad expecting to meet the 2015 deadline. According to BNSF representatives, it is on schedule to meet the 2015 deadline because of its extensive experience working on PTC prior to RSIA, its iterative build-and-test approach, and the concurrent development of its PTC dispatching and back office systems. Representatives of the three remaining freight railroads we spoke to believe their railroads will likely have PTC fully implemented by 2017 or later. In addition, while Amtrak officials report that they anticipate full PTC implementation on their Northeast Corridor and Michigan lines by the end of 2015, they noted it is unlikely they will have equipped the approximately 300 locomotives that will run on I-ETMS freight lines by the deadline. Commuter railroads generally must wait to equip their locomotives until freight railroads and Amtrak equip the rail lines that commuter railroads generally operate on. Four of the seven commuter railroads we included in our review reported that they will be unable to meet the 2015 PTC implementation deadline. Challenges to meeting the 2015 deadline are complex and interrelated. For instance, many of the PTC components had not been developed before RSIA was enacted, and some continue to be in various stages of development. In addition, all components, once developed, must be assembled and integrated to achieve the overall safety function of PTC. Likewise, the steps involved with implementing PTC are interrelated, with problems in one component or process cascading into additional delays. Railroad representatives told us that once all the components have been assembled, integrated, and tested for reliability, rolling out and phasing in a PTC system across each railroad's network will take a considerable amount of time. For example, Amtrak first conducted a demonstration test of its PTC system on its Michigan line in 1996, but it was 5 years later, in 2001, that the system was put into service. Finally, FRA's resources and ability to help facilitate implementation by the 2015 PTC deadline are limited. Below is a discussion of these key interrelated challenges.

Developing system components and PTC installation. Some PTC components are still in development, most notably the I-ETMS back office server. One or more of these servers will be installed in over a dozen railroads' back offices and are needed to communicate vital information between the back office, locomotives, and waysides. According to AAR and the railroads, back office system delays are due to system complexity, interfaces to other systems, and a lack of supplier resources. Nearly all of the freight railroads included in our review anticipate they will not have a final version of the back office system until 2014 and have identified it as one of the critical factors preventing them from meeting the deadline. In addition to component development, PTC installation is a time- and resource-consuming process. For example, railroads collectively will have to install approximately 38,000 wayside interface units. According to AAR and freight railroads, the volume and complexity of installing these units is another significant reason most railroads cannot meet the 2015 deadline. Railroads have also encountered unexpected delays while installing PTC.
For example, the Federal Communications Commission (FCC) recently requested that railroads halt their construction of radio antenna towers to allow FCC to consider how to implement oversight of the towers being installed for PTC. According to FRA and AAR officials, FCC requested that railroads halt construction on antenna towers that have not gone through the environmental evaluation process, including tribal notice, while FCC considers ways to streamline the process. FRA officials told us they did not anticipate this issue. AAR and FRA officials report they are working together with FCC to find a solution that meets the goals behind the process while still allowing for timely PTC deployment. However, halting construction on the towers may result in additional delays in railroads' time frames.

System integration and field testing. Successful PTC implementation will require numerous components to work together, many of which are first-generation technologies being designed and developed for PTC. All components must properly function when integrated or the PTC system could fail. To ensure successful integration, railroads must conduct multiple phases of testing--first in a laboratory environment, then in the field--before installation across the network. Representatives from all of the freight railroads we spoke with expressed concern with the reliability of PTC and emphasized the importance of field testing to ensure that the system performs the way it is intended and that potential defects are identified, corrected, and retested. One railroad representative we spoke with said that in some field tests, the PTC system components behaved differently than in the laboratory tests because labs do not completely reflect field conditions. Identifying the source of these types of problems is an iterative process; consequently, correcting the problems and retesting can be time-consuming and can further contribute to railroads not meeting the 2015 deadline.

FRA resources. Although most railroads we spoke with said they have worked closely with FRA throughout the PTC implementation process, some railroads cited concerns with FRA's limited staffing resources. These concerns focused on two of FRA's responsibilities. First, FRA officials must verify field testing of PTC. However, FRA reported that it lacks the staffing resources to embed a dedicated FRA inspector at each railroad for regular, detailed, and unfiltered reporting on railroads' PTC progress. To address the lack of staff to verify field testing, FRA has taken an audit approach to field testing, whereby railroads submit field test results for approval as part of their safety plans and FRA staff select plans to evaluate the accuracy of the results. Second, before a railroad can operate a PTC system in revenue service, the system must be FRA certified, and FRA must approve the railroad's final safety plan. FRA set no specific deadline for railroads to submit the safety plans, and according to FRA, to date only one railroad has submitted a final safety plan, which FRA has approved. As FRA reported in its 2012 report to Congress, its PTC staff consists of 10 PTC specialists and 1 supervisor who are responsible for the review and approval of all PTC final safety plans. FRA also reported that this work covers the 37 railroads implementing PTC on over 60,000 miles of track.
FRA and railroads have expressed concern that railroads will submit their final safety plans to FRA at approximately the same time, resulting in a potential review backlog, particularly since each plan is expected to consist of hundreds of pages of detailed technical information. FRA officials told us that they are dedicated to the timely approval of safety plans and that their oversight will not impede railroads from meeting the deadline. However, railroads report that their time frames are based on a quick turnaround in approvals from FRA. If approvals are delayed, it could be a further setback in railroads' PTC implementation. Generally, commuter railroads face these same PTC implementation challenges, as well as others. First, because commuter railroads are using the PTC systems developed by freight railroads and Amtrak, they are captive in many respects to the pace of development by those entities and have few means to influence implementation schedules. Commuter railroads also face challenges in funding PTC implementation due to the overall lack of federal funding available for investments in commuter rail and limited sources of revenue. Most commuter railroads are non-profit, public operations that are funded by passenger fares and contributions from federal, state, and local sources. Economic challenges such as the recession have eroded state and local revenue sources that traditionally supported capital expenses. In addition, according to the American Public Transportation Association (APTA), commuter railroads face competing expenses, such as state-of-good-repair upgrades, leaving them with limited funding to implement PTC. According to APTA, collectively, PTC implementation will cost commuter railroads a minimum of $2 billion. Finally, commuter railroads report that obtaining radio frequency spectrum--essential for PTC communications--can be a lengthy and difficult process. FCC directed commuter railroads to secure spectrum on the secondary market. According to the FCC, spectrum is available in the secondary market to meet PTC needs. While freight railroads have secured most of the spectrum needed for PTC implementation, commuter railroads have reported difficulty acquiring spectrum in the 220 megahertz (MHz) band, which is required to operate the data radios that communicate information between PTC components. In particular, railroad representatives said that obtaining spectrum is a critical challenge in high-density urban areas. Without acquiring sufficient spectrum, railroads may be unable to adequately test their PTC systems, potentially causing further delays in meeting the 2015 PTC deadline. By attempting to implement PTC by the 2015 deadline while key components are still in development, railroads may be making choices that introduce financial and operational risks into PTC implementation. Representatives from freight railroads and FRA officials told us that railroads will not compromise the safety functions of the PTC system and will ensure that systems meet the functionality requirements in RSIA. However, freight railroad representatives told us that in order to move toward testing and installation, they compressed time frames and undertook processes in parallel rather than sequentially. For example, railroads took a "double touch" approach to equipping locomotives, which involves taking locomotives out of service twice so that installation could begin while key components and software were still being developed.
Railroad representatives told us this approach is more expensive than installing the equipment after the software is fully mature, as it involves more labor hours and more time that locomotives are out of service. Our prior work on weapon systems development has shown that technologies included in a product development program before they were mature later contributed to cost increases and schedule delays. This work showed that demonstrating a high level of maturity before new technologies are incorporated into a product development plan increases the chances for successful implementation. In 2010, we reported that railroads expected key PTC components to be available by 2012. Railroads have subsequently reported that PTC installation has involved many delays, particularly in component development, and many of the essential components are still in development. Consequently, product maturity remains an issue for some PTC components and may result in additional cost and schedule overruns. The development time frames involved in implementing PTC by the end of 2015 also potentially introduce operational risks. Representatives from all of the freight railroads we spoke with expressed concern regarding the reliability of PTC and noted that adequate field testing was important to identify and correct problems. These representatives noted that without adequate testing, PTC systems may not perform as planned and may be more prone to system reliability issues, possibly causing service disruptions. FRA officials also expressed concern that if pressured to meet the 2015 deadline, railroads might implement an unreliable PTC system that breaks down and leads to operational inefficiencies through slower trains or congestion. In an August 2012 report to Congress, FRA identified three items for consideration in the event Congress amends RSIA. FRA officials told us that if Congress chooses to amend RSIA, additional authority to extend the deadline on certain rail lines, grant provisional certification of PTC systems, and approve the use of alternative safety technologies in lieu of PTC would help them conduct oversight more effectively by providing FRA flexibility in overseeing PTC. Specifically, FRA requested the authority to:

- Extend the deadline on certain rail lines to grant railroads incremental deadlines on a case-by-case basis. FRA officials told us they do not want a deadline extension applied to the whole railroad industry. Rather, FRA would like flexibility to create new deadlines based on an individual railroad's circumstances, particularly a railroad's due diligence in working toward the 2015 deadline and its efforts to mitigate risks. FRA officials said that they currently are unable to approve implementation plans that give completion dates beyond 2015 and that such a change would require railroads to update their implementation plans.

- Grant provisional certification of PTC systems under controlled conditions before final system completion, to allow railroads to operate PTC in certain places while they are still developing it in other places. According to FRA, this would provide assurance that the PTC system was safe, so that a railroad could begin to use the PTC system while FRA reviewed the railroad's full safety plan. FRA and railroads told us the benefit of this authority is that it would allow railroads and the public to experience the safety benefits of PTC sooner. FRA officials said they believed this would provide railroads with additional time to address issues and would lead to the implementation of a more reliable system.

- Approve the use of alternative safety technologies in lieu of PTC, to allow railroads to improve safety and meet many of the functions of PTC through other means. FRA officials told us that they would anticipate using this authority only for commuter and some smaller railroads and would consider technologies in combination with operating rules that railroads demonstrate would enhance safety.

Although some freight railroad representatives we spoke with supported providing FRA with additional authority, others voiced concerns about how the authorities would be administered. For example, details such as how FRA would identify and apply criteria to determine which railroads should receive extensions would need to be addressed. In addition, one freight railroad representative raised concerns over the timeliness of FRA's determinations of deadline extensions. Furthermore, representatives from another railroad suggested that granting deadline extensions to some railroads would unfairly penalize those railroads that may meet the PTC deadline. FRA could not provide us with specific information detailing how these authorities would be applied. However, if Congress were to amend RSIA to provide FRA additional authorities in implementing PTC, the Secretary of Transportation would need to direct FRA to develop new regulations or orders in order to carry out its duties. At a June 2013 hearing on rail safety, AAR and APTA stated their support for FRA's request for additional authority and for extending the PTC implementation deadline to December 31, 2018, for all railroads. In addition, AAR recommended that the Secretary of Transportation be given the authority to grant railroads extensions beyond a December 2018 deadline. In particular, AAR stated its support for FRA's request for flexibility to extend the deadline and previously noted that FRA's request to provide provisional certification of PTC systems could reduce delays. According to AAR, these authorities could provide some relief to railroads experiencing challenges in meeting the deadline. APTA, representing commuter railroads, also supported FRA's request for additional authority and specifically stated its support for allowing FRA to consider alternative technologies in lieu of a PTC system on specified line segments. According to APTA's testimony statement, some commuter railroads already have collision avoidance systems in place that protect against train-to-train collisions. According to APTA, allowing FRA to examine the feasibility of alternative technologies to PTC for some of the smaller railroads on a line-by-line basis could free up PTC components for other railroads and expedite their PTC implementation. While an extension of the PTC implementation deadline may provide railroads with additional time to implement PTC, it is not clear that all railroads would be able to meet the revised December 31, 2018, deadline proposed by AAR and APTA. For example, AAR's May 2013 report predicts that, while PTC could be in operation on most mandated PTC routes by December 31, 2018, the date PTC will be in operation on all routes would vary by railroad. One freight railroad we spoke to anticipated it would not be able to fully implement PTC until 2020.
In addition, given that many commuter railroads are waiting for freight railroads to develop and implement PTC, most commuter railroads will likely complete PTC installation after the freight railroads do. Furthermore, in a hearing statement, AAR recommended flexibility beyond December 2018 due to the unprecedented nature of PTC and the uncertainties--both known and unknown--of implementation. Given the uncertainties in implementing PTC and the unexpected delays already encountered, additional challenges could prevent railroads from meeting a new deadline. However, FRA's request for additional authority could provide railroads the flexibility to implement PTC on individual, case-by-case deadlines, either instead of or in addition to an overall deadline extension. Additional authority could also assist FRA in managing its limited staff resources and help railroads mitigate risks and ensure PTC is implemented in a safe and reliable manner. For example, although concerns were raised at the June 2013 rail safety hearing that providing railroads deadline extensions on a case-by-case basis would be resource-intensive and could create additional challenges and delays, we found that railroads were at various stages of implementation. Flexibility in extending the deadline for certain railroads acknowledges these differences and also may help FRA better manage limited resources by, for example, preventing a potential review backlog resulting from final safety plans being submitted at the same time--a concern raised by freight railroads and FRA. In addition, according to FRA, allowing provisional certification of PTC systems not only helps to manage limited resources, it also reflects good engineering practice in implementing wide-ranging, complex systems and is a well-documented risk mitigation strategy. Finally, as outlined in APTA's testimony at the June 2013 hearing on rail safety, allowing some railroads to use alternative technologies on certain lines could provide relief to other railroads struggling to procure certain PTC components. FRA's final regulatory impact analysis for the 2010 final rule estimated that the costs of PTC installation far outweigh the safety benefits. The regulatory impact analysis assesses the costs and benefits associated with implementing a PTC system on qualifying rail segments. FRA estimated the total costs of implementing PTC to be about $13.2 billion and the total safety benefits to be about $674 million. Costs that FRA anticipated would accrue to railroads through the implementation of PTC included:

- development of implementation plans and administrative functions related to the implementation and operation of PTC systems, including the information technology and communication systems that make up the central office;
- hardware costs for onboard locomotive-system components, including installation;
- hardware costs for wayside system components, including installation; and
- maintenance costs for all system components.

FRA expects that PTC implementation will generate safety benefits from the reduction in the risk of certain types of accidents and in the number and severity of casualties caused by train accidents on lines equipped with PTC systems. FRA also estimated benefits related to accident prevention, such as reductions in property damage, equipment cleanup, environmental damage, train delays resulting from track closures, road closures, emergency response, and evacuations.
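Taking the rounded totals above at face value (FRA's published totals are present values accumulated over a multiyear analysis period, so this simple subtraction is only a rough gauge of scale):

\[
\text{Net safety benefits} \approx \$0.674\ \text{billion} - \$13.2\ \text{billion} \approx -\$12.5\ \text{billion}, \qquad
\frac{\text{benefits}}{\text{costs}} \approx \frac{0.674}{13.2} \approx 0.05 .
\]

That is, the quantified safety benefits amount to roughly 5 cents per dollar of estimated cost, which is why the potential business benefits discussed below figure so prominently in discussions of PTC's value beyond the statutory mandate.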
In addition to these safety benefits, FRA's regulatory impact analysis stated that after PTC systems are refined, business benefits resulting from more efficient railroad operations could be forthcoming. FRA did not, however, include business benefits in its impact analysis estimates because of significant uncertainties regarding whether and when such benefits would be achieved. We found that FRA generally followed OMB guidance in assessing the benefits and costs of implementing PTC, and although we generally agree with FRA's estimation that costs likely outweigh benefits, we are not confident in the precision of the specific estimates of costs and benefits. Specifically, we compared FRA's regulatory impact analyses with key elements of OMB guidelines, including establishing a baseline, considering alternatives, analyzing uncertainty, and quantifying key categories of costs and benefits. We identified some limitations in the analyses; for example, the analyses are not comprehensive in some respects, and the source and quality of some of the underlying data are unclear. According to FRA officials, the limitations in its analysis and data do not affect the primary outcome of the analysis--that total costs are expected to exceed total safety benefits (i.e., that there are negative net societal benefits). Based on our review, we also believe the limitations we identified were not significant enough to affect FRA's general determination that PTC's implementation costs outweigh benefits. (See app. II for more detail on our assessment of FRA's regulatory impact analyses and findings.) The PTC mandate limited the flexibility and time available to FRA to develop a rule and analyze its economic impacts; nonetheless, more thorough analyses and better quality data could have made the benefit-cost analysis more useful in discussions of PTC implementation. FRA's PTC rulemaking was initiated to implement PTC, as required by RSIA. Specifically, RSIA mandated the installation of a PTC system, which can achieve certain safety benefits, and specified the system's functional requirements and the 2015 implementation deadline. FRA had little latitude to implement other, non-PTC alternatives that might have been less costly to achieve the same safety benefits. In addition, FRA officials told us that because the PTC rulemaking process was expedited, they had to use the information that was available to them at the time to conduct their analysis. However, we found that some information was up to 10 years old and that the quality of some of the underlying data was unclear. Finally, FRA excluded business benefits from its estimates, instead opting to include a discussion of potential business benefits in an appendix to its analysis. FRA officials said that they excluded business benefits from their analysis due to uncertainty about whether and when business benefits could be achieved. While we found this decision appropriate, we found limitations in the discussion of business benefits. For example, FRA assumed that railroads would achieve business benefits associated with a standalone PTC system but did not include supporting evidence that railroads would likely install such a system. Although an overlay PTC system alone is not expected to generate business benefits, over time and with additional investments, there may be opportunities for railroads to achieve some business benefits. PTC implementation involves upgrades that railroads could integrate with existing technologies to provide operational enhancements.
As previously discussed, railroads are making substantial investments in their rail network infrastructure to implement PTC. These investments include (1) upgrading existing wayside and office subsystem components; (2) installing a new communication infrastructure to facilitate the communication of train speed, train location, work zone, and switch information; and (3) developing detailed geographic information system (GIS) mappings of an entire rail network. The first two investments can help to generate information that can be shared with other applications, such as train dispatching software and energy management systems, to potentially produce business benefits, while the detailed GIS mapping can be used to support a railroad's state of good repair. More specific train location and speed data for use in other applications such as precision dispatching could help to improve train dispatching, potentially increasing network capacity. The PTC overlay systems railroads are installing require changes to most dispatching systems to account for more precise train location information. For example, according to AAR and FRA, most railroad dispatching systems, which currently require location information within one-tenth of a mile, are being upgraded as part of PTC to require location information within a ten-thousandth of a mile. According to a freight railroad representative we spoke to, using the more detailed train location information from PTC could help dispatchers better prioritize train movements based on a train's delivery schedule and better manage "meet and pass" operations (when two trains approach each other on a single track). PTC, however, is not a prerequisite for precision dispatching. For example, one freight railroad representative told us that their railroad is already using alternative means, independent of PTC, to enable real-time train position reporting to improve dispatching. Nonetheless, the PTC system being installed is also expected to provide this information. In addition, according to a supplier we included in our review, PTC could enable the development of additional features, such as precision dispatching. Representatives from another supplier we spoke with said they anticipate that railroads will use PTC-generated train location information for improved dispatching in the future, after the initial rollout of PTC. PTC-generated information and data also could help railroads achieve greater fuel savings than they are currently achieving with their energy management systems. An energy management system is an onboard technology that uses a variety of information, including train location and track elevation and curvature, to calculate a train's most fuel-efficient run and make throttle and braking recommendations to the operator to minimize the train's consumption of fuel. PTC can generate information that could assist energy management systems in two ways. First, PTC systems are being designed to enforce compliance with safety parameters, such as speed restrictions, that trains encounter when traveling from origin to destination. According to an energy management system supplier and freight railroad representatives we spoke with, these parameters could be used to make train movement calculations based on the PTC safety parameters governing the route, which is information currently unavailable to such systems.
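To put those dispatching tolerances in perspective, a straightforward unit conversion (1 mile = 5,280 feet):

\[
\tfrac{1}{10}\ \text{mile} = 528\ \text{feet}, \qquad
\tfrac{1}{10{,}000}\ \text{mile} = 0.528\ \text{feet} \approx 6.3\ \text{inches},
\]

an improvement in positional resolution of three orders of magnitude.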
Second, railroads are developing more detailed mapping of their rail networks, including critical features such as signals and switches, and putting this information into a track database as part of PTC implementation. According to an energy management system supplier we spoke with, this more precise information, which is needed for the PTC system to calculate train safety stopping distances, could enhance the performance of railroads' existing energy management systems through more accurate information on track features. Representatives from all of the freight railroads we spoke with reported already achieving fuel savings through energy management systems but noted that there may be potential for additional savings by integrating these systems with these PTC components. For example, one freight railroad representative reported that the railroad's energy management system currently provides annual fuel savings of 4 to 6 percent, but that integrating the system with PTC could lead to an additional 1 to 2 percent in fuel savings. Freight railroad officials we spoke with generally expressed interest in pursuing PTC-related business benefits but noted they are currently focused on installing PTC and are devoting their time and resources to that effort. For example, one freight railroad representative told us the railroad has not had time to fully think through how to achieve business benefits using the PTC system, since all resources are currently focused on implementing PTC, and noted that these benefits were incremental and could likely be achieved outside of PTC. However, railroad representatives from the four freight railroads we spoke with said they would explore ways to leverage the safety investment they are making in PTC to obtain additional business benefits once the PTC system is fully implemented and operating. These railroads emphasized that pursuing business benefits will involve additional investments beyond their current investments in PTC installation. Nevertheless, railroad representatives also identified a number of concerns about attempting to achieve business benefits through PTC systems. First, some business benefits are already being achieved through existing technologies. Second, the potential for significant PTC business benefits is still not clear. For example, according to one railroad representative, despite his railroad's long history with PTC, it is still unsure of the potential for PTC to achieve business benefits. PTC is a new technology, and system components are still being developed. After the safety functionalities of the system have been tested and deployed, representatives will be able to determine what additional functionality (e.g., operational efficiencies) can be achieved through PTC implementation. Third, any additional functionality pursued for business benefits must be implemented in a way that does not compromise the system's underlying safety functions. FRA officials told us that when integrating PTC with other systems to achieve business benefits, railroads must be careful not to compromise the integrity of the PTC system's underlying safety functions. According to a PTC supplier, delaying the introduction of any business benefit features to the PTC system may help railroads avoid complicating the initial deployment of PTC. Representatives from one freight railroad we spoke with anticipated that railroads would, with additional investment, begin to achieve business benefits through PTC over the next two decades as PTC is fully installed and operational.
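A rough illustration of what that increment could mean in dollar terms, using the percentages that representative cited; the annual fuel spend below is a hypothetical figure chosen for illustration, not any railroad's actual outlay:

```python
# Illustrative fuel-savings arithmetic. The 4-6% (EMS alone) and additional
# 1-2% (EMS integrated with PTC data) figures come from one railroad
# representative; the $1 billion annual fuel spend is hypothetical.

annual_fuel_spend = 1_000_000_000  # dollars, hypothetical

for label, ems_pct, ptc_pct in [("low end", 0.04, 0.01), ("high end", 0.06, 0.02)]:
    ems_savings = annual_fuel_spend * ems_pct
    ptc_increment = annual_fuel_spend * ptc_pct
    print(f"{label}: EMS alone ${ems_savings/1e6:.0f}M/yr; "
          f"PTC integration adds ${ptc_increment/1e6:.0f}M/yr")
# low end:  EMS alone $40M/yr; PTC integration adds $10M/yr
# high end: EMS alone $60M/yr; PTC integration adds $20M/yr
```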
In the wake of the 2008 Chatsworth commuter rail accident that resulted in 25 deaths and over 100 injuries, RSIA was enacted, marking a public policy decision that rail safety warranted mandatory and accelerated PTC system installation. PTC implementation is a massive, complex, and expensive undertaking. Amid numerous implementation challenges, it appears that most railroads will not fully implement PTC by the December 31, 2015, deadline. Given the state of PTC technology and the myriad PTC components that must seamlessly work together, the potential risks railroads may be taking in attempting to meet the deadline should be considered. Accordingly, FRA has requested additional authorities that could allow FRA to better manage its limited resources and give railroads the flexibility to take a more measured approach to PTC implementation, potentially mitigating some implementation risks. AAR and others have proposed extending the PTC implementation deadline to December 31, 2018, and agree that providing FRA with additional authorities could increase flexibility in managing PTC implementation. Given all the uncertainties in implementing PTC technology, it is not clear that 2018 will allow sufficient time for railroads to fully implement PTC. Consequently, Congress, the railroads, and FRA may end up in the same position they are currently in, with an impending deadline and not enough flexibility to ensure that all railroads fully implement PTC both reliably and expediently. Regardless of whether the deadline is extended for the industry as a whole or FRA is given the flexibility to grant extensions to railroads on a case-by-case basis--upon consideration of railroads' due diligence in implementing PTC--action is needed to help FRA better manage its limited resources and address the reality of PTC implementation, which is that different railroads are at different stages. To help ensure that the Federal Railroad Administration manages its limited resources and provides flexibility to railroads in implementing PTC, Congress should consider amending RSIA as requested in FRA's August 2012 PTC Implementation Status Report to Congress, including granting FRA the authority to:

- extend the deadline on individual rail lines--when the need to do so can be demonstrated by the railroad and verified by FRA--to grant railroads incremental deadlines on a case-by-case basis;
- grant provisional certification of PTC systems under controlled conditions before final system completion; and
- approve the use of alternative safety technologies in lieu of PTC to allow railroads to improve safety and meet many of the functions of PTC through other means.

We provided a draft of this report to the Secretary of Transportation for review and comment. DOT provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This report discusses (1) how railroads are implementing positive train control (PTC) and the challenges, if any, to meeting the PTC implementation deadline; and (2) FRA's estimates of the benefits and costs of PTC and the extent to which railroads might be able to leverage PTC technology to achieve business benefits. To obtain information about how railroads are implementing PTC and the challenges to meeting the PTC implementation deadline, we interviewed representatives from the four largest Class I freight railroads--BNSF Railway, CSX Corporation, Norfolk Southern, and Union Pacific--and Amtrak. We also interviewed representatives from seven commuter railroads:

- Massachusetts Bay Transportation Authority (Boston, Massachusetts)
- Metropolitan Transportation Authority (MTA) Long Island Rail Road (New York, New York)
- MTA Metro-North Railroad (New York, New York)
- Southern California Regional Rail Authority, also known as Metrolink (Los Angeles, California)
- Southeastern Pennsylvania Transportation Authority (SEPTA) (Philadelphia, Pennsylvania)
- Utah Transit Authority (Salt Lake City, Utah)
- Virginia Railway Express (Washington, D.C.)

We selected the commuter railroads to represent a range of geographic locations, levels of ridership, and PTC implementation status, while selecting railroads that had a mix of operations, including those operating on tracks owned by all four of the largest Class I railroads and Amtrak. We also interviewed or received written responses from representatives from selected rail supply companies (New York Air Brake, Wabtec, MeteorComm, and Parsons); railroad industry associations (the Association of American Railroads (AAR), the American Short Line and Regional Railroad Association, and the American Public Transportation Association (APTA)); the Chlorine Institute; six experts; and FRA. We selected the railroad supply companies based on the types of products and services provided, railroad clients, and recommendations from FRA, associations, and experts. We selected experts based on their experience working on PTC, independence from current PTC work, and recommendations from associations and other experts. We also reviewed PTC development and implementation requirements in the Rail Safety Improvement Act of 2008 and FRA regulations; FRA's 2012 report to Congress on Positive Train Control Implementation Status, Issues, and Impacts; and prior GAO reports. We attended the Railway Age International Conference on Communications-Based Train Control in Washington, D.C., and the National Transportation Safety Board Forum on Positive Train Control Implementation. We visited and met with officials at the Southern California Regional Rail Authority in Los Angeles, California, and with Amtrak officials in Wilmington, Delaware, to witness computer simulations of PTC and view PTC trackside components. In addition, we visited and met with officials at SEPTA in Philadelphia, Pennsylvania. To understand how FRA estimated the benefits and costs of PTC in its rulemakings, we reviewed the 2010 and 2012 PTC rules and the supporting proposed and final regulatory impact analyses, and we interviewed representatives from FRA.
To review the quality of the regulatory impact analyses, we used key elements in the OMB economic guidelines (Circular A-4) as criteria, including:

- use of an appropriate baseline from which to estimate benefits and costs;
- assessment of a range of alternatives;
- inclusion of all key categories of benefits and costs;
- use of the best available information in analyzing benefits and costs; and
- analysis of uncertainty.

In addition, to better understand the potential economic effect of the rules and the changes that FRA made in response to comments, we reviewed public comments submitted to FRA in response to the rulemakings, and we interviewed FRA officials, stakeholder groups (AAR and the Chlorine Institute), PTC technology and railroad industry experts, economists, and railway supply companies. We did not independently analyze the benefits and costs of FRA's PTC regulations. Since the rulemaking was in response to a mandate, we focused on the information contained in the benefit-cost analyses and did not comment on the overall rule. To determine the extent to which railroads might be able to leverage PTC technology to achieve business benefits, we interviewed representatives from the previously mentioned Class I freight railroads, Amtrak, seven commuter railroads, association officials, experts, railroad supply companies, and FRA to learn about plans to leverage PTC to achieve business benefits as well as existing technologies that could potentially be used to achieve business benefits. We reviewed documentation from an array of sources, including FRA, AAR, the Chlorine Institute, and PTC experts, to determine the types of technology that could potentially be used to achieve PTC business benefits and the extent to which railroads can leverage PTC technology to achieve business benefits. FRA issued regulatory impact analyses that examined the economic impact of the implementation of RSIA and generally found that the costs far outweighed the benefits of PTC installation. Specifically, the December 2009 final regulatory impact analysis concluded that the costs to comply with the regulation far exceeded the safety benefits of PTC. The January 2012 final regulatory impact analysis evaluated the costs and benefits of the final rule (i.e., the elimination of the two risk-based tests for exempting certain rail segments from the PTC requirement) and found that the benefits, which were the costs saved by installing PTC on fewer rail lines, outweighed the costs, which were the increased risk of train incidents as a result of PTC no longer being required along 10,000 miles of track. However, this final rule did not alter the December 2009 analysis's conclusion that the costs of PTC far outweighed the safety benefits. We reviewed FRA's 2009 and 2012 regulatory impact analyses using OMB guidance for developing regulatory impact analyses and found that, although FRA generally followed OMB guidance in assessing the costs and benefits of implementing PTC, the quality of some of the underlying data suggests some limitations in the analyses. Specifically, we found:

- Although FRA established a baseline and considered one alternative, an analysis of other alternatives in the implementation of PTC may have been useful.
- FRA analyzed uncertainty associated with cost estimates, but not with safety benefit estimates.
- FRA included key costs in its analysis, but excluded the cost of implementation to the government.
In addition, to better understand the potential economic effect of the rules and the changes that FRA made in response to comments, we reviewed the public comments submitted to FRA in response to the rulemakings, and we interviewed FRA officials, stakeholder groups (AAR and the Chlorine Institute), PTC technology and railroad industry experts, economists, and railway supply companies. We did not independently analyze the benefits and costs of FRA’s PTC regulations. Because the rulemaking responds to a statutory mandate, we focused on the information contained in the benefit-cost analyses and did not comment on the overall rule.

To determine the extent to which railroads might be able to leverage PTC technology to achieve business benefits, we interviewed representatives from the previously mentioned Class I freight railroads, Amtrak, the seven commuter railroads, associations, experts, rail supply companies, and FRA to learn about plans to leverage PTC for business benefits, as well as about existing technologies that could potentially be used to achieve such benefits. We also reviewed documentation from an array of sources, including FRA, AAR, the Chlorine Institute, and PTC experts, to determine the types of technology that could potentially support PTC business benefits and the extent to which railroads can leverage PTC technology to achieve them.

FRA issued regulatory impact analyses that examined the economic impact of implementing RSIA and generally found that the costs of PTC installation far outweighed the benefits. Specifically, the December 2009 final regulatory impact analysis concluded that the costs of complying with the regulation far exceeded the safety benefits of PTC. The January 2012 final regulatory impact analysis evaluated the costs and benefits of the final rule (i.e., eliminating the two risk-based tests for exempting certain rail segments from the PTC requirement) and found that the benefits, which were the costs saved by installing PTC on fewer rail lines, outweighed the costs, which were the increased risk of train incidents as a result of PTC no longer being required along 10,000 miles of track. However, this final rule did not alter the December 2009 analysis’s conclusion that the costs of PTC far outweighed the safety benefits.

We reviewed FRA’s 2009 and 2012 regulatory impact analyses against OMB’s guidance for developing such analyses and found that, although FRA generally followed the guidance in assessing the costs and benefits of implementing PTC, the quality of some of the underlying data suggests limitations in the analyses. Specifically, we found the following: although FRA established a baseline and considered one alternative, an analysis of other alternatives for implementing PTC may have been useful; FRA analyzed the uncertainty associated with cost estimates but not with safety benefit estimates; FRA included key costs in its analysis but excluded the cost of implementation to the government; and data and computations underlying the analysis were not clearly sourced and explained, and for some data the quality was unclear, resulting in a lack of transparency. See Table 2 for a discussion of these findings, including the OMB guidance, what FRA did in the December 2009 analysis, what FRA did in the January 2012 analysis, and our analysis.

In addition to the contact named above, Sharon Silas (Assistant Director); Richard Bulman; Tim Guinane; Delwen Jones; Emily Larson; Sara Ann Moessbauer; Josh Ormond; Madhav Panwar; and Crystal Wesco made key contributions to this report.
In the wake of a 2008 commuter train collision that resulted in 25 fatalities, RSIA was enacted. It requires major freight railroads, Amtrak, and commuter railroads to install PTC on many major routes by the end of 2015. PTC implementation, overseen by FRA, is a complex endeavor that touches almost every aspect of train operations on major lines. According to FRA, 37 railroads are required to implement PTC. GAO was asked to examine the status of PTC implementation. This report discusses, among other things, railroads' implementation of PTC to date and the challenges, if any, to meeting the 2015 deadline. GAO interviewed representatives from Amtrak, the four largest freight railroads, and seven commuter railroads, selected to represent a mix of locations, ridership levels, and PTC implementation status. GAO also interviewed PTC experts and suppliers, and reviewed FRA's PTC regulatory impact analyses.

To install positive train control (PTC), a communications-based system designed to prevent certain types of train accidents caused by human factors, almost all railroads are overlaying their existing infrastructure with PTC components; nonetheless, most railroads report they will miss the December 31, 2015, implementation deadline. Both the Association of American Railroads (AAR) and the Federal Railroad Administration (FRA) have reported that most railroads will not have PTC fully implemented by the deadline. Of the four major freight railroads included in GAO's review, only one expects to meet the 2015 deadline. The other three freight railroads report that they expect to have PTC implemented by 2017 or later. Commuter railroads generally must wait until freight railroads and Amtrak equip the rail lines they operate on, and most of the seven commuter railroads included in this review reported that they do not expect to meet the 2015 deadline.

To implement PTC systems that meet the requirements of the Rail Safety Improvement Act of 2008 (RSIA), railroads are developing more than 20 major components that are currently in various stages of development, integrating them, and installing them across the rail network. AAR recently reported that by the end of 2012, railroads had spent $2.8 billion on PTC implementation. AAR estimates that freight railroads will spend approximately $8 billion in total to implement PTC, while the American Public Transportation Association (APTA) estimates that commuter railroads will spend a minimum of $2 billion. Much of the work to implement PTC remains to be done. For example, AAR reported that as of the end of 2012, about a third of the wayside interface units, which are needed to communicate data, had been installed and that less than 1 percent of the locomotives needing upgrades had been fully equipped.

Most railroads report they will not complete PTC implementation by the 2015 deadline due to a number of complex and interrelated challenges. Many PTC components continue to be in various stages of development, and to ensure successful integration of these components, railroads must conduct multiple phases of testing before components are installed across the network. Also, some railroads raised concerns regarding FRA's limited staff resources in two areas: verification of field tests and timely certification of PTC systems. Commuter railroads face additional challenges, such as obtaining the radio frequency spectrum that is essential for PTC communications.
By attempting to implement PTC by the 2015 deadline while key components are still in development, railroads could be introducing financial and operational risks. For example, officials from railroads and FRA said that without adequate testing, PTC systems might be more prone to reliability issues. To mitigate risks, provide flexibility in meeting the PTC deadline, and better manage limited resources, FRA has requested that Congress amend RSIA to provide additional authorities for implementing PTC. Specifically, FRA requested authority to extend the deadline on certain rail lines, to grant provisional certification of PTC systems, and to approve the use of alternative safety technologies in lieu of PTC. Flexibility in extending the deadline for certain railroads acknowledges differences in railroads' implementation schedules and may also help FRA better manage its limited resources by, for example, preventing a potential review backlog that could result from most railroads' submitting final safety plans at the same time, a concern raised by both freight railroads and FRA.

Given the implementation challenges railroads face in meeting the deadline, and to help FRA manage its limited resources, Congress should consider amending RSIA as FRA has requested. Specifically, Congress should consider granting FRA the authority to extend the deadline on certain rail lines on a case-by-case basis, to grant provisional certification of PTC systems, and to approve the use of alternative safety technologies in lieu of PTC to improve safety. DOT reviewed a draft of this report and provided technical comments, which were incorporated as appropriate.
For the purposes of this report, foreign assistance is any tangible or intangible item provided by the U.S. government to a foreign country or international organization, including but not limited to any training, service, or technical advice; any item of real, personal, or mixed property; any agricultural commodity; U.S. dollars; and any currencies of any foreign country that are owned by the U.S. government. Foreign assistance has grown in complexity in recent years as the United States, through the efforts of a wide spectrum of U.S. agencies, has used foreign aid to address transforming events such as the end of the Cold War; the terrorist attacks of September 11, 2001; and the HIV/AIDS pandemic. This report focuses on bilateral foreign assistance, which includes development assistance programs to promote sustainable economic progress and stability; economic assistance in support of U.S. political and security goals; and humanitarian assistance, which primarily addresses immediate humanitarian emergencies.

Provisions in the Foreign Assistance Act of 1961, as amended (FAA), and Public Law 480 (P.L. 480) are the statutory basis for existing regulations and policies for marking and publicizing most U.S. foreign assistance. Specifically, section 641 of the FAA provides that “programs under this Act shall be identified appropriately overseas as ‘American Aid.’” Section 202 of P.L. 480 requires that, to the extent practicable, commodities provided under that act be clearly identified with appropriate markings in the local language as being furnished by “the people of the United States.” In addition, section 403(f) of P.L. 480 requires that foreign countries and private entities receiving P.L. 480 commodities widely publicize, “to the extent practicable,” in the media that the commodities are provided “through the friendship of the American people as food for peace.” However, a major foreign assistance agency, the Millennium Challenge Corporation (MCC), which is authorized and funded under legislation other than the FAA or P.L. 480, is not subject to explicit statutory marking or publicity requirements. The 2004 Intelligence Reform Act, in establishing broad public diplomacy responsibilities for the Department of State, assigned State a coordination role regarding marking and publicizing U.S. foreign assistance and called for closer cooperation between State and USAID in these efforts. Appendix IV provides more detailed information on the statutory provisions and agencies’ policies, regulations, and guidelines for marking and publicizing U.S. foreign assistance.

To better coordinate U.S. foreign assistance activities, the Secretary of State appointed a Director of U.S. Foreign Assistance (DFA) in January 2006, who is charged with directing the transformation of the U.S. government approach to foreign assistance. The DFA serves concurrently as USAID Administrator, ensuring that foreign assistance is used as effectively as possible to meet broad foreign policy objectives. The DFA has authority over all USAID and most State foreign assistance funding and programs, with continued participation in program planning, implementation, and oversight conducted by the various bureaus and offices within State and USAID, as part of the integrated interagency planning, coordination, and implementation mechanisms; develops a coordinated U.S.
government foreign assistance strategy, including multiyear country-specific assistance strategies and annual country-specific assistance operational plans; creates and directs consolidated policy, planning, budget, and implementation mechanisms and staff functions required to provide overarching leadership to foreign assistance; and provides guidance for foreign assistance delivered through other agencies and entities of the U.S. government, including MCC and the Office of the U.S. Global AIDS Coordinator.

Most of the agencies we reviewed that are involved in foreign assistance activities have established some marking policies, regulations, and guidelines. USAID has established the most detailed policies, regulations, and guidelines for marking and publicizing its assistance. State has also established marking and publicity policies for two presidential initiatives, the Middle East Partnership Initiative (MEPI) and the President’s Emergency Plan for AIDS Relief (PEPFAR). However, the efforts of USDA, DOD, HHS, Treasury, and MCC in this area have been more limited. Because Justice does not have independent authority to conduct foreign assistance but implements politically sensitive programs for State and USAID, it has not established departmentwide marking or publicity policies; instead, it allows its component agencies to determine when it is appropriate to mark and publicize their activities. Appendix IV provides the statutory provisions and agencies’ policies, regulations, and guidelines for marking or publicizing U.S. foreign assistance.

To ensure that U.S. taxpayers receive full credit for the foreign assistance they provide, USAID in 2004 undertook a campaign to clearly communicate that USAID foreign assistance is from the American people. This campaign included publication of a Graphic Standards Manual containing new marking guidelines and the development of a new Graphic Identity. In January 2006, USAID revised its foreign assistance awards regulations to include new marking requirements for USAID staff and all nongovernmental organizations (NGOs) receiving funding under grants and cooperative agreements. The regulations require that all programs, projects, activities, public communications, and commodities partially or fully funded by a USAID grant or cooperative agreement be marked appropriately overseas with the Graphic Standards Manual’s Graphic Identity, of a size and prominence equal to or greater than the recipient’s or other donors’ logos or identities. The regulations provide for presumptive exceptions and waivers to the marking requirements. USAID’s final guidance for contractors, ADS 320, issued January 8, 2007, includes more comprehensive information on the process for preparing and approving marking plans and branding strategies in contracts and also eliminates the use of the USAID brand on NGOs’ and contractors’ business cards.

Also, USAID’s Food for Peace program regulations prescribe the terms and conditions governing activities under Title II of P.L. 480, including provisions for implementing the marking requirements of section 202 of that law. The regulations require that, to the maximum extent practical, public recognition be given in the media that Title II-funded commodities or foreign assistance have been “provided through the friendship of the American people as food for peace”; that cooperating sponsors, to the extent feasible, display banners, posters, and similar items at project sites containing similar identifying information; and that, unless otherwise specified, bags or other containers of commodities packaged for shipment be similarly marked.
The regulations also require that containers of donated commodities packaged or repackaged by cooperating sponsors prior to distribution be plainly labeled with the USAID emblem and, where practicable, with the legend “Provided through the friendship of the American people as food for peace.” In addition, USAID has established regulations prescribing rules and procedures for the marking of shipping containers and commodities under commodity transactions financed by USAID. These regulations require that suppliers of such commodities be responsible for ensuring that all export packaging and the commodities carry the official USAID emblem, except where USAID prescribes otherwise in the case of commodities. The regulations also prescribe the manner in which export shipping containers, cartons, or boxes are to be marked; how the USAID emblem is to be affixed to the containers; the size, design, and color of the emblem; exceptions to the requirement to affix the emblem; and waivers to the marking requirement where it is found to be impracticable.

To publicize its foreign assistance, USAID in 2004 established communications guidelines and a network of over 100 communications specialists located at USAID missions around the world to promote the agency’s foreign assistance abroad. The guidelines for communications specialists delineate their role, which is to be a comprehensive resource for information regarding USAID’s work and its impact on the citizens of the host country, and provide guidance on the activities the communications specialists may undertake to fulfill this role. These outreach functions include responding to inquiries about USAID programs, collaborating with the embassy public affairs office on strategies, writing speeches for the USAID mission director and others, preparing press releases, and coordinating Web site updates.

According to State officials, State’s policy provides that department program managers and country ambassadors use their discretion to determine when it is appropriate to mark and publicize U.S. foreign assistance. As a result, some programs mark and publicize activities while others do not. For example, State has established guidelines for project implementers to acknowledge State’s support for two presidential initiatives that State manages: MEPI and PEPFAR. The MEPI guidelines require NGOs that implement MEPI programs to include, in all public programs and publications, standard language acknowledging the support of MEPI and State. For PEPFAR, the Office of the U.S. Global AIDS Coordinator has instructed its implementing agencies to place the PEPFAR logo on all materials procured as part of the PEPFAR initiative. However, more sensitive Department of State activities are generally not marked or publicized. For example, according to State officials, in Peru it is embassy policy to decide on a case-by-case basis, in close consultation with the host government, the appropriate type and extent of publicity to give counter-narcotics foreign assistance activities done in partnership with the host government. In addition, State officials noted that other assistance programs, such as those focusing on counterterrorism and weapons proliferation, are generally not marked, but these efforts may be publicized.
Agencies’ efforts to establish requirements or guidance for marking and publicizing their foreign assistance include the following:

USDA has issued regulations for its Foreign Agricultural Service that establish labeling requirements for commodities donated under its program for international food for education and child nutrition. The regulations require containers of commodities packaged or repackaged by a cooperating sponsor to indicate that the commodities are furnished by the people of the United States of America; if the commodities are not packaged, the cooperating sponsor must display such items as banners or posters with similar information. The Foreign Agricultural Service also has included standard language in all its food aid agreements with its implementing partners requiring them to highlight their programs in local media in the recipient country, to identify USDA as the funding source in the media and to program participants, and to recognize USDA in all USDA-funded printed material.

DOD has established policy and program guidance for publicizing overseas humanitarian activities to ensure their maximum visibility and publicity. The policy and guidance provide that project planners and implementers will coordinate appropriate public affairs activities with embassy and combatant command public affairs officers and, where appropriate, provide some tangible or visible marker of DOD involvement at the site of the activity.

HHS has established its own policies related to marking and publicizing HHS activities. HHS officials told us that the agency’s departmentwide grants policy, as required by its annual appropriations acts, provides that all HHS grant recipients must acknowledge U.S. assistance when publicly describing a project. Also, HHS health projects are generally marked with the logos of HHS and the other HHS units, such as the Centers for Disease Control or the National Institutes of Health, that are involved in implementing the foreign assistance. HHS carries out foreign assistance programs under PEPFAR and the President’s Malaria Initiative; HHS officials stated that the Office of the U.S. Global AIDS Coordinator has instructed HHS and its operating divisions to place the PEPFAR logo on all materials procured as part of the PEPFAR initiative.

Treasury officials said they were not aware of an agencywide policy on marking and publicizing foreign assistance activities. However, Treasury’s Office of Technical Assistance (OTA) issued its own marking policy, effective December 7, 2006, for certain types of foreign assistance provided by that office. This policy requires that the foreign assistance be identified with the seal of the Treasury and the tagline “From the American People.” The policy covers any material, goods, or equipment provided by OTA to foreign government agencies or central banks; any public communications intended for distribution to foreign government officials; and any training courses or conferences sponsored and financed by OTA for the benefit of foreign government officials. In addition, the policy contains presumptive exceptions for waiving the marking requirements.

While MCC’s organic legislation, the Millennium Challenge Act of 2003, does not contain an explicit marking or publicity requirement for the foreign assistance it authorizes, MCC provides for such a requirement in its country compacts.
MCC has distributed a marking and publicity policy that, according to agency officials, requires recipient countries and accountable entities to follow marking and publicity requirements that acknowledge MCC foreign assistance as being from the American people. Justice officials, however, said they rely on individual Justice agencies to determine when it would be appropriate to mark and publicize their activities. Justice officials said they have not issued guidance on assistance marking and publicity and added that most of the agency’s foreign assistance is not marked because of its sensitive nature. Some Justice officials said that they follow embassy guidance on when to mark and publicize the agency’s foreign assistance activities. For example, Justice program managers in Indonesia and Serbia told us they had received no guidance from Justice headquarters on marking and publicizing agency activities, and the program manager in Indonesia said he follows embassy guidance in determining what to mark and how to do so.

To increase awareness of U.S. assistance abroad, the key agencies that we reviewed used various methods to mark and to publicize some of their activities and exercised flexibility in deciding when it was appropriate to do so. These agencies used different methods of marking, or visibly acknowledging, their assistance, including applying graphic identities or logos to such things as publications and project signage. In addition, agencies generally used embassy public affairs offices for publicizing, or disseminating information about, the source of their assistance and, in some cases, augmented these efforts with their own publicity methods. USAID has established the most detailed processes for uniformly marking its assistance activities; other key agencies either mark their assistance activities in some way or provide reasons for not marking some assistance. USAID has established a universal brand that conveys that the assistance is from USAID and the American people. Other agencies use multiple logos and, in some cases, logos that do not convey that the agency is a U.S. entity or that the United States is the source of the assistance.

After the 9/11 terrorist attacks, agency officials recognized the connection between national security and the goodwill toward the United States that could be created if more accurate information about U.S. foreign assistance were widely known, and they determined that they should portray more complete and accurate information about USAID foreign assistance. To help focus its image abroad, USAID developed its new brand by updating a former USAID logo and combining it with the agency brand name and a tagline, “From the American People.” Although USAID first began marking assistance over four decades ago, agency officials acknowledged that it has not always systematically or effectively marked its foreign assistance. USAID had existing standards that specified that its foreign assistance activities were to be marked, but these standards were not consistently enforced, and at times U.S. foreign assistance was marked with implementers’ logos and program names instead of the agency logo. Agency officials told us that it was often difficult for people to know that the foreign assistance they received was coming from the United States. USAID officials said they viewed the multiple brands used by USAID implementers as potentially confusing to recipients.
However, in the past, some USAID staff believed that spending money on marking foreign assistance could take funds away from other foreign assistance activities and were therefore reluctant to incur these costs. Also, USAID staff and implementers were concerned that communication about foreign assistance could potentially draw unwanted attention to the projects and make staff vulnerable. Figure 1 illustrates changes in USAID’s brand over time, and figure 2 illustrates the use of USAID’s current brand.

While other departments and agencies also mark the foreign assistance that they provide, these efforts vary. In some cases, the markings used do not convey that the donor is a U.S. entity or that the United States is the source of the foreign assistance. State gives discretion to its department program managers and ambassadors to determine when and how it is appropriate to mark and publicize U.S. foreign assistance. Marking decisions are made at each U.S. embassy to account for the sensitive nature of the foreign assistance and the local conditions in country. State officials told us that, because State’s foreign assistance addresses a wide range of issues—such as narcotics control, international law enforcement, terrorism, weapons proliferation, non-U.N. peacekeeping operations, refugee relief, the Global AIDS Initiative, and economic support—they did not see any benefit in using a single visual image or mark. Therefore, embassies have used a number of symbols to mark their foreign assistance, including program logos, a bureau seal or unit name, the Department of State seal, or an embassy logo.

State manages MEPI and has agreements with its project implementers on how MEPI assistance, which can include publications, products, and services, is to be acknowledged. State generally leaves most decisions on when to use the program logo to its implementing organizations but specifies that, if used with logos of other cofunding organizations, the MEPI logo should not be smaller than the others. Additionally, State has developed more than one version of the MEPI logo, one of which does not include the name of either the United States or the Department of State (see fig. 3). The lack of clear marking requirements has at times created confusion among project implementers regarding the appropriate use of the MEPI logo. For example, in one instance a project implementer copied the logo without the U.S. tagline—“U.S.-Middle East Partnership Initiative”—from the MEPI Web site and used it on promotional materials, when the logo with the tagline would have been more appropriate, according to MEPI officials. In addition, a small portion of MEPI projects are implemented by USAID, and these projects follow USAID branding policy, according to an agreement between State and USAID.

In other State marking efforts that clearly identified the U.S. government as the source of foreign assistance, there were differences in appearance from one mark to another. For example:

At a Peruvian police training academy that prepared recruits to support narcotics eradication teams, a computer room provided by State’s Narcotics Affairs Section was marked with the unit’s initials and the U.S. and Peruvian flags (see fig. 4).

In Montenegro, a U.S. foreign assistance site was marked with a sign that included the Department of State emblem and the emblem of Serbia-Montenegro with a description of the project in English and the local language.
In Serbia, State foreign assistance was marked with an embassy-developed logo in which the U.S. and Serbian flags were joined to form a bridge (see fig. 5).

Other agencies generally determine how to mark their foreign assistance on a program-by-program basis. For example:

USDA specifies marking requirements in its programs’ grants and cooperative agreements. USDA’s food aid agreements require that the U.S. government be identified as the source of the foreign assistance, while USDA grants and cooperative agreements that provide technical foreign assistance specify that printed materials include an acknowledgment that the United States is the source of the foreign assistance (see fig. 6). For Title II food programs managed by USAID, the USAID mark is used.

DOD marks its humanitarian foreign assistance products and sites. For example, DOD’s humanitarian daily ration packages were marked with a U.S. flag and a statement that the food gift was from the people of the United States. In South Africa, a sign for a DOD humanitarian foreign assistance project was marked with the U.S. and South African flags (see figs. 7 and 8).

HHS’ health projects are generally marked with the HHS logo and those of other HHS units involved in implementing the foreign assistance. For example, an HHS-developed book—which was written for use in Afghanistan and provided information on HIV/AIDS—used U.S. and Afghan flags to mark the material. It also included a recorded message in two local languages stating that the book was being provided by U.S. taxpayers (see fig. 9).

The agencies we reviewed stated that when making decisions on whether or how to mark foreign assistance, they exercise flexibility to allow for variations in the nature of foreign assistance, risks to implementers, or other special circumstances that foreign assistance activities may entail. Some of these activities are more readily marked than others. Moreover, circumstances may occur when U.S. foreign assistance marking may need to be modified or withheld due to safety, political, or other concerns, such as concerns associated with advising high-level government officials or providing foreign assistance in volatile issue areas such as narcotics control. Also, at certain times, such as before elections, marking of foreign assistance activities may be suspended to remove any association of U.S. foreign assistance with certain issues—such as the connection between funding a health clinic and the issue of reproductive health. In other cases, marking may be withheld to ensure that the local government’s ownership of the programs is not called into question.

USAID and OTA have established processes for determining when to modify their marking requirements to allow for differences in the nature of foreign assistance projects and special circumstances that may be related to foreign assistance implementation. USAID’s marking regulations identify a number of conditions under which the agreement officer can consider approving exceptions to the marking requirements. For example, in Serbia, in order not to compromise the perceived neutrality of program activities and diminish the credibility of materials produced during the course of the project, USAID approved exceptions to the marking requirements for certain activities associated with a civil society project in public policy advocacy and reform.
USAID regulations also allow for the possibility that political, safety, or security conditions could warrant a request to the mission director, or the most senior USAID officer at the mission, for a full or partial waiver of the marking requirements. For example, in Indonesia, the mission director approved a waiver of the marking requirements for a project designed to demonstrate democracy’s compatibility with Islam because of threats from religious fundamentalists to the safety of the individuals involved in the project.

In December 2006, OTA formalized its guidance on determining when marking requirements for a particular project should be modified or suspended. While this guidance states that much of OTA’s work, which includes oral advice or technical assistance provided to foreign governments and central banks, is not marked, its rules for marking any commodities, public communications, or training courses provided by OTA may be waived in writing by the OTA Director or designee for conditions that include safety or security concerns, adverse political impact, and potential compromise of the intrinsic independence of a program or of materials such as public service announcements.

The U.S. ambassador, as chief of mission, has authority over all U.S. government activities in a foreign country, and the embassy public affairs office publicizes U.S. foreign assistance activities through press releases, Web sites, and speeches by U.S. officials. To enhance publicity of its foreign assistance programs, USAID has also, as mentioned earlier, established a network of communications specialists to increase awareness of these programs in the host country. At the time of our field visits, the public affairs officers and USAID communications specialists were still defining their roles in publicizing U.S. foreign assistance. For example, the ambassador in Liberia and the public affairs officer in Indonesia expressed the opinion that all U.S. foreign assistance should be publicized by the embassy public affairs sections and did not see the need for separate USAID communications specialists.

Following are some examples of foreign assistance publicity efforts conducted by embassies in the countries we visited.

In Indonesia, in fiscal year 2003, the public affairs office developed a program to enhance media coverage of U.S. assistance and publicized 11 assistance projects. In February 2006, the embassy issued a press release on the distribution of books and school supplies funded by the United States to Indonesian school children. The distribution, done in cooperation with two leading Islamic organizations, supported the mutual goal of improving education and highlighted shared values between the two countries.

In Liberia, in June 2006, the embassy issued a press release on the launching of a USAID-funded radio teacher training program.

In Peru, in June 2006, the public affairs office issued a press release on joint U.S.-Peruvian military exercises, which included DOD humanitarian foreign assistance to construct health clinics, done in conjunction with the exercises. These efforts were publicized to dispel citizens’ anxiety over U.S. military exercises in that country.
However, because of the sensitivity of some other activities in Peru, according to State officials, it is embassy policy to decide on a case-by-case basis, in close consultation with the host government, the appropriate type and extent of publicity to give counter-narcotics foreign assistance activities done in partnership with the host government.

In Serbia, the embassy public affairs office has issued press releases on U.S. foreign assistance provided by USAID, State, USDA, DOD, Justice, and other agencies. For example, in April 2006, the embassy issued a press release on a Justice-implemented program to support specialized organized crime and war crimes institutions.

In South Africa, the public affairs office has issued press releases on U.S. foreign assistance provided by USAID, State, HHS, MCC, and other agencies. For example, in January 2006, the embassy issued a press release on an HHS-implemented HIV vaccine research initiative.

In 2004, USAID established and trained a network of development outreach and communications specialists to enhance the skills of officers who handle public outreach and media and to improve coordination among USAID staff, foreign assistance implementing partners, and the embassy public affairs sections. An assessment of public diplomacy in the Muslim world, issued in 2003 by the Advisory Group on Public Diplomacy for the Arab and Muslim World, concluded that too few people knew the extent of USAID’s activities and recommended closer integration of the public diplomacy activities of agencies that administer foreign assistance. The communications specialists are responsible for publicizing USAID foreign assistance (1) by developing public outreach and media materials and strategies and (2) by providing general communications support through writing, media relations, Web site development, and review of foreign assistance proposals. These specialists also work with public relations staff hired by foreign assistance implementing organizations to support them in addressing community relations issues and publicizing their projects. USAID has now placed these specialists at most missions; a few large missions have been assigned more than one communications specialist, while at a few small missions, program officers have been asked to perform these tasks. The communications specialists’ resources vary based on individual USAID missions’ decisions on how to fund their work and on whether USAID headquarters has provided additional funds for communication pilot activities.

Following are examples of initiatives communications specialists have carried out.

A pilot communication campaign project in Indonesia, which was funded by USAID headquarters, involved communications officers overseeing the development and production of a radio, TV, and print advertisement campaign that focused on health care, education, and economic growth partnerships between the American and Indonesian people. The purpose of this and other communication campaign pilots was to identify effective practices in foreign assistance publicity.

In Peru, communications specialists worked with implementing organizations to develop and distribute—for eventual broadcast on regional television stations—a video of a major U.S. alternative development foreign assistance project, which involved building a road in northern Peru to provide farmers with greater access to markets.
On another project, a communications officer was contacted by television producers who were preparing a video about an ecological project that had received USAID foreign assistance funding. At the communications officer’s suggestion, the producers interviewed the USAID mission director to highlight how USAID supported the project. The final film was shown on television.

In Serbia, two newly hired communications specialists redesigned a Web site and subsequently developed questions on public awareness of USAID’s foreign assistance activities that were incorporated into the embassy’s public opinion poll.

According to mission officials in South Africa, the outreach efforts of the communications specialist there have resulted in an improved perception among the local population of USAID/South Africa programs, which were previously hampered by negative comments made by high-level South African government officials in the late 1990s. In addition, the communications specialist conducted five training workshops, primarily for PEPFAR partners, on how to write stories about successful projects. The workshops resulted in more than 40 stories submitted by implementing partners, which were posted on various U.S. government Web sites and in publications. This effort was also endorsed by the embassy public affairs section.

We identified some challenges to marking and publicizing U.S. foreign assistance that may result in missed opportunities to increase public awareness of U.S. foreign assistance. First, little reliable work has been done to assess the impact of marking and publicity efforts on foreign citizens’ awareness of U.S. assistance. Second, although the newly appointed DFA has begun to develop governmentwide guidance for marking and publicizing all U.S. foreign assistance, it is unclear to what extent this guidance will be implemented by agencies whose foreign assistance programs are not under State’s direct authority.

State conducts some research on public perceptions of the United States and its foreign assistance activities. State’s Bureau of Intelligence and Research conducts approximately 120 surveys per year in about 80 countries, according to a State official. However, these surveys focus on tracking trends in the foreign public’s perception of the United States to serve U.S. public diplomacy efforts and do not assess public awareness of U.S. foreign assistance activities or the effectiveness of publicity activities. Some individual embassies perform surveys of public attitudes and awareness relating to U.S. foreign assistance activities. For example, the surveys commissioned by the embassy in Serbia and Montenegro attempt to measure public awareness of foreign assistance programs in addition to measuring public perception of the United States. However, the surveys do not attempt to link any foreign assistance programs to the level of awareness, but instead track changes in the level of awareness over a given period of time.

USAID also conducts some research. The agency requires that its communications specialists develop a communications strategy that includes methods to measure impact, and USAID’s communications manual encourages communications specialists to monitor local media coverage and to obtain and analyze locally conducted polls as a means to measure results.
The agency has contracted with polling firms to conduct eight public opinion surveys in various locations overseas—including one survey in Egypt, two in Indonesia, one in Jordan (along with a focus group), one in Colombia, and three in the West Bank and Gaza. According to a USAID official, these surveys were designed to test different methods for conducting broad-based public affairs campaigns. The surveys included questions to assess (1) the extent of awareness of USAID and U.S. foreign assistance; (2) attitudes toward USAID and U.S. foreign assistance among recipients of that foreign assistance; and (3) which communication sources, ranging from billboards and magazines to television and the Internet, may be most effective in reaching target audiences. Although each of the USAID surveys we reviewed provides information about the extent of awareness of USAID and U.S. assistance, the surveys in Colombia, Egypt, Jordan, and the West Bank and Gaza were not designed to compare pre- and post-campaign levels of awareness. A USAID official agreed that pre- and post-branding measurement of public opinion was important in order to measure the impact of USAID’s branding activities, know which branding activities were most effective, and use the lessons learned to improve USAID’s branding activities.

Recently, USAID has begun to provide some guidance to communications specialists responsible for managing research programs. USAID hired a contractor to train communications specialists on public opinion polling. The training instructs communications specialists on issues such as the importance and benefits of polling, types of polling, the most effective ways to deliver messages, principles of sampling in polling, and how to hire a qualified agency to conduct the polls. Also, USAID officials said they are developing a manual to provide guidance on communications research instruments, primarily focused on polling. The manual will include key criteria for evaluating the quality of the research instruments and a standard set of questions to include in research instruments.

Ad Council executives with whom we met emphasized that quantitative research, such as surveys, to measure the results of their efforts is a key practice they use in their public service campaigns. They also conduct pre- and post-tracking studies to benchmark attitudes and behaviors. In addition, they examine best practices, including areas where a practice has worked well, to learn how to emulate them, and they examine cases where their efforts have yielded poor results and implement policies that could alleviate the situation. See appendix II for additional key practices identified by the Ad Council executives.

While some agencies have established policies, regulations, and guidelines on marking and publicizing U.S. foreign assistance, we found that USAID missions and all federal agencies and presidential initiatives providing assistance overseas have not received clear and consistent direction on marking and publicizing U.S. foreign assistance. During our field visits to five countries between May and August 2006, we found that three of the five embassies lacked specific guidance that addresses assistance publicity. Embassy Mission Performance Plans are the means by which an embassy aligns its plans, programs, and resources with the U.S. government’s international affairs strategy, including publicizing foreign assistance.
Only one Mission Performance Plan—for Serbia and Montenegro—listed foreign assistance publicity as an embassy priority and established that the embassy would increase its outreach activities and aggressively advertise U.S. foreign assistance by (1) improving media coverage, (2) coordinating public diplomacy activities at the mission to improve synergy and publicity of foreign assistance programs, and (3) using polling and focus group information to help direct these efforts. The embassy was also planning to expand exchange programs that would bring individuals from Serbia and Montenegro to the United States. The ambassador said that he became aware that U.S. foreign assistance was not widely known in Serbia and Montenegro after he arrived at the mission and saw that implementing partners often used project logos that did not clearly communicate that the foreign assistance was from the United States. He identified a need to more clearly portray U.S. foreign assistance and made it a priority for the embassy. In addition, the Mission Performance Plan for Liberia called for publicizing U.S. efforts to rebuild security services and promote respect for human rights.

In the five countries that we visited, we also found that assistance is publicized by public affairs officers on an ad hoc basis, and, as a result, embassies may miss opportunities to publicize their foreign assistance activities. For example, in Indonesia, the USDA attaché told us that an exchange program that brings agricultural specialists to the United States for training has not been publicized by the embassy because the public affairs officer was not aware of it. Also in Indonesia, the public affairs officer almost missed a publicity opportunity when the officer was initially opposed to issuing a press release on an event to promote a teacher-training program that was attended by representatives of an NGO and 15 Indonesian institutions, because communicating about the program was not an embassy priority. The public affairs officer later reconsidered and issued a press release.

Moreover, agencies at embassies may receive conflicting guidance on marking their assistance activities when an agency’s headquarters position on marking differs from an embassy’s position. After USAID headquarters developed its logo in 2004, the Serbia and Montenegro embassy developed a logo (featuring the American and Serbian and Montenegrin flags) and encouraged all agencies to use it. Most agencies used the logo to mark and publicize their foreign assistance activities, and the logo was also used on the embassy Web site. Subsequently, the USAID mission developed and used a logo that combined the embassy logo and USAID’s logo. Although the USAID mission’s logo was different from the embassy logo, the ambassador agreed to the compromise, and USAID used that logo to mark and publicize its assistance activities. However, in June 2006, USAID headquarters told the mission that this new logo violated USAID standards and required the mission to discard that logo and use USAID’s standard logo.

The DFA has acknowledged that the lack of governmentwide guidance on marking and publicizing foreign assistance activities limits agencies’ ability to make recipients aware of the extent of U.S. assistance. In July 2006, the DFA requested that his office and the Bureau for Public Diplomacy and Public Affairs work together to ensure that U.S. embassies, USAID missions, and all government agencies receive clear and consistent guidance on marking and publicizing U.S.
assistance. The DFA also recommended that all foreign assistance be unified under one agency-neutral brand that would ensure that the assistance is recognized and associated with the United States. The DFA and the Under Secretary of State for Public Diplomacy are currently developing a proposal to provide guidance to all federal agencies in 2007.

Despite these efforts to develop governmentwide guidance for marking and publicizing all U.S. foreign assistance, it is unclear to what extent this guidance will be implemented by agencies whose foreign assistance programs are not under the DFA’s direct authority. According to DFA officials, the DFA has budget authority over USAID, most State foreign assistance activities, and activities of agencies funded by State or USAID. The DFA will also have authority to coordinate some foreign assistance activities managed by other agencies because, according to DFA officials, any activities funded by USAID or State that are implemented by other agencies will fall under the authority of the DFA. For example, the DFA will have the authority to coordinate some of the technical foreign assistance and training programs administered by the Department of Justice that are funded by State. However, according to DFA officials, the director’s office will not have authority over about 20 percent of all U.S. foreign assistance. This includes some of State’s programs, such as State’s Office of the U.S. Global AIDS Coordinator, which is funded separately from the Department of State budget—though DFA officials told us that the DFA’s office has reached an oral agreement with the AIDS Coordinator to coordinate their activities. In addition, the DFA has no authority over, for example, DOD, HHS, USDA, Treasury, and MCC activities that are funded by sources other than State or USAID.

Some key U.S. agencies providing foreign assistance have established policies, regulations, and guidelines on marking and publicizing U.S. foreign assistance, and some have used varied methods to implement these requirements. Despite these efforts, the United States lacks reliable information to assess the impact of marking and publicity on increasing awareness of U.S. assistance. According to the U.S. public service awareness campaign executives with whom we met, quantitative research that includes pre- and post-tracking studies, as well as drawing on lessons learned about which types of approaches work more effectively than others, are key practices that they use in measuring the impact of their awareness campaigns. Although State’s public opinion polls measure general public opinion trends, they do not specifically provide information on the impact of the U.S. government’s overall efforts to increase public awareness of U.S. foreign assistance activities. USAID has completed only a limited number of surveys to measure public awareness of U.S. assistance, including a public opinion survey of U.S. post-tsunami efforts in Indonesia. According to USAID officials and USAID surveys, marking and publicizing the source of U.S. foreign assistance following the December 2004 tsunami likely contributed to increasing favorable public opinion about the United States in Indonesia. USAID has begun to develop guidance on measuring the effectiveness of its publicity efforts. In addition, the DFA acknowledges that because there is no governmentwide guidance on marking and publicizing assistance, there may have been missed opportunities to increase recipient awareness of the extent of U.S.
foreign assistance. To address this issue, the DFA plans to establish marking and publicizing guidance in 2007 for all U.S. agencies providing assistance abroad. However, obtaining the cooperation of those agencies implementing foreign assistance programs not under the DFA’s direct authority is critical to a successful U.S. governmentwide marking and publicizing approach and remains a challenge.

To help the United States ensure that recipients of its foreign assistance are aware that this assistance is provided by the United States and its taxpayers, we are making two recommendations. First, to enhance U.S. marking and publicity efforts and to improve the information used to measure the impact of U.S. marking and publicizing programs, we recommend that the Secretary of State, in consultation with other U.S. executive agencies, develop a strategy, appropriately utilizing techniques such as surveys and focus groups, to better assess the impact of U.S. marking and publicity programs and activities on public awareness. Second, to facilitate State’s effort to implement its planned governmentwide guidance for marking and publicizing all U.S. foreign assistance programs and activities, we recommend that the Secretary of State, in consultation with other U.S. executive agencies, establish interagency agreements for marking and publicizing all U.S. foreign assistance.

We provided a draft of this report to USAID, State, Agriculture, DOD, HHS, Justice, the Treasury, and MCC. We obtained written comments from State (see app. V). State concurred with our recommendations and indicated that a Policy Coordination Committee, formed by the Under Secretary of State for Public Diplomacy under the National Security Council, plans to develop a governmentwide Strategic Communications Plan that will address assessment of marking and publicity programs and will develop governmentwide marking and publicity guidance. We also received technical comments on this draft from USAID, State, DOD, and MCC, which we incorporated where appropriate.

We are sending copies of this report to interested congressional committees; USAID; the Departments of State, Agriculture, Defense, Health and Human Services, Justice, and the Treasury; and the Millennium Challenge Corporation. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

To determine the regulations and policies that agencies have established to mark and publicize foreign assistance, we analyzed legislation establishing the statutory basis for marking and publicizing foreign assistance, including legislation providing funding for foreign assistance activities and organic legislation establishing the various foreign assistance agencies. We reviewed major foreign assistance legislation, including the Foreign Assistance Act of 1961 and Public Law 480.
We also reviewed the Intelligence Reform Act of 2004, which assigns the Department of State (State) a coordination role in publicizing foreign assistance, as well as legislation that authorized foreign assistance programs separate from the Foreign Assistance Act, such as the Millennium Challenge Act of 2003, which established the Millennium Challenge Corporation (MCC). We discussed these laws with the officials at the United States Agency for International Development (USAID), State, the Departments of Agriculture, Defense (DOD), Health and Human Services (HHS), Justice, and the Treasury, as well as at MCC, who are responsible for implementing them as they apply to marking and publicizing foreign assistance activities. In addition, we reviewed regulations these agencies had established to implement legislative marking requirements, related agency policies, and other relevant documents. We also discussed the agencies’ regulations and policies with cognizant officials at each agency.

To determine how USAID, State, and other agencies are marking and publicizing their foreign assistance activities, we discussed their activities with cognizant officials at their headquarters in Washington, D.C. We also met with representatives of nongovernmental organizations (NGOs) and contractors in the Washington, D.C., area—such as Food For The Hungry, the National Democratic Institute, the American Bar Association/Central European and Eurasian Law Initiative, Chemonics, and Development Alternatives, Inc.—who implement many of the agencies’ foreign assistance activities. In addition, we traveled to U.S. embassies and USAID missions in Indonesia, Peru, Serbia, Montenegro, and South Africa. In those countries, we (1) met with agency officials at the embassies and missions and with representatives of NGOs and contractors who implement foreign assistance activities; (2) collected and analyzed agency documents regarding their foreign assistance marking and publicizing efforts; and (3) visited several sites in each country to observe marking and publicizing activities of U.S. agencies and their implementing organizations. In particular, we visited various recipients of U.S. foreign assistance in Belgrade, Serbia; Podgorica, Montenegro; and Pretoria, South Africa. We also traveled to Banda Aceh, Indonesia, and several locations in Serbia, Montenegro, and Peru to observe marking and publicizing activities and discuss those activities with U.S. government officials, representatives of implementing organizations, and recipients of foreign assistance. We also traveled to Liberia and met with embassy and USAID mission officials.

To determine the challenges the United States faces in marking and publicizing foreign assistance activities, we met with cognizant agency officials in Washington, D.C., and at the U.S. embassies and USAID missions in Indonesia, Peru, Serbia, Montenegro, and South Africa. We also analyzed agency documents, including Mission Performance Plans and other policy and guidance documents at headquarters, U.S. embassies, and USAID missions in Indonesia, Peru, Serbia, Montenegro, and South Africa. In addition, we analyzed surveys and polls conducted for USAID and State, communications manuals, and training materials used as part of USAID’s and State’s efforts to determine foreign audiences’ opinions about the United States and their awareness of U.S. foreign assistance activities. Further, we discussed those surveys and polls with cognizant agency officials at USAID and State.
To determine foreign government organizations’ marking and publicity practices, we held discussions with representatives of international foreign assistance organizations, including the Australian Government’s Overseas Aid Program (AusAID), the Canadian International Development Agency (CIDA), the Department for International Development (DFID) of the United Kingdom, the European Union (EU), whose assistance is implemented through the European Agency for Reconstruction, the German Agency for Technical Cooperation (GTZ), the Icelandic International Development Agency (ICEIDA), and the Japan International Cooperation Agency (JICA). We also reviewed relevant documents provided by these organizations on their marking and publicity practices, including guidelines on marking and communications.

We included in the scope of this review foreign assistance programs administered or implemented by USAID, State, Agriculture, DOD, HHS, Justice, the Treasury, and MCC. Among these entities, we included programs in the following categories: bilateral development foreign assistance; humanitarian assistance; and economic assistance in support of U.S. political and security goals, with the exception of payments to support countries involved in the Middle East Peace Process, including countries of importance in the war on terrorism, and programs that address issues of weapons proliferation. We excluded from the scope of this review multilateral economic contributions or payments that are combined with funds from other donor countries to finance multilateral development projects of international organizations, which include the United Nations, the World Bank, and other multilateral development banks. We also excluded military foreign assistance provided to help selected countries acquire U.S. military equipment and training. We conducted our work from December 2005 through January 2007 in accordance with generally accepted government auditing standards.

The Ad Council executives with whom we met identified key practices that they use to guide their public service campaigns. These practices relate to areas that we assessed as part of our examination of U.S. government efforts to mark and publicize foreign assistance. Specifically, the key practices identified include the following:

- Determine what is appropriate to be marked. It is important to maintain flexibility and conduct research to ensure that efforts do more good than harm. Be mindful of potential unintended effects of branding.
- Maintain a simple message. Conduct research regarding the sensitivity of wording and language. Articulate the universal truth or message differently, as appropriate for specific demographics and international backdrops.
- Separate programs from political issues to prevent programs and policies from being linked together. Use targeting or tailoring to help create a connection with the audience.
- Examine best practices, identify where the practices have worked well, and learn how to emulate them. Also examine cases where branding has been ineffective and implement policies that could alleviate the situation.
- Conduct quantitative research, such as surveys, to measure the results of efforts. Conduct pre- and post-tracking studies to benchmark attitudes and behaviors.
While other governments' donor organizations and the German Agency for Technical Cooperation (GTZ) generally mark and publicize their assistance, none of these organizations has undertaken a campaign to develop a mark for its foreign assistance abroad on the scale that USAID has to date. Like the United States, other governments' organizations generally use marking to gain recipient recognition for their contributions. We found that the six donors and GTZ generally had some form of policies and procedures for implementing marking. However, unlike most U.S. foreign assistance publicizing efforts, other government organizations' efforts to communicate about foreign assistance were targeted more toward their own constituents than toward host country citizens. These organizations generally do not formally monitor or measure their marking or publicity efforts. The other governments' donor organizations and GTZ that we studied generally use marking to gain recognition or maintain domestic support for their contributions. The donor organizations and GTZ representatives that we contacted identified a number of practices for marking, including adapting such efforts for each host country and ensuring high-level stakeholder cooperation to facilitate implementation of these marking efforts. Additionally, in some countries, to ensure marking consistency, governments require all organizations, including bilateral donors, to use the national symbol along with the organization's logo. For example, the Australian Government's Overseas Aid Program (AusAID) and the Canadian International Development Agency (CIDA) marks include the national symbols of Australia and Canada, respectively, and assistance provided by the European Union (EU) is generally marked with the EU logo. Other organizations, including the Department for International Development (DFID) of the United Kingdom, GTZ, the Icelandic International Development Agency (ICEIDA), and the Japan International Cooperation Agency (JICA), have their own unique organization logos. In addition, some donors are more flexible than others with their marking requirements. For example, according to representatives of ICEIDA, its marking procedures are not mandatory, but implementing partners often use its marks on publications. Conversely, AusAID requires its partners to mark all of its assistance. Figure 10 illustrates each of the selected organizations' marks. Each of the six donors and GTZ that we reviewed has some form of procedures or guidance for implementing its marking efforts. The following provides a brief description of each organization's procedures or guidance:

- AusAID: AusAID's standard mark is used on its foreign assistance. The organization's written guidelines apply to all contractors and implementing partners to ensure marking consistency. The Australian government has also developed a unique mark and design manual for its Indonesia program.
- CIDA: CIDA has a corporate identity logo, and grant and contribution agreements require recipients to recognize CIDA's contribution with acknowledgments or use of CIDA's logo in their publications, advertising, and promotional products.
- DFID: DFID has a standard mark, which is guided by its Identity Standards Manual. The mark is used mostly for project-based foreign assistance and not for budgetary support programs or activities. Humanitarian assistance is branded with the United Kingdom Emergency Aid logo. This branding applies to DFID staff and to large nonperishable items, and the logo is not used when it may detract from humanitarian operations or increase risk to staff or beneficiaries.
- EU: The European Union has Visibility Guidelines specifying how technical foreign assistance, supplies and equipment, and infrastructure projects are to be marked.
- GTZ: GTZ uses a standard logo and a tagline, "German Technical Cooperation," on its information material in partner countries. An optional slogan, "Partner for the Future. Worldwide," may be used. Although not required, the tagline is frequently used on project information, brochures, products, and presentations, and, in partner countries, the name of the country is included.
- ICEIDA: ICEIDA uses a standard logo on all of its publications. Although not required, implementing partners often include the mark on their publications.
- JICA: JICA has a logo that is to be used on publications, business cards, envelopes, and vehicles. JICA also has a slogan, "For a better tomorrow for all," which it has translated into English, French, Spanish, Portuguese, and Russian. A Corporate Identity Design Manual, produced in 2003, provides color, font, and usage guidance.

According to representatives of most of the organizations with whom we spoke, domestic constituents, not foreign audiences, are the target of their communications about foreign assistance efforts. In contrast to marking intended to ensure that governments receive recognition for their contributions, these organizations' publicizing efforts generally focus on informing the general public in their respective countries about their initiatives to enhance the reputation of the aid agencies, engage the public, create interest among civil society, and highlight success stories. According to ICEIDA, for example, it is required by law to publicize its foreign assistance efforts domestically. The organizations implement these efforts by, among other things, coordinating publicity activities between implementers' and donors' information units; constructing project Web sites; and using other communication mechanisms such as special events, press releases, conferences, publications, Web pages, and plaques. Two of the organizations require that their projects have a communications plan targeting recipient countries. For example, CIDA requires a communications plan on how to inform the public in the recipient country of its projects prior to approval. Only one of the organizations, AusAID, told us that it monitors implementation of its marking and publicity efforts domestically and internationally. AusAID monitoring is done through (1) domestically focused community awareness research and (2) a mix of qualitative and quantitative measures, including press releases, special events, correct markings or signage, and newsletter subscriptions. AusAID also attempts to determine the quality of its relationships with its partners and assesses whom the assistance is reaching and how often. Representatives from CIDA and DFID told us that they conduct public opinion surveys, but these surveys are intended to gauge public opinion about the agency or support for assistance in general, not to measure marking or publicity efforts.

The following summarizes the legislative marking and publicity requirements we reviewed and the agencies' implementing policies and regulations:

- Foreign Assistance Act requirement that "programs under this Act shall be identified appropriately overseas as 'American Aid.'"
  - HHS: Follows State policy on placing the PEPFAR logo on all materials procured by HHS; policy memorandum on the appropriate use of logos on conference material; policy on marking health projects.
  - Justice: Relies on individual component agencies to determine the appropriateness of marking or publicizing activities.
  - State: State Financial Assistance Standard Terms and Conditions, Part II, Attachment 1-MEPI.
  - Treasury: OTA Instruction 4035.1—guidance for marking certain types of assistance.
  - USAID: 22 C.F.R. Part 226, sec. 226.91—regulations prescribing marking requirements for grants and cooperative agreements; 22 C.F.R. Part 201, sec. 201.31(d)—regulations regarding marking shipping containers and commodities; AIDAR Clause 752.7999—standard clause in contracts regarding marking of foreign assistance; ADS 320—policy directives and procedures on marking USAID-funded activities; AAPD 05-11—policy directive regarding acquisition and assistance regulations and procedures.
- P.L. 480 requirement that, to the extent practicable, commodities provided under that act be clearly identified with appropriate markings in the local language as being furnished by "the people of the United States."
  - USDA: 7 C.F.R. Part 1599, sec. 1599.12(b)—regulations on labeling of commodities donated under USDA's international food education and child nutrition program.
  - USAID: 22 C.F.R. Part 226, sec. 226.91—regulations prescribing marking requirements for grants and cooperative agreements; 22 C.F.R. Part 211, sec. 211.5(h)—regulations prescribing marking and publicity requirements for USAID's Food for Peace program; 22 C.F.R. Part 201, sec. 201.31(d)—regulations regarding marking shipping containers and commodities.
- P.L. 480 requirement that foreign countries and private entities receiving P.L. 480 commodities widely publicize, "to the extent practicable," in the media that the commodities are provided "through the friendship of the American people as food for peace."
  - USDA: 7 C.F.R. Part 1599, sec. 1599.12(b)—regulations on labeling of commodities donated under USDA's international food education and child nutrition program.
  - USAID: 22 C.F.R. Part 211, sec. 211.5(h)—regulations on marking and publicity requirements for USAID's Food for Peace program.
- Intelligence Reform Act of 2004 provision that directed the Secretary of State to coordinate the public diplomacy activities of federal agencies and called for cooperation between State and USAID to ensure that information related to U.S. foreign assistance is widely disseminated.

In addition, MCC's Standards for Global Marking provide guidelines on the use and placement of the MCC logo or other appropriate logos, and DOD's Policy and Program Guidance for Overseas Humanitarian, Disaster, and Civic Aid Activities provides policy and guidance for overseas humanitarian, disaster, and civic aid activities and assistance.

Jess Ford, (202) 512-4268 or [email protected]. Zina Merritt served as Assistant Director responsible for this report, and Maria Oliver was the Analyst-in-Charge. In addition to those named above, the following individuals made significant contributions to this report: Virginia Chanley, Lauren Geetter, Ernie Jackson, and James Strus. The team benefited from the expert advice and assistance of Joe Carney, Etana Finkler, Lessie Burke-Johnson, Cynthia Taylor, and Wilda Wong.
The negative perceptions of the United States associated with U.S. foreign policy initiatives have underscored the importance of the United States presenting a complete portrayal of the benefits that many in the world derive from U.S. foreign assistance efforts. Congress has expressed concerns that the United States has frequently understated or not publicized information about its foreign assistance programs. As requested, this report (1) describes the policies, regulations, and guidelines that agencies have established to mark and publicize foreign assistance; (2) describes how State, USAID, and other agencies mark and publicize foreign assistance; and (3) identifies key challenges that agencies face in marking and publicizing foreign assistance. Most of the agencies we reviewed that are involved in foreign assistance activities have established some marking and publicity requirements in policies, regulations, or guidelines. USAID has the most detailed policies and regulations relating to marking and publicity. USAID has also established a network of communications specialists at its missions to publicize its assistance efforts and has issued communications guidelines to promote that assistance. According to State officials, its policy is to allow its program managers and ambassadors to use their discretion when determining which programs and activities to mark or publicize. USDA, DOD, HHS, Treasury, and MCC have also established some policies for marking and publicizing assistance, though these policies vary in their level of formality and detail. To increase awareness of U.S. assistance abroad, key agencies that we reviewed used various methods to mark and publicize some of their activities and exercised flexibility in deciding when it was appropriate to do so. These agencies used different methods of marking, or visibly acknowledging, their assistance. In addition, agencies generally used embassy public affairs offices for publicizing information about the source of their assistance and, in some cases, augmented these efforts with their own publicity methods. We identified some challenges to marking and publicizing U.S. foreign assistance, including the lack of (1) a strategy for assessing the impact of marking and publicity efforts on increasing awareness of U.S. foreign assistance and (2) governmentwide guidance for marking and publicizing U.S. foreign assistance. First, although some agencies conduct surveys in recipient countries that primarily capture information on public opinion of the United States, little reliable work has been done to assess the impact of U.S. assistance on foreign citizens' awareness of the source of U.S.-provided assistance. Second, while the newly appointed Director of Foreign Assistance (DFA) has begun to address the issue of developing a governmentwide policy for marking and publicizing all U.S. foreign assistance, it is unclear to what extent this policy will be implemented by agencies whose foreign assistance programs are not under the DFA's direct authority.
U.S. relations with the freely associated states (FAS), which comprise the Federated States of Micronesia (FSM), the Republic of the Marshall Islands (RMI), and the Republic of Palau, began when American forces liberated the islands near the end of World War II. In 1947, the United Nations assigned the United States administering authority over the Trust Territory of the Pacific Islands, which included what are now Palau, the FSM, and the RMI. The Department of the Interior's (Interior) Office of Insular Affairs (OIA) has primary responsibility for monitoring and coordinating all U.S. assistance to the FAS, and the Department of State is responsible for government-to-government relations. All three compacts give the United States responsibility for the defense of the FAS and provide the United States with exclusive military use rights in these countries. According to the Department of Defense, the compacts have enabled it to maintain critical access in the Asia–Pacific region. In 2014, Palau had the smallest population of the three nations, but its per capita gross domestic product (GDP) was about four times greater than the FSM's or the RMI's (see table 1). Economic growth has varied among the three nations. After adjustment for inflation, per capita GDP in the FSM was unchanged from 2004 to 2014 but grew by 11 percent in the RMI and 8 percent in Palau. The U.S. and Palau governments concluded their Compact of Free Association in 1986, and the compact entered into force on October 1, 1994. Key provisions of the Palau compact address the sovereignty of Palau, types and amounts of U.S. assistance, security and defense authorities, and periodic reviews of compact terms. (See app. I for a table summarizing the key provisions of the Palau compact.) In fiscal years 1995 through 2009, the United States provided about $574 million in compact assistance to Palau, including $70 million to establish Palau's compact trust fund and $149 million for road construction. In addition, U.S. agencies—the Department of Education, the Department of Health and Human Services (HHS), and Interior, among others—provided assistance to Palau through discretionary federal programs as authorized by U.S. legislation and with appropriations from Congress. On September 3, 2010, the governments of the United States and Palau reached an agreement to extend U.S. assistance to Palau, totaling approximately $216 million in fiscal years 2011 through 2024. The planned assistance included extending direct economic assistance to Palau, providing infrastructure project grants and contributions to an infrastructure maintenance fund, establishing a fiscal consolidation fund, and making changes to the compact trust fund. The 1986 Compact of Free Association between the United States, the FSM, and the RMI provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency, and (3) to ensure certain national security rights for all of the parties. The second goal of the compact—advancing economic development and self-sufficiency for both countries—was to be accomplished primarily through U.S. direct financial payments (to be disbursed and monitored by Interior) to the FSM and the RMI. Under the 1986 compact, U.S. assistance to the FSM and the RMI to support economic development was estimated, on the basis of Interior data, at about $2.1 billion in fiscal years 1987 through 2003. In addition, other U.S. agencies provided assistance to the FSM and RMI in the form of grants, services, technical assistance, and loans.
In 2003, the United States approved separate amended compacts with the FSM and the RMI. The amended compacts provide for direct financial assistance to the FSM and the RMI in fiscal years 2004 through 2023, decreasing in most years, with the amount of each year's decrement to be deposited in trust funds for the two nations established under the amended compacts. The amended compacts' enabling legislation authorized and appropriated funds for the compact trust funds. The trust funds are to contribute to the economic advancement and long-term budgetary self-reliance of each government by providing an annual source of revenue after fiscal year 2023. After the grants end in fiscal year 2023, trust fund proceeds are to be used for the same purposes as grant assistance, or as mutually agreed, with priorities in education and health care. (See app. II for further information about planned U.S. trust fund contributions and grants to the FSM and RMI through fiscal year 2023.)
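The grant-and-trust-fund structure just described lends itself to a short illustration. The following Python sketch is ours, not the compacts': the dollar amounts are hypothetical placeholders, and the actual schedules appear in app. II.

```python
# Minimal sketch of the amended compacts' decrement mechanism:
# each year's reduction in grant funding is deposited into the
# compact trust fund instead. Amounts below are hypothetical,
# for illustration only; see app. II for the actual schedules.

BASE_GRANT = 76.0   # hypothetical year-one grant, $ millions
DECREMENT = 0.8     # hypothetical annual step-down, $ millions

def schedule(start_year=2004, end_year=2023):
    rows = []
    for i, year in enumerate(range(start_year, end_year + 1)):
        grant = BASE_GRANT - DECREMENT * i   # grant steps down each year
        trust_contribution = DECREMENT * i   # the decrement is redirected
        rows.append((year, grant, trust_contribution))
    return rows

for year, grant, trust in schedule():
    print(f"{year}: grant ${grant:.1f}M, trust fund contribution ${trust:.1f}M")
```

Note that in this simplified model the grant and the trust fund contribution always sum to the same total, which is the point of the design: assistance shifts gradually from annual grants to trust fund capitalization.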
The amended compacts identify the additional 20 years of assistance—primarily in the form of annual sector grants and contributions to the compact trust fund for each country—as intended to assist the FSM and RMI governments in their efforts to promote the economic advancement and budgetary self-reliance of their people. The amended compacts and their subsidiary agreements, along with the countries' development plans, target the grant assistance to six sectors—education, health, public infrastructure, the environment, public sector capacity building, and private sector development—prioritizing two sectors, education and health. Interior projects that economic assistance and trust fund contributions under the amended compacts will total $2.1 billion for the FSM and $1 billion for the RMI in fiscal years 2004 through 2023. The amended compacts also provided for a joint economic management committee for the FSM and a joint management and financial accountability committee for the RMI to promote the effective use of compact funding. In practice, the committees allocate grants and attach terms and conditions to grant awards through resolutions, which the committees discuss and vote on at their meetings. OIA has responsibility for administration and oversight of the FSM and RMI compact grants. The public law implementing the amended compacts required the President to submit annual reports to Congress regarding the FSM and RMI, a reporting requirement that has been delegated to the Secretary of the Interior. Every 5 years, these annual reports are to include additional information, including findings and recommendations, pertaining to reviews that are required by law to be conducted at 5-year intervals. The compacts provide for FAS citizens to enter and reside indefinitely in the United States, including its territories, without regard to the Immigration and Nationality Act's visa and labor certification requirements. Since the compacts went into effect, thousands of migrants from the FAS have established residence in U.S. areas, particularly in Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI). In the 2003 amended compacts' enabling legislation, Congress authorized and appropriated $30 million annually for 20 years for grants to Guam, Hawaii, the CNMI, and American Samoa, which it deemed affected jurisdictions, and authorized additional appropriations. The $30 million annual appropriation is to aid in defraying costs incurred by these jurisdictions as a result of increased demand for health, educational, social, or public safety services, or for infrastructure related to such services, due to the residence of compact migrants in their jurisdictions. Congress directed Interior to divide the $30 million in compact impact grants among the affected jurisdictions in proportion to the most recent enumeration of compact migrants residing in each jurisdiction. The U.S. Bureau of the Census (Census) conducted these enumerations in 2003, 2008, and 2013. If enacted, S. 2610 would approve, provide funding for, and make modifications to the September 2010 agreement between the governments of the United States and Palau regarding their compact. S. 2610 would not greatly alter the total U.S. assistance to Palau for fiscal years 2011 through 2024 specified in the 2010 agreement. However, S. 2610 would change the schedule of assistance outlined in the agreement to reflect the lower-than-planned U.S. assistance provided in fiscal years 2011 through 2016. The annual trust fund contributions and withdrawal conditions that S. 2610 details would improve the fund's prospects for sustaining scheduled payments through fiscal year 2044. Under S. 2610, U.S. assistance to Palau would total about $216 million—approximately equal to the amount specified in the 2010 agreement—for fiscal years 2011 through 2024. However, after 2016, larger amounts of assistance would be provided under S. 2610 than the annual amounts scheduled under the 2010 agreement. Under the 2010 agreement, which has not been implemented, annual U.S. assistance to Palau would have declined over 14 years from roughly $28 million in 2011 to $2 million in 2024. The 2010 agreement includes the following: Direct economic assistance ($107.5 million). The 2010 agreement would provide direct economic assistance—budgetary support for Palau government operations and specific needs such as administration of justice and public safety, health, and education—of $13 million in 2011, declining to $2 million by 2023. The 2010 agreement also calls for the U.S. and Palau governments to establish a five-member Advisory Group to provide annual recommendations and timelines for economic, financial, and management reforms. The Advisory Group must report on Palau's progress in implementing these or other reforms prior to annual U.S.–Palau economic consultations. These consultations are to review Palau's progress in achieving reforms such as improving fiscal management, reducing the public sector workforce and salaries, reducing government subsidization of utilities, and implementing tax reform. If the U.S. government determines that Palau has not made significant progress in implementing meaningful reforms, direct assistance payments may be delayed until the U.S. government determines that Palau has made sufficient progress. Infrastructure projects ($40 million). Under the 2010 agreement, the U.S. government would provide infrastructure project grants to Palau for mutually agreed infrastructure projects—$8 million annually in 2011 through 2013, $6 million in 2014, and $5 million in both 2015 and 2016. The 2010 agreement requires Palau to provide a detailed project budget and certified scope of work for any projects receiving these funds. Infrastructure maintenance fund ($28 million).
The 2010 agreement stipulates that the United States make contributions to a fund to be used for maintenance of U.S.-financed major capital improvement projects, including the Compact Road and Airai International Airport. From 2011 through 2024, the U.S. government would contribute $2 million annually, and the Palau government would contribute $600,000 annually, to the fund. Fiscal consolidation fund ($10 million). The 2010 agreement states that the United States would provide grants of $5 million each in 2011 and 2012 to help the Palau government reduce its debts. Unless agreed to in writing by the U.S. government, these grants cannot be used to pay any entity owned or controlled by a member of the government or his or her family, or any entity from which a member of the government derives income. U.S. creditors must receive priority, and the government of Palau must report quarterly on the use of the grants until they are expended. Trust fund ($30.25 million). The 2010 agreement provides for the United States to contribute $30.25 million to the fund from 2013 through 2023. The government of Palau would reduce its previously scheduled withdrawals from the fund by $89 million. From 2024 through 2044, Palau can withdraw up to $15 million annually, as originally scheduled. Moneys from the trust fund account cannot be spent on state block grants, operations of the office of the President of Palau, the Olbiil Era Kelulau (Palau national congress), or the Palau judiciary. Palau must use $15 million of the combined total of the trust fund disbursements and direct economic assistance exclusively for education, health, and the administration of justice and public safety. If enacted, S. 2610 would increase the total annual assistance to Palau in fiscal years 2017 through 2024 over the amounts scheduled in the 2010 agreement. This increase would be in line with the lower-than-scheduled annual U.S. assistance that has been provided to Palau since 2011. Specifically, Congress has not passed legislation to approve the agreement, and Interior has provided Palau with a total of $78.88 million in direct economic assistance from annual appropriations—$13.147 million in each fiscal year from 2011 through 2016. The amount provided was approximately $67 million less than the amount outlined for those years in the 2010 agreement, and it included no contributions to the Palau trust fund. S. 2610 outlines a changed schedule for contributing the remaining approximately $137 million, with larger total contributions in fiscal years 2017 through 2024, which would amount to approximately the same total assistance specified in the 2010 agreement, $216 million. S. 2610 would make the following changes to the contribution schedule: Rescheduling U.S. contributions to Palau's trust fund, with a $20 million contribution in fiscal year 2017, $2 million annually through fiscal year 2022, and $250,000 in fiscal year 2023. Rescheduling U.S. contributions to Palau's infrastructure maintenance fund and fiscal consolidation fund, infrastructure project grants, and direct economic assistance. Figure 1 contrasts the scheduled annual assistance to Palau under the 2010 agreement with the contribution schedule under S. 2610. (See app. III for additional details on the schedule of U.S. assistance to Palau in the 2010 agreement and as modified in the provisions of S. 2610.)
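The reconciliation behind these totals is simple arithmetic; the following minimal sketch uses only figures reported above.

```python
# Reconciling the Palau assistance totals reported above ($ millions).
total_2010_agreement = 216.0     # total scheduled for FY2011-2024
provided_2011_2016 = 13.147 * 6  # direct economic assistance actually provided

remaining = total_2010_agreement - provided_2011_2016
print(f"Provided FY2011-2016: ${provided_2011_2016:.2f}M")  # ~78.88
print(f"Remaining for FY2017-2024: ${remaining:.0f}M")       # ~137
```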
S. 2610 would also place conditions on the provision of assistance to Palau. Under the bill, if Palau withdraws more than $5 million from the trust fund in fiscal year 2016 or more than $8 million in fiscal year 2017, additional assistance would be withheld until Palau reimbursed the trust fund for the amounts exceeding the $5 million for fiscal year 2016 or the $8 million for fiscal year 2017. S. 2610 would not otherwise alter the withdrawal schedule outlined in the 2010 agreement. In the 2010 agreement, Palau agreed to a maximum withdrawal of $5 million annually in fiscal years 2011 through 2013, with the maximum subsequently increasing in increments through fiscal year 2023 to $13 million. Under the 2010 agreement, Palau agreed to withdraw up to $6.75 million in fiscal year 2016; under S. 2610, Palau would be able to withdraw up to $5 million in fiscal year 2016 without having assistance withheld. Furthermore, Palau did not commit to a withdrawal schedule beyond 2023 in the 2010 agreement. However, the compact details an annual distribution goal of $15 million for 2024 through 2044 from the trust fund. The contributions to, and conditions on withdrawals from, Palau's compact trust fund that S. 2610 outlines would improve the fund's prospects for sustaining payments through fiscal year 2044. At the end of fiscal year 2015, the trust fund had a balance of nearly $184 million. With or without the contributions and conditions that S. 2610 would provide, the trust fund would be sustained through fiscal year 2044 if it maintains the 7.6 percent compounded annual rate of return it earned from inception through fiscal year 2015. However, given this historical rate of return, the account balance at the end of fiscal year 2044 would be dramatically lower without the contributions and conditions outlined in S. 2610—about $32 million—than it would be with them—about $521 million. The balances without and with these contributions equal $18 million and $292 million, respectively, in 2015 inflation-adjusted dollars. Figure 2 compares the fund balance at the historical rate of return with and without the changes outlined in S. 2610. In addition, with the changes in S. 2610, Palau's trust fund would be able to sustain scheduled payments through 2044 given varying rates of return in fiscal years 2015 through 2044. With its historical 7.6 percent annual compounded return, Palau's trust fund would sustain its annual withdrawal schedule and continue to grow beyond 2044, with a balance of $521 million at the end of fiscal year 2044. (The 2044 balance would be $292 million in 2015 inflation-adjusted dollars.) With at least a 6.3 percent annual compounded rate of return, Palau's trust fund would sustain its annual withdrawal schedule, with a balance of $245 million or more at the end of fiscal year 2044. (The 2044 balance would be $137 million in 2015 inflation-adjusted dollars.) With a 4.4 percent annual compounded return, Palau's trust fund would sustain its annual withdrawal schedule through 2044, with a balance of $0 at the end of fiscal year 2044. Figure 3 shows the projected trust fund balances with these varying assumed rates of return.
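For readers who want to trace these projections, the following minimal Python sketch reproduces the mechanics. The interim withdrawal caps for fiscal years 2018 through 2022 are not spelled out above, so the values used here interpolate between the $8 million (fiscal year 2017) and $13 million (fiscal year 2023) caps, which is an assumption on our part, and flows are applied after each year's return; the result therefore only approximates the roughly $521 million figure cited for fiscal year 2044.

```python
# Rough projection of Palau's compact trust fund under S. 2610,
# starting from the ~$184 million balance at the end of FY2015 and
# the 7.6 percent historical compounded return cited above.
# The FY2018-2022 withdrawal steps are ASSUMED (interpolated), and
# cash flows are applied after a full year's return, so the output
# only approximates the ~$521 million figure cited for FY2044.

withdrawals = {2016: 5, 2017: 8, 2018: 9, 2019: 10, 2020: 11,
               2021: 12, 2022: 12.5, 2023: 13}          # assumed interim steps
withdrawals.update({y: 15 for y in range(2024, 2045)})  # $15M/yr, FY2024-2044

contributions = {2017: 20}                                # $20M in FY2017
contributions.update({y: 2 for y in range(2018, 2023)})   # $2M/yr through FY2022
contributions[2023] = 0.25                                # $250,000 in FY2023

balance, rate = 184.0, 0.076
for year in range(2016, 2045):
    balance = balance * (1 + rate)           # one year's investment return
    balance += contributions.get(year, 0.0)  # S. 2610 contribution, if any
    balance -= withdrawals[year]             # scheduled withdrawal
print(f"Projected FY2044 balance: ~${balance:.0f} million")
```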
As we have previously reported, in implementing their amended compacts with the United States, the FSM and RMI have faced a number of critical challenges that could affect their ability to achieve the compacts' long-term development goals. Both countries have historically had limited prospects for achieving economic growth. Moreover, compact implementation by the FSM, RMI, and U.S. governments has displayed weaknesses that have affected their ability to allocate resources appropriately as well as to provide accountability for, and oversight of, the use of compact grants, which are scheduled to end in 2023. We previously reported that the FSM's and RMI's economies were largely dependent on government spending of foreign assistance, including U.S. assistance under the amended compacts. Because of the scheduled annual decrements of compact grant funding, annual grant assistance to the FSM and RMI will diminish over the funding period. In addition, neither country had made significant progress in implementing reforms needed to improve tax income or increase private sector investment opportunities. Moreover, tourism and fishing—private sector industries that both countries have identified as having growth potential—faced significant constraints, such as geographic isolation and lack of tourism infrastructure. In 2011, Interior's annual report to Congress regarding the FSM and RMI noted that the FSM faced numerous challenges to private sector economic growth and suggested that a consequence of declining U.S. grant assistance could be a decline in living standards or migration to the United States. At that time, Interior found that economic prospects for the RMI remained uncertain, although the RMI had experienced growth in fisheries and tourism. Interior expected the continuation of migration from the RMI to the United States. We reported in 2007 that uncertainty existed regarding the sustainability of the FSM's and RMI's compact trust funds as sources of revenue after the amended compacts end. We noted that the countries' compact trust fund balances in 2023 could vary widely owing to market volatility and choice of investment strategy and that, as a result, the compact trust funds might be unable to generate disbursements in some years, affecting the governments' ability to provide services after U.S. contributions to the trust funds end. More recent analyses of the FSM and RMI trust funds have highlighted the challenge of ensuring trust fund disbursements and proposed technical revisions to trust fund procedures. In 2015, the Asian Development Bank (ADB) projected that the probability of the FSM's and RMI's trust funds maintaining their value through 2050 was 22 and 49 percent, respectively. The ADB also projected significant fluctuations in FSM and RMI annual drawdowns and proposed revised trust fund withdrawal rules. Moreover, 2015 economic reviews of the FSM and RMI compacts funded by Interior have projected that both trust funds will be underfunded and that distribution shortfalls will be frequent, and they have recommended several changes to the distribution mechanism. In its September 2012 comments on the U.S. government's first 5-year review of the amended compact, the RMI government made specific recommendations to improve compact performance, including technical revisions to trust fund procedures. During the amended compacts' first 10 years, the FSM and RMI joint management and accountability committees directed the majority of compact grant assistance to the education and health sectors, which the compact agreements prioritized. As we previously reported, weaknesses in FSM, RMI, and U.S. implementation of the compacts have limited the governments' ability to ensure the effective use of grant funds. Lack of reliable performance data.
Ongoing problems with the reliability of data on grant performance in the education and health sectors have prevented both countries from demonstrating and assessing progress toward compact goals for these sectors and from using the data to set priorities and allocate resources to improve performance. Challenges to ensuring accountability for compact grant funding. The FSM’s and RMI’s single audits for fiscal years 2006 through 2011 indicated challenges to ensuring accountability of compact and noncompact U.S. funds in the FSM and RMI. For example, these governments’ single audits showed repeat findings and persistent problems in noncompliance with U.S. program requirements, such as accounting for equipment. For this hearing, we have updated our prior analysis of audit reports and have found that accountability remains a concern. For example, while the RMI met the single audit reporting deadline for fiscal years 2006 through 2010, it submitted the required reports for fiscal years 2011 through 2014 after the deadline. Moreover, the 2014 reports for both countries identified several material weaknesses, such as an inability to account properly for equipment. Limited oversight of compact grants. OIA’s oversight of grants under the amended compacts has been limited by staffing shortages. As we have previously reported, OIA officials noted that budget constraints, as well as decisions to use available funding for other hiring priorities, were among factors that prevented OIA from hiring staff that it had projected as necessary to ensure effective oversight for the amended compacts. These staffing shortages have affected OIA’s ability to ensure that compact funds are used efficiently and effectively. According to FSM and RMI officials, staffing constraints, as well as a lack of authority to enforce compact requirements, hampered oversight by the FSM and RMI offices responsible for compact implementation. The population of FAS migrants in U.S. areas has continued to grow. We have previously reported that, while the majority of compact migrants live in three affected jurisdictions—Hawaii, Guam, and the CNMI—migrants are also present in several other U.S. states. The three affected jurisdictions have reported more than $2 billion in costs associated with providing education, health, and social services to compact migrants and have called for additional funding and changes in law to address compact migrant cost impacts. Since the signing of the Compacts of Free Association, thousands of FAS citizens have migrated to U.S. areas. According to Census enumerations of migrants in three affected jurisdictions—Guam, Hawaii, and the CNMI—the total number of compact migrants in those jurisdictions increased from about 21,000, estimated in the 2003 enumeration, to about 35,000, estimated in the 2013 enumeration (see fig. 4). In 2011, Census estimated that roughly 56,000 compact migrants—nearly a quarter of all FAS citizens—were living in U.S. areas in 2005 to 2009. About 58 percent of compact migrants lived in Hawaii, Guam, and the CNMI at that time. Nine mainland U.S. states—California, Washington, Oregon, Utah, Oklahoma, Florida, Arkansas, Missouri, and Arizona— each had an estimated compact migrant population of more than 1,000. (See app. IV for further information about the estimated compact migrant populations.) Approximately 68 percent of compact migrants were from the FSM, 23 percent were from the RMI, and 9 percent were from Palau. 
In fiscal years 2004 through 2016, affected jurisdictions received approximately $409 million in compact impact grants to aid in defraying their costs due to the residence of compact migrants. During those years, Interior distributed a portion of the $30 million annual appropriation that was authorized and appropriated in the amended compacts' enabling legislation to each affected jurisdiction according to the size of its compact migrant population. Since fiscal year 2012, as authorized by the amended compacts' enabling legislation, Interior has also provided compact impact grants to affected jurisdictions from annual appropriations, which it has likewise divided according to the size of their migrant populations. Table 2 shows the compact impact grants that Guam, Hawaii, and the CNMI received in fiscal years 2004 through 2016. The affected jurisdictions have continued to report to Interior that their cost impacts from compact migrants greatly exceed the amount of the compact impact grants. In 2003 through 2014, Guam reported $825 million in costs, Hawaii reported $1.2 billion, and the CNMI reported $89 million. (Fig. 5 shows the affected jurisdictions' reported annual costs of services to compact migrants.) These affected jurisdictions reported costs for the services identified in the amended compacts' enabling legislation: educational, health, public safety, and social services. Education costs accounted for the largest share of reported expenses in all three jurisdictions, and health care costs accounted for the second largest share. Officials in Guam and Hawaii also cited compact migrants' limited eligibility for a number of federal programs, particularly Medicaid, as a key contributor to the cost of compact migration borne by the affected jurisdictions. We have previously found that the three affected jurisdictions' cost estimates contained a number of limitations with regard to accuracy, adequate documentation, and comprehensiveness. These limitations affect the reported costs' credibility and prevent a precise calculation of the total compact cost impact on the affected jurisdictions. For example, some jurisdictions did not accurately define compact migrants according to the criteria in the amended compacts' enabling legislation, account for federal funding that supplemented local expenditures, or include revenue received from compact migrants. Many local government agencies did not include capital costs in their impact reporting, which may have led to an understatement of costs. We recommended that the Secretary of the Interior disseminate guidelines to the affected jurisdictions that adequately address concepts essential to producing reliable impact estimates and that the Secretary call for their use in developing compact impact reports. In a February 2015 report to Congress on the Governors' compact impact reports, Interior noted that it had concerns about the uniformity of compact impact reports, including the use of different data gathering methods and formats by Guam and Hawaii. Interior reiterated those concerns in its January 2016 report to Congress. While Interior developed a draft of compact impact reporting guidelines in 2014, it has not disseminated them to the affected jurisdictions. In March 2016, Interior stated that OIA, in consultation with leaders from the affected jurisdictions, would develop guidelines for measuring compact impact and that the guidelines would be completed in December 2016.
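As context for the grant amounts in table 2, here is a minimal sketch of the proportional division that Interior applies to the $30 million annual appropriation. The per-jurisdiction migrant counts below are hypothetical placeholders, not the actual enumeration results.

```python
# Minimal sketch of how Interior divides the $30 million annual
# compact impact appropriation among affected jurisdictions in
# proportion to the most recent Census enumeration of compact
# migrants. The counts below are hypothetical placeholders.

APPROPRIATION = 30_000_000  # annual compact impact appropriation, dollars

enumeration = {             # hypothetical migrant counts by jurisdiction
    "Guam": 17_000,
    "Hawaii": 15_000,
    "CNMI": 3_000,
}

total = sum(enumeration.values())
for jurisdiction, migrants in enumeration.items():
    share = APPROPRIATION * migrants / total
    print(f"{jurisdiction}: ${share:,.0f} ({migrants / total:.1%} of migrants)")
```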
Since we reported on compact migration impacts in 2001, the three affected jurisdictions have continued to express concerns that they do not receive adequate compensation for the growing cost of providing government services to compact migrants. For example, in his 2015 State of the Island address, the Governor of Guam noted that compact impact reimbursement had been a topic of disagreement for decades and criticized “the federal government’s inability to live up to its promise” to help provide services to the compact migrant population. Similarly, in Hawaii’s August 2015 cost impact report to Interior, the Governor of Hawaii noted that Hawaii had consistently advocated for an increase in compact impact assistance to the affected jurisdictions and that providing for direct federal assistance in programs such as Medicaid, Temporary Assistance for Needy Families (TANF), Supplemental Nutrition Assistance Program (SNAP), and other means-tested public assistance not currently available to compact migrants would significantly reduce Hawaii’s impact costs. The Governor further suggested that the governments of the FAS be encouraged to utilize the financial support they receive directly from the United States to contract services in the United States for their citizens who choose to reside in the United States. In our 2011 report, we recommended that the Secretary of the Interior work with the U.S.–FSM and U.S.–RMI joint management committees to consider uses of sector grants that would address the concerns of FSM and Marshallese migrants and the affected jurisdictions. While Interior took initial steps to implement this recommendation and discuss compact impact at the joint management committee meetings, the discussions have not been continued. In March 2016, Interior OIA stated that the concerns of FSM and RMI migrants and affected jurisdictions will be discussed at future meetings of the joint management committees. In a January 2016 letter accompanying its Report to the Congress: 2015 Compact Impact Analysis, OIA stated that increased oversight and accountability are needed in the use of compact sector grants by the FAS—particularly for infrastructure grants for health and education—and that improving the quality of life for FAS citizens may help address the migration from the FAS to the United States. Chairwoman Murkowski, Ranking Member Cantwell, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. If you or your staff have any questions about this testimony, please contact David Gootnick, Director, International Affairs and Trade at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Emil Friberg (Assistant Director), Ashley Alley, Ming Chen, David Dayton, Brian Hackney, Julie Hirshen, Jeff Isaacs, Reid Lowe, Grace Lui, Mary Moutsos, Michael McKemey, Michael Simon, Jena Sinkfield, and Ozzy Trevino. Key provisions of the compact and its subsidiary agreements address the sovereignty of Palau, types and amounts of U.S. assistance, security and defense authorities, and periodic reviews of compact terms. Table 3 summarizes key provisions of the Palau compact and related subsidiary agreements. Figure 6 shows the annually decreasing U.S. grant funding to the Federated States of Micronesia (FSM) and Republic of the Marshall Islands (RMI) and increasing U.S. 
contributions to the FSM’s and the RMI’s compact trust funds in fiscal years 2004 through 2023. Senate Bill 2610 (S. 2610) would modify the schedule of U.S. assistance to Palau specified in the 2010 agreement between the U.S. and Palau governments, which has not been implemented. Table 4 shows the assistance schedule for fiscal years 2011 through 2024 outlined in the 2010 agreement. Table 5 shows U.S. assistance provided to Palau through discretionary appropriations in fiscal years 2011 through 2016 and the assistance schedule proposed in S. 2610. Compact migrants reside throughout U.S. states and territories. In 2011, we reported that 57.6 percent of all compact migrants lived in affected jurisdictions: 32.5 percent in Guam, 21.4 percent in Hawaii, and 3.7 percent in the Commonwealth of the Northern Mariana Islands (CNMI). According to American Community Survey data, nine mainland states had estimated compact migrant populations of more than 1,000 in 2005 through 2009 (see fig. 7). According to these estimates, the Federated States of Micronesia produced the highest number of migrants but migrants from the Republic of the Marshall Islands predominated in Arizona, Arkansas, California, and Washington.
U.S. compacts with the FSM and the RMI entered into force in 1986 and were amended in 2003. A compact with Palau entered into force in 1994. Legislation pending before the Senate would approve, provide funding for, and make modifications to a 2010 agreement between the U.S. and Palau governments regarding their compact. Under the compacts, the United States provides each country with, among other things, economic assistance—including grants and contributions to a trust fund; access to certain federal services and programs; and permission for citizens of the three countries to migrate to the United States and its territories without regard to visa and labor certification requirements. Guam, Hawaii, the Commonwealth of the Northern Mariana Islands, and American Samoa, which are designated in law as jurisdictions affected by compact migration, receive grants to aid in defraying the cost of services to migrants. This testimony examines (1) the potential impact of the proposed legislation approving the 2010 Palau agreement, (2) challenges affecting implementation of the FSM and RMI compacts, and (3) migration from the FSM, RMI, and Palau and its impacts on U.S. areas. For this statement, GAO summarized previous reports issued in 2007 through 2013 and incorporated updated information from Palau, the Department of the Interior, and affected jurisdictions. GAO is not making any new recommendations in this testimony. GAO has made recommendations in its prior reports, some of which have not yet been addressed. If enacted, Senate Bill 2610 (S. 2610) would change the schedule for U.S. assistance to the Republic of Palau and improve prospects for Palau's compact trust fund. S. 2610 would approve a 2010 agreement between the U.S. and Palau governments and provide annual assistance to Palau through 2024. Congress has not approved legislation to implement the 2010 agreement, which scheduled $216 million in U.S. assistance for fiscal years 2011 through 2024. Since 2011, the United States has provided $79 million in economic assistance to Palau through annual appropriations. However, this amount was less than anticipated under the agreement and has not included trust fund contributions. S. 2610 would modify the agreement schedule to provide the remaining $137 million in fiscal years 2017 through 2024, including a $20 million trust fund contribution in 2017 and smaller contributions in later years (see figure below).

[Figure: U.S. Assistance to Palau Provided in Fiscal Years 2011-2016 and Proposed by Senate Bill 2610 for Fiscal Years 2017-2024]

The Federated States of Micronesia (FSM) and Republic of the Marshall Islands (RMI) face challenges to achieving the compact goals of economic growth and self-sufficiency. GAO previously found that neither country has made significant progress on reforms and that compact implementation has been characterized by unreliable performance data and by accountability and oversight challenges. GAO has previously reported on the growth of migrant populations from Palau, the FSM, and RMI in U.S. areas as well as the reported impacts of these compact migrants. In Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands—areas Congress has deemed affected jurisdictions—compact migrants increased from about 21,000 in 2003 to about 35,000 in 2013. In fiscal years 2004 through 2016, the Department of the Interior provided approximately $409 million to affected jurisdictions to aid in defraying costs, such as for education and health services, attributable to compact migrants.
In contrast, affected jurisdictions estimated costs of $2.1 billion for these services in 2003 through 2014. However, GAO has noted that these estimates have limitations related to accuracy, documentation, and comprehensiveness.
The Davis-Bacon Act requires workers on federal construction projects valued in excess of $2,000 to be paid, at a minimum, the wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers in the locality where the contract is to be performed. The act covers every contract to which the United States or the District of Columbia is a party for construction, alteration, or repair of public buildings or public works. Labor's Wage and Hour Division (WHD), within Labor's Employment Standards Administration (ESA), has responsibility for administering the Davis-Bacon Act. Approximately 50 staff in the Washington, D.C., headquarters and in six regional offices are involved in the wage determination process. Two other Labor offices are sometimes involved in the administration of Davis-Bacon: Labor's Administrative Review Board hears appeals of prevailing wage determinations, and the Office of the Solicitor provides legal advice and assistance to Labor personnel relative to the act and represents WHD in Davis-Bacon wage determination cases before the Administrative Review Board. Earlier changes to the program also improved Labor's ability to administer the wage determination process. Despite these changes, however, we reported in 1994 that data verification problems still existed. In setting prevailing wages, Labor's task is to determine and issue prevailing wage rates in a wide range of job classifications in each of four types of construction (building, residential, heavy, and highway) in more than 3,000 counties or groups of counties. It also needs to update these wage determinations frequently enough that they continue to represent the prevailing wages. Labor's process for determining the wage rates is based primarily on a survey of the wages and fringe benefits paid to workers in similar job classifications on comparable construction projects in the particular area. This information is submitted voluntarily by employers and third parties. Labor encourages the submission of wage information from all employers and third parties, including employee unions and industry associations that are not directly involved with the surveyed projects. Although an individual wage survey typically covers only one kind of construction, most surveys gather information on projects in more than one county. In fiscal year 1995, Labor completed 104 survey efforts resulting in wage determinations for over 400 counties. The wage determination process consists of four basic stages: planning and scheduling surveys, conducting the surveys, clarifying and analyzing respondents' wage data, and issuing the wage determinations. In addition, any employer or interested party who wishes to contest or appeal Labor's final wage determination can do so. In the planning stage, Labor identifies the counties for which the wage determination should be conducted and determines what construction projects will be surveyed. The work of conducting the surveys and clarifying and analyzing the data is done by about 30 staff distributed among six regional offices. The survey is distributed to the participant population, which includes the general contractor for each construction project identified as comparable and within the survey's geographic area. In surveying the general contractors, Labor requests information on subcontractors to solicit their participation. Labor also surveys interested third parties, such as local unions and construction industry associations that are located or active in the survey area.
Once the survey forms are returned, the analysts review and analyze the respondents' wage data. They follow up with the employer or third parties to clarify any information that seems inaccurate or confusing. The analysts then use this information to create computer-generated recommended prevailing wages for key construction job classifications. The recommended prevailing wages are reviewed and approved by Labor's National Office in Washington, D.C. Labor publishes the final wage determinations in printed reports and on its electronic bulletin board.
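This report does not spell out the formula behind the computer-generated recommendations. As a rough illustration only, the sketch below assumes the majority-or-weighted-average definition of the prevailing rate used in Labor's regulations of this period: if one rate is paid to more than half of the surveyed workers in a classification, that rate prevails; otherwise, the rates are averaged, weighted by worker counts.

```python
# Minimal sketch of a prevailing wage computation for one job
# classification, assuming the majority-or-weighted-average rule:
# if a single rate covers a majority of surveyed workers, it
# prevails; otherwise rates are averaged, weighted by worker counts.
from collections import Counter

def prevailing_wage(observations):
    """observations: list of (hourly_rate, number_of_workers) tuples
    drawn from survey responses for one classification."""
    workers_at_rate = Counter()
    for rate, workers in observations:
        workers_at_rate[rate] += workers
    total = sum(workers_at_rate.values())

    rate, count = workers_at_rate.most_common(1)[0]
    if count > total / 2:                      # majority rule
        return rate
    weighted = sum(r * w for r, w in workers_at_rate.items()) / total
    return round(weighted, 2)                  # weighted-average fallback

# Example: no single rate covers a majority of the 30 workers surveyed,
# so the weighted average applies.
print(prevailing_wage([(18.50, 12), (17.00, 10), (20.25, 8)]))  # -> 18.47
```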
The opportunity to appeal a final wage determination is available to any interested party at any time after the determination is issued. For example, appeals could come from contractors, contractor associations, construction workers, labor unions, or federal, state, or local agencies. Appeals may take the form of informal inquiries resolved at the regional office level or formal requests for reconsideration that are reviewed at the regional office or the National Office and may be appealed to the Administrative Review Board for adjudication. Labor's wage determination process contains weaknesses that could permit the use of fraudulent or inaccurate data for setting prevailing wage rates. These weaknesses include limitations in the degree to which Labor verifies the accuracy of the survey data it receives, limited computer capability to review wage data before calculating prevailing wage rates, and an appeals process that may not be publicized well enough to be accessible to all interested parties. Wage determinations based on erroneous data could result in wages and fringe benefits paid to workers that are higher or lower than the actual prevailing rates. Labor's regional staff rely primarily on telephone responses from employers or third parties to verify the information received on Labor's WD-10 wage reporting forms. Regional office staff told us that most of the verification—clarifications concerning accuracy, appropriateness, or inclusion—was done by telephone. Labor's procedures also do not require, and Labor staff rarely request, supporting documentation—for example, payroll records—to supplement the information on the forms submitted by employers. Labor officials and staff told us that if an employer insists that the wages reported are accurate, the wage analyst generally accepts that statement. It is because of resource constraints, according to Labor headquarters officials, that verification is limited to telephone contacts without on-site inspections or reviews of employer payroll records to verify wage survey data. In recent years, Labor has reduced the number of staff allocated to Davis-Bacon wage-setting activities. For example, the number of staff in Labor's regional offices assigned to the Davis-Bacon wage determination process—who have primary responsibility for the wage survey process—decreased from a total of 36 staff in fiscal year 1992 to 27 staff in fiscal year 1995. Labor officials in one region also told us that staff had received only two work-related training courses in the last 6 years. Labor's regional staff told us that the staffing decline has challenged their ability to collect and review wage survey data for accuracy and consistency. Labor's administration of the Davis-Bacon wage determination process is also hampered by limited computer capabilities. Labor officials reported a lack of both computer software and hardware that could assist wage analysts in their reviews. Instead, they said that analysts must depend on past experience and eyeballing the wage data for accuracy and consistency. For example, Labor offices do not have computer software that could detect grossly inaccurate data reported in Labor's surveys. Regional staff reported only one computer edit feature in the current system that could eliminate duplicate entry of data received in the wage surveys. As a result, several review functions that could be performed by computers are conducted through visual reviews by one or more wage analysts or supervisory wage analysts in Labor's regional offices. Because of these limited computer capabilities, Labor staff told us that they are unable to store historical data on prior wage determinations that would allow wage analysts to compare current with prior recommendations for wage determinations in a given locality. These limitations could be significant given the large number of survey forms received and the frequency of errors on the WD-10 reporting forms. In fiscal year 1995, Labor received wage data on about 75,000 WD-10 wage reporting forms; these were from over 37,000 employers and third parties, some of whom provided information on multiple construction projects. Labor staff reported that submissions with some form of data error were quite common. The frequency of errors could be caused in part by employer confusion in completing the wage reporting forms. Depending on the employer's size and level of automation, completing the WD-10 reporting forms could be somewhat difficult and time consuming. For example, employers must conduct so-called peak week calculations, in which they must not only compute the hourly wages paid to each worker who was employed on the particular project in a certain job classification but also do so for the time period when the most workers were employed in each particular job classification. We were told that this can be especially difficult for many smaller, nonunion employers.
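To make the reporting burden concrete, here is a minimal sketch of a peak week calculation. The payroll layout is an assumed illustration, not the WD-10 format.

```python
# Minimal sketch of a "peak week" calculation for one project: for
# each job classification, find the week in which the most workers
# in that classification were employed, then report the wages paid
# that week. The payroll records below are assumed illustrations.
from collections import defaultdict

# (week, classification, worker, hourly_rate) rows from payroll records
payroll = [
    ("1995-W21", "electrician", "A. Ames", 19.00),
    ("1995-W21", "electrician", "B. Bell", 18.25),
    ("1995-W22", "electrician", "A. Ames", 19.00),
    ("1995-W21", "laborer", "C. Cole", 11.50),
]

by_class_week = defaultdict(list)
for week, classification, worker, rate in payroll:
    by_class_week[(classification, week)].append((worker, rate))

for classification in {c for c, _ in by_class_week}:
    weeks = {w: crew for (c, w), crew in by_class_week.items() if c == classification}
    peak_week = max(weeks, key=lambda w: len(weeks[w]))  # week with most workers
    print(classification, peak_week, weeks[peak_week])
```

Even this toy version requires week-by-week payroll data per classification, which suggests why the calculation burdens smaller employers without automated payroll systems.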
A successful request for reconsideration typically results in Labor modifying an existing determination or conducting a new wage survey. An interested party may appeal an unsuccessful request—that is, one in which he or she is dissatisfied with the decision of the WHD Administrator—to Labor’s Administrative Review Board for adjudication. Labor officials said it is extremely rare for anyone to appeal formal requests for reconsideration of a determination to the Board, reporting that there had been only one such case in the last 5 years. The infrequency of formal appeals to the Board can be interpreted in more than one way. Labor officials interpreted this record to mean that there is little question about the accuracy and fairness of the prevailing wage determinations issued. Alternatively, this could reflect interested parties’ lack of awareness of their rights and the difficulty they face in collecting the evidence necessary to sustain a case. Representatives of construction unions and industry trade associations told us that employers were generally unaware of their rights to appeal Labor’s final wage determinations. Officials with a state Labor Department also told us that, even if an interested party wanted to appeal a wage determination to the National Office and the Administrative Review Board, the effort it takes to independently verify wage data submissions could discourage such an action. They reported that it took a state investigation team a full month to gather information to support the need for Labor to reconsider some wage determinations—and that involved investigating and verifying the information for only three construction projects. A private employer or organization wishing to appeal a determination might experience similar difficulties. Wage determinations based on erroneous data could result in workers being paid higher or lower wages and fringe benefits than those prevailing on federal construction projects. Higher wages and fringe benefits would lead to increased government construction costs. On the other hand, lower wages and fringe benefits would result in construction workers being paid less than is required by law. Although they considered it unlikely, Labor officials acknowledged that, in general, there could be an incentive for third parties, particularly union contractors, to report higher wages than those being paid on a particular construction project. By reporting higher wages, they could influence the prevailing wages in a local area toward the typically higher union rate. The use of inaccurate data could also lead to lower wages for construction workers on federal projects than would otherwise be prevailing. Labor officials acknowledged that an employer in a largely nonunion area who had been paying lower than average wages would have an incentive to “chisel” or report wages and fringe benefits levels somewhat lower than what he or she was actually paying, in an attempt to lower the Davis-Bacon rate. However, officials also said that it is much more likely for some employers to report data selectively in an effort to lower the prevailing wage rate. For example, a contractor may only submit data on those projects where the wages paid were relatively low, ignoring projects where a somewhat higher wage was paid. In addition, the wages required under the Davis-Bacon Act have implications for construction projects other than those specifically covered by the act. 
Industry association members and officials told us that in several parts of the country, employers, especially nonunion contractors, paid wages on their private projects below the prevailing wage levels specified by the Davis-Bacon Act in their areas. These officials told us that this differential sometimes proved problematic for contractors in retaining their skilled labor force. An official of an employer association told us, for example, that an employer who successfully bid on a Davis-Bacon contract but who typically paid wages below the prevailing rate would be required to pay the workers employed on the new project at the higher Davis-Bacon wage rates. Depending on the local labor market conditions, when the project was completed, these workers typically received their pre-Davis-Bacon, lower wages and fringe benefits on any future work. In such cases, some employees became disgruntled, believing that they were being cheated, and may have suffered lower morale that sometimes led to increased staff turnover. Depending on local labor market conditions, if the employer did not bid on the Davis-Bacon project, he or she could still be affected if the employer’s skilled workers quit to search for work on the new, higher-wage federally funded project.

Labor has acknowledged the weaknesses of its current wage determination process, and it has proposed both short- and long-term initiatives to improve the accuracy of the data used to make prevailing wage determinations. One recent change improves the verification process for data submitted by third parties. In August 1995, Labor began requiring its wage analysts to conduct telephone verifications with the employer on all third-party data that appear to be inaccurate or confusing. In addition, the new policy requires analysts to verify with the employers at least a 10-percent sample of third-party data that appear to be accurate. Labor has also proposed a change that would specifically inform survey respondents of the possible serious consequences of providing false data, since it is a crime under federal law to knowingly submit false data to the government or use the U.S. mail for fraudulent purposes. In February 1996, Labor solicited comments in the Federal Register on its proposal to place a statement on the WD-10 survey reporting form that respondents could be prosecuted if they willfully falsify data in the Davis-Bacon wage surveys. The comment period for this proposal ended in May 1996, and the proposed regulation has now been sent to the Office of Management and Budget.

Labor has also proposed a long-term strategy to review the entire Davis-Bacon wage determination process. In late 1995, Labor established an ongoing task group to identify various strategies for improving the process it uses to determine prevailing wages. These continuing discussions have led to the identification of various weaknesses in the wage determination process and steps Labor might take to address them. In its fiscal year 1997 budget request, Labor asked for about $4 million to develop, evaluate, and implement alternative reliable methodologies or procedures that will yield accurate and timely wage determinations at reasonable cost. Approaches under consideration include using other existing databases to extrapolate wage data instead of collecting its own survey data. Labor anticipates making a general decision on the overall direction of its strategy for improving its wage determination process by late 1996. In the meantime, however, Labor remains vulnerable to making wage determinations based on fraudulent or otherwise inaccurate data.
Therefore, we recommended that, while it continues its more long-term evaluation and improvement of the overall wage determination process, it move ahead immediately to improve its verification of wage data submitted by employers. We also recommended that it make the appeals process a more effective internal control to guard against the use of fraudulent or inaccurate data. Specifically, we recommended that Labor improve the accessibility of the appeals process by informing employers, unions, and other interested parties about the process—about their right to request information and about procedures for initiating an appeal. In its response to our draft report, Labor agreed to implement these recommendations.

Our review confirmed that vulnerabilities exist in Labor’s current wage determination process that could result in wage determinations based on fraudulent or otherwise inaccurate data. Although we did not determine the extent to which Labor is using inaccurate data in its wage calculations nor the consequences, in terms of wages paid, of such use, we believe that these vulnerabilities are serious and warrant correction. We believe that the process changes we recommended address those vulnerabilities and, if implemented in a timely manner, could increase confidence that the wage rates are based on accurate data. Specifically, Labor needs to move ahead immediately to improve its verification of wage data submitted by employers. We recognize, however, that the wage determinations could be flawed for other reasons. For example, other problems with the survey design and implementation, such as the identification of projects to survey or the response rates obtained, could affect the validity of the determinations. In addition, untimely updating of the wage rates decreases confidence in their appropriateness. Nevertheless, using only accurate data in the wage determination process is, in our view, a minimum requirement for ultimately issuing appropriate wage determinations.

Mr. Chairmen, that concludes my prepared statement. At this time, I will be happy to answer any questions you or other members of the Subcommittees may have. For information on this testimony, please call Charles A. Jeszeck, Assistant Director, at (202) 512-7036; or Linda W. Stokes, Evaluator-in-Charge, at (202) 512-7040.
GAO discussed the vulnerabilities in the Department of Labor's prevailing wage determination process under the Davis-Bacon Act. GAO noted that: (1) Labor sets prevailing wage rates for numerous job classifications in about 3,000 geographic areas; (2) Labor's wage determination process depends on employers' and third parties' voluntary participation in a survey that reports wage and fringe benefits paid for similar jobs on comparable construction projects in a given area; (3) due to limited resources, Labor concentrates on those geographical areas most in need of wage rate revisions; (4) Labor wage determinations can be appealed by any interested party; (5) process weaknesses include limited data verification, limited computer capabilities to detect erroneous data, and the lack of awareness of the appeals process; (6) erroneous data could result in setting the wage rate too low so that construction workers are underpaid or setting the wage rate too high so that the government incurs excessive construction costs; (7) Labor initiatives to improve its rate-setting process include having employers verify certain third-party data, informing survey respondents of the serious consequences of willfully falsifying wage data, and proposing a long-term strategy to review the entire wage determination process; and (8) Labor should immediately improve its verification of employer wage data and make the appeals process more effective.
Four major sources of student aid are currently available: the Federal Family Education Loan Program (FFELP), the Pell Grant Program, the Federal Direct Loan Program, and Campus-Based Programs. Before the recent 5-year phase-in of the direct loan program, FFELP and the Pell Grant programs were the largest sources of federally financed educational assistance. FFELP provides loans through private lenders; these loans are guaranteed against default by about 36 guaranty agencies nationwide—state and nonprofit private agents of the federal government whose services include, among others, payment of claims on defaulted loans. The loans are ultimately insured by the federal government. The Pell program provides for grants to economically disadvantaged students. Over the years, both FFELP and the Pell Grant Program have been subject to waste, fraud, and abuse. Because of the limited risks to schools, lenders, and guaranty agencies, and the billions of dollars in available aid, the structure of FFELP created the potential for large losses, sometimes through abuse. In fiscal year 1995, for example, the federal government paid out over $2.5 billion to make good the guarantee on defaulted student loans. In our past work we found that students who had previously defaulted on student loans were nonetheless subsequently able to obtain additional loans. The Pell program has likewise experienced abuse, such as students’ receiving grants while attending two or more schools concurrently. Since the inception of the program in 1973, students have been limited to receiving Pell grants from only one school at a time. The Department’s student financial aid programs are one of 25 areas we have categorized as high risk because of vulnerability to waste, fraud, and abuse. Although progress has been made, the Department’s controls for ensuring data accuracy and management oversight remain inadequate. The Department has long recognized its significant problems with title IV data reliability. In fact, it reported this as a material weakness under the Federal Managers’ Financial Integrity Act. Plans are now underway to address this issue through a major initiative started last December to reconcile NSLDS data with data in the program-specific databases. Similarly, because of the poor quality and unreliability of financial data remaining in the Department’s systems, Education staff cannot obtain the complete, accurate data necessary for reporting on its financial position. In fact, the Department’s Office of Inspector General was unable to express an opinion on the fiscal year 1994 FFELP principal financial statements, taken as a whole, because of the unreliability of student loan data on which the Department based its expected costs to be incurred on outstanding guaranteed loans. Education received a disclaimer of audit opinion on the 1995 financial statements for the same reason. The Department’s acting chief financial officer, therefore, had to present unaudited 1996 financial statements in Education’s March 1997 annual accountability report (covering fiscal year 1996). According to this report, the audited statements—with auditor’s report—were to be available “around July 31, 1997.” NSLDS was authorized under the 1986 HEA amendments as a means of improving compliance with repayment and loan-limitation provisions, and to help ensure accurate information on student loan indebtedness and institutional lending practices. 
The 1992 HEA amendments required that Education integrate NSLDS with the databases of the program-specific title IV systems by January 1, 1994. In January 1993 the Department awarded a 5-year, $39-million contract to develop and maintain NSLDS. Despite the mandate of the 1992 HEA amendments—and the conclusions of studies carried out both within Education and by the Advisory Committee on Student Financial Assistance—the Department’s actions have fallen short of full integration. Education officials chose to establish NSLDS as a data repository, to receive information from the other title IV systems. Yet operating in such an environment presents complications due to the lack of uniformity in how the systems handle and store information. The lack of data standards has complicated data matching between systems. To assist in achieving integration of the Department’s title IV systems, the 1992 amendments included specific requirements for the establishment of common identifiers and the standardization of data reporting formats, including definitions of terms to permit direct comparison of data. This has still not been accomplished. Hence, the NSLDS database cannot be updated without expensive conversion workaround programs. The result is a collection of independent systems, many of which keep data that duplicate information stored in NSLDS. This lack of integration promotes an environment of reduced management efficiency, compromised system integrity, and escalating costs as new stand-alone systems are developed.

While NSLDS was envisioned as the central repository for student financial aid data, it is not readily compatible with most of the other title IV systems. These various systems are operated by several different contractors and have different types of hardware, operating systems, application languages, and database management systems. Along with Education’s internal systems, thousands of schools and numerous guaranty agencies also employ disparate systems through which they send data to NSLDS. Therefore, to accept data from these other systems, NSLDS must have the necessary workarounds in place. Education and its data providers currently use over 300 computer formatting and editing programs—many of them workarounds—to bridge the gaps in this complex computing environment. These programs, however, may themselves introduce errors, and they would not be necessary in a fully integrated environment. Such programs contribute to the rapidly escalating costs of the 5-year NSLDS contract—from an original estimate of $39 million to about $83 million today.

Department officials have acknowledged that integration is important and has not been fully achieved. They told us, however, that they had little time to consider viable alternatives in designing and implementing NSLDS because of statutory requirements and the large number of diverse organizations from which data had to be gathered.

The nonstandard use of student identifiers by various title IV systems complicates the tracking of students across programs, making the task cumbersome and time-consuming. Likewise, identifying institutions can be problematic because multiple identifiers are used; for instance, the same school may have different identifying numbers for each of the title IV programs in which it participates. The 1992 amendments required common institutional identifiers by July 1, 1993; as of now, the Department’s plans call for their development and implementation for the 1999-2000 academic year.
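A minimal sketch may make the identifier problem concrete. The record layouts, field names, and crosswalk table below are invented for illustration—actual title IV files are far more complex—but they show why each pair of systems needs its own conversion program, of which Education and its data providers reportedly maintain over 300.

```python
import re

# Hypothetical extracts from two title IV systems; the layouts and
# identifier styles are invented for illustration only.
pell_records = [
    {"ssn": "123-45-6789", "school": "00123400", "award_start": "1996-09-01"},
]
loan_records = [
    {"student_id": "123456789", "inst_code": "E01234", "enrolled": "1996-09-01"},
]

def normalize_ssn(value):
    """Reduce any SSN-style identifier to nine digits so records from
    systems with different conventions can be compared."""
    digits = re.sub(r"\D", "", value)
    return digits if len(digits) == 9 else None

def find_concurrent_aid(pell, loans, crosswalk):
    """Match students across the two extracts and report cases where a
    student appears at two different schools on the same date -- the kind
    of cross-program check that nonstandard identifiers make cumbersome.
    `crosswalk` maps loan-system institution codes to Pell-system school
    codes, itself a workaround that common identifiers would eliminate.
    Matching on identical dates keeps the sketch short; a real check
    would compare overlapping enrollment periods."""
    matches = []
    loan_index = {(normalize_ssn(r["student_id"]), r["enrolled"]): r for r in loans}
    for p in pell:
        key = (normalize_ssn(p["ssn"]), p["award_start"])
        other = loan_index.get(key)
        # An unknown institution code maps to None and is flagged as a mismatch.
        if other and crosswalk.get(other["inst_code"]) != p["school"]:
            matches.append((key[0], p["school"], other["inst_code"]))
    return matches

# One crosswalk table per pair of systems is itself part of the problem.
print(find_concurrent_aid(pell_records, loan_records, {"E01234": "00567800"}))
```

With common student and institutional identifiers, the normalization step and the crosswalk table would disappear, and cross-program checks—such as detecting a student receiving aid from two schools concurrently—could be run directly.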
Beyond simply having common identifiers, it is important that data standards be established; this is the accepted technique used to govern the conventions for identifying, naming, and formatting data. The absence of such standards usually results at best in confusion, at worst in possible misinformation leading to the improper awarding of aid. Having data standards in place means that everyone within an organization understands the exact meaning of a specific term. While each title IV system uses the format specified by NSLDS to report data, the Department permits each program to use its own data dictionary—defining terms in different ways. One example of how this disparity can affect program operations can be seen in the differences in how student enrollment status is stored in NSLDS, compared with the system that supports the Pell Grant Program. Properly determining enrollment status is important because students generally begin repaying loans following a 6-month grace period after leaving school. Because NSLDS and the Pell system report enrollment status in different formats—alpha versus numeric—and use different definitions, exact comparisons cannot be made, and queries may well produce inconsistent responses. This can lead to misinterpretations of a student’s true enrollment status. Problems such as these resulting from data inconsistencies between systems can take school officials weeks or months to resolve—if they are even detected. Over the last decade, computer-based information systems have grown dramatically; with this growth has come vastly increased complexity. As a means of handling such size and complexity, reliance on systems architectures has correspondingly increased. As discussed briefly earlier, an architecture is simply a framework or blueprint to guide and constrain the development and evolution of a collection of related systems. Used in this way, it can help significantly to avoid inconsistent system design and development decisions, and along with them the cost increases and performance shortfalls that usually result. Leading public and private organizations are today using systems architectures to guide mission-critical systems acquisition, development, and maintenance. The Congress has also recognized the importance of such architectures and their place in improving federal information systems. The Clinger-Cohen Act of 1996 requires department-level chief information officers to develop, maintain, and facilitate the implementation of integrated systems architectures. And experts in academia have likewise championed this approach. A systems architecture could significantly help Education in overcoming its continuing problems integrating NSLDS and the other title IV systems. It should also reduce expenses by obviating the need for more stand-alone systems and their requirement for workarounds, since one function of an architecture is to ensure that systems will be interoperable. Despite the importance of a systems architecture, Education officials have not devoted the time or effort necessary to develop such a blueprint. According to these officials, two factors accounting for this are the Department’s focus on responding to legislative mandates and its lack—until recently—of a chief information officer. However, the Department reports that work on an architecture has begun and that it expects completion by June 30, 1998. 
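Returning to the enrollment-status example above, the following sketch shows the kind of mapping every consumer of the data must now maintain. The code tables are invented—the actual NSLDS and Pell Grant system data dictionaries differ from these values—but they illustrate how an alphabetic and a numeric coding scheme with nonaligned definitions force every query through system-specific interpretation.

```python
# Hypothetical code tables; one system is alphabetic, the other numeric,
# and the definitions do not line up one-for-one.
NSLDS_STATUS = {"F": "full-time", "H": "half-time", "L": "less than half-time",
                "W": "withdrawn", "G": "graduated"}
PELL_STATUS = {1: "full-time", 2: "three-quarter-time", 3: "half-time",
               4: "less than half-time", 5: "not enrolled"}

def normalized(system, code):
    """Map a system-specific enrollment code onto a shared vocabulary.
    A genuine departmentwide data standard would make this unnecessary."""
    table = NSLDS_STATUS if system == "nslds" else PELL_STATUS
    return table.get(code, "unknown")

def has_left_school(system, code):
    """Example query: has the student left school (triggering the 6-month
    grace period before loan repayment begins)? The raw codes cannot be
    compared directly; each must be interpreted through its own table."""
    return normalized(system, code) in {"withdrawn", "graduated", "not enrolled"}

# The same student, reported by two systems, with codes that look unrelated:
print(has_left_school("nslds", "W"), has_left_school("pell", 5))  # True True
```

A single departmentwide data dictionary would eliminate the mapping function entirely; until then, every consumer of the data must reimplement it, and any divergence among those reimplementations is a new source of the inconsistent responses described above.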
We have conducted a preliminary review of the technical portion of the draft architecture, and we believe that Education is underestimating what will be required to fully develop and implement a systems architecture departmentwide. Further, we are concerned that the Department has drafted the technical component before the “logical” component. The logical part should be developed first because it is derived from a strategic information systems planning process that clearly defines the organization’s mission, the business functions required to carry out that mission, and the information needed to perform those functions. The Department has a compelling need for a systems architecture that would enable the eventual integration of all title IV systems. In spite of this, however, it continues to acquire multiple stand-alone systems. Today the Department manages 9 major systems, supported by 16 separate contracts, to administer student financial aid programs. They range from legacy mainframe systems, several developed over 15 years ago, to a new client-server system. For the most part, these systems operate independently, and cannot communicate or share data with one another. They are also expensive. As I mentioned earlier, this is a costly approach to systems acquisition. Our chart, reproduced at the end of this statement, shows that Education’s information technology costs have almost tripled since fiscal year 1994. The reported cost of these systems in fiscal year 1994 was $106 million; for fiscal year 1998 it is expected to be about $317 million. Many of the systems, including NSLDS, were developed independently over time by multiple contractors responding to new functions, programs, or mandates—and not as part of a long-range, carefully considered systems-design strategy. This has evolved into a patchwork of stovepipe systems that rely heavily on contractor expertise to develop and maintain systems responsible for administering critical student financial aid information. A case in point: the Department recently awarded separate contracts to three vendors for new, stand-alone systems to service direct loans. Including the original servicer, the total cost for the four systems could be as high as $1.6 billion through fiscal year 2003. This will result in four different servicing systems for the same loan program, inviting problems that stem from a likely lack of systems interoperability. For over 2 years, the Advisory Committee on Student Financial Assistance has been a consistent voice favoring movement away from this “stovepipe” approach and toward integration. It has attributed deficiencies in the delivery system for student financial aid to the lack of a fully functional, title IV-wide recipient database that could integrate all program operations. Two years ago, a project was initiated that held the promise of reengineering current processes and developing a system that would integrate all players in the student financial aid community. Called Project EASI, for Easy Access for Students and Institutions, it has endured loose definition, a tentative start, and uncertain commitment from top management. As such, whether it can achieve real process redesign and systems integration is in doubt. In summary, the Department of Education continues its slow pace toward compliance with the 1992 HEA amendments. 
While we understand the difficulty of the challenges it faces, we nonetheless believe that the longer the Department waits to develop a sound architecture and integrate its systems, the more difficult and expensive that job will eventually be. Accordingly, our report recommends that the Secretary of Education direct the Department’s chief information officer to develop and enforce a Departmentwide systems architecture by June 30, 1998; that all information technology investments made after that date conform to this architecture; and that funding for all projects be predicated on such conformance, unless thorough, documented analysis supports an exception. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time.

[Chart: Education’s information technology costs by major system—the Pell Grant Recipient Financial Management System (PGRFMS), the National Student Loan Data System (NSLDS), the FFELP System (FFELPS), and the Central Processing System (CPS), which includes multiple data entry contracts.]

Student Financial Aid Information: Systems Architecture Needed to Improve Programs’ Efficiency (GAO/AIMD-97-122, July 29, 1997).
Department of Education: Multiple, Nonintegrated Systems Hamper Management of Student Financial Aid Programs (GAO/T-HEHS/AIMD-97-132, May 15, 1997).
High-Risk Series: Student Financial Aid (GAO/HR-97-11, Feb. 1997).
Reporting of Student Loan Enrollment Status (GAO/HEHS-97-44R, Feb. 6, 1997).
Department of Education: Status of Actions to Improve the Management of Student Financial Aid (GAO/HEHS-96-143, July 12, 1996).
Student Financial Aid: Data Not Fully Utilized to Identify Inappropriately Awarded Loans and Grants (GAO/T-HEHS-95-199, July 12, 1995).
Student Financial Aid: Data Not Fully Utilized to Identify Inappropriately Awarded Loans and Grants (GAO/HEHS-95-89, July 11, 1995).
Federal Family Education Loan Information System: Weak Computer Controls Increase Risk of Unauthorized Access to Sensitive Data (GAO/AIMD-95-117, June 12, 1995).
Financial Audit: Federal Family Education Loan Program’s Financial Statements for Fiscal Years 1993 and 1992 (GAO/AIMD-94-131, June 30, 1994).
Financial Management: Education’s Student Loan Program Controls Over Lenders Need Improvement (GAO/AIMD-93-33, Sept. 9, 1993).
Financial Audit: Guaranteed Student Loan Program’s Internal Controls and Structure Need Improvement (GAO/AFMD-93-20, March 16, 1993).
Department of Education: Management Commitment Needed to Improve Information Resources Management (GAO/IMTEC-92-17, April 20, 1992).
GAO discussed its review of the Department of Education's progress in integrating its National Student Loan Data System (NSLDS) with other student financial aid systems, as required by law. GAO noted that: (1) the Department of Education has made only limited progress in integrating NSLDS with the other student financial aid systems that support title IV programs; (2) this is largely because the Department has not developed an overall systems architecture, a framework needed to allow these disparate systems to operate in concert with each other; (3) as a result, while information can be shared among systems, the process is cumbersome, expensive, and unreliable; (4) further, the lack of a systems architecture allows the proliferation of individual stand-alone systems; (5) this is expensive, not only with respect to system procurement, operation, and maintenance, but also in terms of efficiency; (6) such an approach has served immediate program needs on a limited basis, but undermines sharing of student financial aid information across programs; and (7) this, in turn, can result in different databases containing different and perhaps conflicting information on the status of a student loan or grant.
The C-130 and KC-135 aircraft are important parts of DOD’s air mobility force. The C-130’s primary role is to provide airlift for theater cargo and personnel. The KC-135 aircraft is the Air Force’s core refueler. The majority of these aircraft are in the reserve component, as shown in table 1. Reserve component C-130 flying squadrons generally have 8 aircraft, and KC-135 squadrons generally have 10. Active force C-130 squadrons typically have 14 aircraft, while active KC-135 squadrons have 12. A reserve component wing comprises flying squadrons and other nonflying squadrons. Generally, reserve component wings have one flying squadron, unlike active wings, which generally have two to three flying squadrons. Some of the nonflying squadrons, such as maintenance, military police, and logistics squadrons, are directly related to the flying squadron, while others, such as medical, civil engineering, and communications squadrons, are not directly related.

Reserve component C-130 and KC-135 aircraft are dispersed throughout the continental United States, Hawaii, and Alaska. There are 22 states whose National Guard wings have C-130 aircraft, and 19 states whose National Guard wings have KC-135 aircraft. Seven states have both. There are nine states that have Air Force Reserve wings with C-130 aircraft and five states that have Air Force Reserve wings with KC-135 aircraft. Seven of the Guard wings with KC-135 aircraft are located on military bases and 12 are collocated with civilian airports. All six Reserve wings with KC-135 aircraft are located on military bases. Most Guard wings with C-130 aircraft, 20 of 23, are collocated with civilian airports. Half of the 10 Reserve wings with C-130 aircraft are located on military bases, and the other half are collocated with civilian airports. Several locations maintain both Guard and Reserve wings.

Though reserve component members are sometimes thought of as weekend warriors, about one-quarter to one-third of wing personnel are full-time military or civilian employees. These personnel are concentrated in areas such as maintenance, logistics, and security squadrons and the wing staff. The balance of wing personnel are part-time military personnel who are likely to have full-time employment in addition to their military responsibilities.

Larger-sized reserve component units would still be able to perform peacetime missions. When reserve component C-130 and KC-135 units have participated in peacetime deployments in Bosnia, Saudi Arabia, and Panama, they have done so on a rotational basis. However, unlike the active Air Force, Reserve and Guard rotations are not assigned completely to a single flying squadron or wing, which makes the squadron size less important. In a typical reserve component rotation, while one wing is designated to lead the mission, it depends on many other wings to provide aircraft and personnel. For example, the reserve component was assigned to support operations in Saudi Arabia for a 3-month period. Personnel and aircraft from 19 Guard wings were typically rotated for 15 to 30 days to staff a flying squadron of 8 aircraft. A similar practice was used in Bosnia and Panama. Rotations are done in this manner because participation by reserve component members without a presidential call-up is voluntary. To obtain the complement of personnel needed, individual volunteers from many units are necessary. Thus, the number or size of units is not as important as the number of people who volunteer.
Unit officials from several wings cited advantages in increasing the number of aircraft in a flying squadron. These included increased training opportunities and improved scheduling flexibility.

Creating fewer, larger-sized flying squadrons should have little impact on wartime missions as well. Wartime requirements for C-130 and KC-135 aircraft are typically defined not by the number of squadrons or wings but by the number of aircraft. For example, the July 1996 Joint Chiefs of Staff’s Intratheater Lift Analysis expresses C-130 requirements in terms of aircraft, not wings or squadrons. The recent C-130 Airlift Master Stationing Plan also expresses requirements in terms of the number of C-130 aircraft. Moreover, the latter study states that the current C-130 inventory exceeds requirements, which we believe further lessens the impact of eliminating squadrons.

The manner in which the Air Force plans to use reserve component units in wartime also minimizes the impact of reducing the number of flying squadrons. According to planning officials from the Air Combat Command and the Air Mobility Command, because active Air Force units are available immediately, they are typically tasked as lead units to provide the command and control in theater for wartime deployments. Reserve component flying squadrons generally follow active Air Force units and are placed under their command structure. These officials stated that planners partly assign existing reserve component flying squadrons in war plans by matching the capacity at likely deployment locations with the squadrons available in the reserve component inventory. They said that as long as the total number of aircraft available to perform missions remained the same, they could change assignments based on larger-sized squadrons. Further, an Air Force official stated that while the Air Force prefers to assign aircraft by squadrons for planning purposes, flying squadrons’ aircraft can be split, provided a command structure is in place. Unit officials stated that during Operation Desert Shield, reserve component aircraft and personnel were used in this manner. Moreover, in current deployment plans we reviewed, one KC-135 flying squadron was split between two locations.

Although squadrons are assigned to wings in peacetime, war plans described to us did not call for these wings to deploy or operate together. For example, civil engineer, medical, and security police squadrons may operate separately from the flying squadron. Wing officials stated that the Air Force has moved away from activating entire reserve component units; instead, war-fighting commanders choose packages of equipment and personnel that will meet their requirements for the mission at hand. At several wings we visited, officials stated that they had not deployed as a wing and were unaware of any plan to deploy as a wing. Further, many wing staff, including the wing commander, are not tasked in war plans and do not have a specific supporting mission.

Redistributing the reserve component’s existing C-130 and KC-135 aircraft into fewer, larger squadrons and wings would reduce operating costs. For example, redistributing 16 C-130 aircraft from two 8-aircraft wings to one 16-aircraft wing would save about $11 million annually, primarily from personnel savings. This reorganization could eliminate about 155 full-time positions and 245 part-time positions.
The decrease in full-time positions is especially significant, since the savings associated with these positions represent about $8 million, or 75 percent, of the total savings. Fewer people would be needed in areas such as wing headquarters, logistics, operations, and support group staffs as well as maintenance, support, and military police squadrons. Appendix II describes the organization of a typical wing and how redistributing aircraft would affect the wing.

In many cases, eliminating the aircraft from a wing could also generate savings in addition to operating savings. For example, civil engineering and medical squadrons, which help to support the wing and base in peacetime, are not directly related to the aircraft. If the wing is inactivated, these units’ worldwide requirements would have to be reexamined to determine whether they were still needed in the force structure. When the Reserve inactivated a C-130 wing in 1997, all eight of the nonflying squadrons not directly related to the aircraft were eliminated from the force structure, which involved about 140 full-time and about 625 part-time drill positions. Using average Air Force Reserve full- and part-time pay rates, these eliminations represent about $12 million in annual salaries.

We developed five options for redistributing the existing reserve component C-130 and KC-135 aircraft into larger-sized squadrons; they show a gradual increase in annual operating savings—from $51 million to $209 million. Our options are not the only ones possible, but they do illustrate the significance of the savings that can be achieved through a redistribution of the aircraft. The options base like-model aircraft together and involve the same number of aircraft as are now planned for the reserve component. In developing our options, we considered the two factors that reserve component officials cited as most important to successful reorganization: adequate recruiting potential and facility capacity. We also evaluated how three other issues could affect our options: the one-time costs of redistributing the aircraft, the significance of the geographical location of the aircraft, and the effect that eliminating squadrons would have on states’ abilities to respond to domestic crises.

We developed five options that redistributed aircraft from existing C-130 and KC-135 flying squadrons to other squadrons. The first option required the least reorganizing, increasing squadrons with fewer than 10 aircraft to that level. This reorganization would be achieved by redistributing aircraft from three C-130 squadrons and one KC-135 squadron to other squadrons. Our fifth option increased the squadron size to 16 aircraft for the C-130 and 12 for the KC-135 by redistributing aircraft from 13 C-130 squadrons and 5 KC-135 squadrons to other squadrons. A detailed discussion of each option is in appendix I.

Our analysis of data provided by Guard and Reserve recruiting officials demonstrates that a sufficient number of personnel could likely be recruited to meet the increased requirements of larger squadrons in most locations. Air Reserve headquarters recruiters estimated that they could recruit enough personnel to support 16 C-130 aircraft at 8 of their current 10 locations. Guard headquarters recruiters estimated that the Guard could recruit an adequate number of personnel to support 16 aircraft at 9 of 23 C-130 locations.
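The arithmetic behind the two-wing consolidation example above can be recomputed directly from its rounded figures; the short sketch below does so. The per-position averages it derives are implied by those figures rather than separately estimated costs.

```python
# Worked version of the consolidation example: two 8-aircraft wings
# combined into one 16-aircraft wing. All inputs are the rounded figures
# cited above; nothing here is independently estimated.
total_savings = 11_000_000        # annual operating savings
full_time_savings = 8_000_000     # about three-quarters of the total
full_time_cut, part_time_cut = 155, 245

print(f"full-time share of savings: {full_time_savings / total_savings:.0%}")
print(f"implied annual savings per full-time position: "
      f"${full_time_savings / full_time_cut:,.0f}")
print(f"implied annual savings per part-time position: "
      f"${(total_savings - full_time_savings) / part_time_cut:,.0f}")
```

The full-time share works out to roughly three-quarters of the total, consistent with the figures above, and the implied per-position averages show why eliminating full-time positions dominates the savings.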
For KC-135 aircraft locations, Air Reserve officials estimated that they could recruit enough personnel to support a 12-aircraft squadron at five of the Reserve’s six locations, with two locations capable of adding an entire 10-aircraft squadron. Guard recruiting estimates for the KC-135 indicate that 12-aircraft squadrons could be supported at 15 of 19 locations. Headquarters officials stated that for some options that double the sizes of existing flying squadrons, additional recruiters would be required for at least 6 years at affected locations.

Reserve component officials at units we visited were more optimistic about their ability to recruit additional personnel than were headquarters officials. The four C-130 wings we visited estimated that they could add four additional aircraft with little or no problem. While headquarters recruiters estimated that adding four C-130 aircraft at some locations could take several years to fully staff, unit officials estimated that recruiting additional personnel for the same number of aircraft would usually take no longer than 18 to 24 months. Recruiters also indicated that recruiting additional personnel for more than four aircraft at a given location would be more challenging but possible, if additional experienced recruiters were added to the wing receiving the aircraft and if spaces were available at schools to train new recruits.

According to reserve component recruiters, the outlook for recruiting could improve if full- and part-time personnel moved with the aircraft. In the recent move of four C-130 aircraft from Chicago, Illinois, to Milwaukee, Wisconsin, about 200 part-time personnel relocated to Milwaukee. Reserve component officials believe it is probable that many personnel from closely clustered wings would move with the aircraft if the aircraft were moved to a nearby location.

The outlook for recruiting could improve further if personnel from the C-141 fleet, which is being phased out of the inventory, could be used to support C-130 and KC-135 aircraft. The reserve component provides personnel to support most of the C-141 fleet of about 160 aircraft. Only about three-quarters of the C-141 aircraft will be replaced with C-17 aircraft, and some current Reserve C-141 units are not scheduled to convert to the C-17 or any other aircraft. Thus, this trained pool of personnel could be available for C-130 or KC-135 aircraft. Reserve officials have been actively seeking a role for these personnel.

Our analysis of facility data provided by reserve component civil engineering officials shows that many bases could absorb additional aircraft at little or no cost. According to these officials, 38 locations could increase the number of assigned aircraft with no military construction costs. In other cases, ramp and hangar space would need to be increased slightly. Also, some locations may require increased administrative and supply space. Only in very few instances would locations require completely new facilities, such as additional hangars.

All of the National Guard KC-135 wings could expand to at least 12 aircraft (3 already have more than 12 aircraft) with one-time construction costs of no more than $6 million. For most Guard C-130 wings, the military construction costs would be no more than $1 million for increasing from 8 to 12 aircraft. Also, 17 of 23 locations could accommodate 16 aircraft at a cost ranging from $1 million to about $10 million. Expansion is possible at only three of six Air Force Reserve wings where KC-135 aircraft are located.
The Reserve estimates it could add up to 10 additional aircraft at two of the three locations at a one-time cost of $1 million per squadron. The Reserve has two locations with 16 C-130 aircraft. With an investment of $1.5 million to $5.5 million per location, the Reserve could accommodate 16 aircraft at five of its other eight C-130 locations.

Before developing our options, we considered whether any mission requirements would preclude C-130 and KC-135 aircraft from moving from their current locations. We were told that only four locations had unique missions. Other than those locations, Guard and Reserve officials stated that airlift and refueling missions could be accomplished from a number of locations as long as some general geographical requirements were met. For instance, tanker refueling areas are heavily concentrated off the east and west coasts to facilitate the movement of aircraft across the Atlantic and Pacific Oceans; thus, some refueling aircraft should be located in proximity to these areas. These officials also believe that it is important to maintain reserve component KC-135 aircraft in the northeast because active duty KC-135 aircraft are no longer based in this region. Some officials told us that KC-135 and C-130 aircraft should be based close to the units they train with—other aircraft units in the case of KC-135 aircraft, and Army units in the case of the C-130.

Although we could not determine the one-time costs of consolidating C-130 and KC-135 aircraft in larger squadrons, we do not believe these costs would be significant relative to expected savings. Our options would result in some initial costs for such things as training additional people hired at a location gaining aircraft and transferring some personnel from one location to another. In some cases, personnel could be eligible for severance pay if their positions were eliminated. Reserve component officials could not provide estimates of these costs, which would vary depending on how many trained personnel might relocate with the aircraft and how much of the relocation expenses the Air Force would pay. Because we did not identify specific bases in our options, it is difficult to determine these costs. However, during the 1995 base realignment and closure process, initial implementation costs to move C-130 aircraft from three locations were estimated to be offset in 1 year for two of the three locations and in 3 years for the third. According to reserve component officials, these implementation costs could be minimized in several ways, for example, by moving aircraft to nearby bases and allowing recruiters sufficient time to phase in additional personnel.

National Guard units are unique in that they are under state control when not federalized. These assets are available to governors during emergencies and disasters. For this reason, inactivating Guard units has historically caused concern. However, not all states have C-130 or KC-135 aircraft in their Guard units. In 16 states, no Guard units are equipped with C-130 or KC-135 aircraft. We recognize that some of our options would likely eliminate National Guard wings in some states, but these states could still receive assistance during disasters and emergencies. States can receive assistance from other states’ National Guard units in several ways, for example, through state compacts, federal laws, DOD regulations, and informal agreements.
Compacts, which are agreements between states to support one another in times of need, are one way that assistance can be provided. One of the most inclusive compacts is the Emergency Management Assistance Compact, which was originally sponsored and established by the Southern Governors’ Association in 1992. Under this compact, member states agree to provide for mutual assistance in managing any declared emergency or disaster as well as mutual cooperation in exercises and training. Through this compact, members agree on issues such as terms of liability, compensation, and reimbursement when emergency assistance is provided to member states. The compact was endorsed by the National Governors’ Association and other regional and national organizations, and any state can now become a member. Currently, 20 states have joined the compact. While a National Guard official stated that no C-130 or KC-135 aircraft have yet been used under this compact, other assets such as helicopters have been shared. For example, Virginia, Florida, and Kentucky have provided helicopters to other states. States can also receive assistance during a natural disaster or emergency through the Robert T. Stafford Disaster Relief and Emergency Assistance Act, which authorizes the Federal Emergency Management Agency to assign missions to any federal agency if the President declares a federal emergency or disaster. Under the act, the agency can provide federal assets, including National Guard and active duty personnel and equipment, to states that are experiencing the emergency or disaster. For example, C-130 aircraft from a National Guard unit in Maryland assisted Florida, which has no C-130 aircraft in its National Guard, in its efforts to reduce the effects of Hurricane Andrew. Another way states can receive assistance is under a recently implemented Defense Department directive referred to as “innovative readiness training.” Under this directive, Defense assets can be used to assist states and communities if the assistance provides a training opportunity related to units’ wartime missions. In this case, the Guard can authorize units to participate even if a federal disaster is not declared. For example, we were told by Guard officials that Guard units from outside Iowa received training in water purification during floods in Iowa. Beyond these provisions, National Guard officials stated that assistance can be coordinated through the National Guard Bureau, even without an agreement among the states. To reduce response time, Guard officials sometimes develop preliminary plans for providing assistance when a major disaster is pending. For example, before Hurricane Iniki struck Hawaii, California National Guard, National Guard Bureau, and Hawaii National Guard officials coordinated relief efforts to allow California Guard units’ C-130 aircraft to be prepared to provide assistance there, even though no formal agreement existed between the two states. The reserve components’ C-130 and KC-135 aircraft can be redistributed into larger-sized squadrons and still accomplish their peacetime and wartime missions. Such a reorganization would result in significant savings that could be used to partially fund the modernization of the Defense Department’s force. Therefore, we recommend that you direct the Secretary of the Air Force to develop a plan to organize the C-130 and KC-135 aircraft in the Air National Guard and Air Force Reserve into larger wings at fewer locations and seek congressional support for the plan. 
As you know, 31 U.S.C. 720 requires you to submit a written statement on actions taken on this recommendation to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report and to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report.

DOD generally concurred with our findings but nonconcurred with our recommendation. The Department agreed that reorganizing aircraft at fewer locations could reduce costs while still allowing the Air Force to meet its commitments, but it pointed out that other factors must also be weighed in any reorganization plan. DOD disagreed that it should develop a specific plan to consolidate at this time. The Department observed that some options could involve base closures and/or realignment of military installations and that it intends to seek legislative authority to close and realign installations in conjunction with its fiscal year 1999 budget. DOD believes that it would be premature to develop a plan until Congress acts on the Department’s proposal.

We recognize that many factors are involved in reorganizing aircraft locations, and we assume that the Air Force would take these factors into account in developing a reorganization plan. We also recognize that some options could have base closure and realignment implications and that DOD’s authority in this area is subject to the requirements of 10 U.S.C. 2687. However, the range of options available to the Secretary is broad, and many options would entail reductions that would not be subject to these requirements. Because DOD agrees that there are cost reductions associated with reorganizing C-130 and KC-135 aircraft into larger-sized squadrons and wings, we believe that the Air Force should not delay in developing a reorganization plan and seeking congressional support for that plan.

A detailed explanation of our scope and methodology appears in appendix III, and DOD’s comments are reproduced in appendix IV. We are sending copies of this report to the Secretary of the Air Force and interested congressional committees. We will also make copies available to others upon request. Please contact me at 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix V.

We developed five options for organizing reserve component C-130 and KC-135 aircraft more cost-effectively into fewer, larger-sized squadrons. In developing these options, we incrementally increased the number of aircraft per squadron for each succeeding option, with 16 aircraft as the limit for C-130 squadrons and 12 aircraft as the limit for KC-135 squadrons. For each option we developed, we assessed whether (1) the Guard and Reserve could recruit sufficient personnel to support additional aircraft and (2) sufficient existing locations had facilities that could be expanded to accommodate additional aircraft. We also varied the mix of Guard and Reserve aircraft slightly and limited consideration of units outside the continental United States in some options because these issues have been identified as sensitive. We based our recruiting assessments on data provided by Guard and Reserve officials.
They rated their likely ability to increase personnel at all existing C-130 and KC-135 locations as (1) fully able to meet additional personnel requirements, (2) able to meet personnel requirements with some difficulty, or (3) unlikely to meet additional requirements. We based our facility expansion assessments on civil engineering estimates provided by Guard and Reserve officials. We rated locations as low cost if expansion could be accommodated for $3 million or less, medium cost if expansion would cost more than $3 million but not more than $10 million, and high cost if expansion would cost over $10 million.

To calculate savings, we determined the total operating costs for the larger-sized units in our options and compared them to the baseline costs for the smaller-sized units. We did not determine option-specific one-time implementation costs for military construction or other costs. Our options show possible annual savings from $51 million to $209 million, as shown in table I.1.

In our five options, we eliminated from 3 to 13 C-130 flying squadrons from the reserve components’ current number of 34 squadrons. We did not reduce the number of C-130 aircraft already in the reserve component inventory.

Option one increased almost half of the flying squadrons with less than 10 aircraft to that level. Aircraft located outside the continental United States were not considered in the analysis for this option. There were sufficient locations with the capability to recruit personnel to fully meet personnel requirements in most cases and to expand facilities at low cost. Six aircraft shifted from the Guard to the Reserve. Three squadrons were eliminated, and a net of 12 squadrons would increase in size. This option would save about $35 million annually.

Option two increased some squadrons with less than 12 aircraft to that level. There were sufficient locations with capabilities to recruit personnel to fully meet personnel requirements in most cases and to expand facilities at low cost. Four aircraft moved to the Guard from the Reserve, 6 squadrons were eliminated, and a net of 12 squadrons would increase in size. This alternative would save about $66 million annually.

Option three increased many squadrons with less than 14 aircraft to that level. Most locations would be able to recruit personnel to fully meet personnel requirements, but recruiting would be challenging at some locations. Most facility needs could be met at low cost, but a few locations could expand only at medium cost. Eight aircraft moved from the Guard to the Reserve, 10 squadrons were eliminated, and a net of 14 squadrons would increase in size. This option would save about $110 million annually.

Option four increased some of the squadrons to a maximum of 16 aircraft. Recruiting would be challenging at more locations than in option three, but most facility needs could be met at low cost, with some locations able to expand at medium cost. Two aircraft moved from the Guard to the Reserve, 12 squadrons were eliminated, and a net of 15 squadrons would increase in size. This option would save slightly more than $130 million annually.

Option five maximized the number of flying squadrons with 16 aircraft. The recruiting and facility situations were about the same as in option four, with some recruiting challenges and facility expansion possible at medium cost in some areas. Eight aircraft moved from the Guard to the Reserve, 13 squadrons were eliminated, and a net of 14 squadrons would increase in size.
This option saved about the same amount as option four, $130 million annually.

Table I.2 shows the Air Force's current basing plan for its squadrons of C-130 aircraft and the reorganization of the aircraft in our five options.

In our five options, we eliminated from 1 to 5 KC-135 flying squadrons from the reserve components' current number of 29 squadrons but did not reduce the number of KC-135 aircraft already in the reserve components' inventories. In none of our options did we reduce the number of aircraft at the four locations that the Air National Guard considered mission unique.

For option one, we increased most squadrons with fewer than 10 aircraft to that level. There were sufficient locations with adequate capabilities to recruit personnel to fully meet requirements, with one exception where recruiting would be challenging. Facilities could be expanded at low cost at every location. One squadron was eliminated, and a net of seven squadrons would increase in size. This option would save about $16 million annually.

Option two increased all squadrons but one to a minimum of 10 aircraft. There were sufficient locations with adequate capabilities to recruit personnel to fully meet requirements, with one exception where recruiting would be challenging. Facilities could be expanded at low cost at every location. Four aircraft were shifted from the Guard to the Reserve, two squadrons were eliminated, and a net of 10 squadrons would increase in size. This option would save about $32 million annually.

Option three increased most squadrons to 11 aircraft. For a few locations, recruitment would be challenging, but for all others there was adequate capability to recruit personnel to fully meet requirements. Facilities could be expanded at low cost at all but two locations, where expansion was possible at medium cost at one and at high cost at the other. Six aircraft were shifted from the Guard to the Reserve, four squadrons were eliminated, and a net of 20 squadrons would increase in size. This option would save about $66 million annually.

Option four increased many squadrons to 12 aircraft. There was adequate capability to recruit personnel to fully meet requirements at most locations, and facilities could be expanded at low cost. Five squadrons were eliminated, and a net of 16 squadrons would increase in size. This option would save about $77 million annually.

Option five maximized the number of squadrons with 12 aircraft and minimized the number of locations. Most locations were capable of fully meeting personnel requirements, with recruiting more challenging, but possible, at several locations. Most locations could expand facilities at low cost, with expansion at one location possible at medium cost and at another location at high cost. Ten aircraft were shifted from the Guard to the Reserve, five squadrons were eliminated, and a net of 16 squadrons would increase in size. This option would save about $79 million annually.

Table I.3 shows the Air Force's current basing plan for its squadrons of KC-135 aircraft and the reorganization of the aircraft in our five options.

Organizing existing C-130 and KC-135 aircraft into fewer wings could result in significant savings due to reductions in personnel positions. These reductions would primarily occur in the squadrons directly related to each aircraft, since much of the overhead at locations losing aircraft would no longer be needed; the Air Force would have to determine the disposition of squadrons not directly related to the flying squadrons.
Also, squadrons with duplicative functions could be eliminated. According to data provided by Guard and Reserve program officials, only small increases in positions would be necessary at existing locations receiving additional aircraft. Figure II.1 shows the major elements of a typical wing structure. The following sections describe each main organization typically in a wing and the effect that consolidation is likely to have on its personnel requirements. Actual locations may have additional squadrons in the wing that are not directly related to the aircraft.

The wing headquarters includes the wing commander and staff that develop operational plans; evaluate exercises; and provide financial, legal, safety, public affairs, historical, and other services. If a wing loses its only flying squadron, the wing headquarters would likely be eliminated. The wing headquarters staff would not need to increase if the squadron's aircraft increased from 8 to 12, although the number of full-time staff would increase slightly.

The operations group comprises a commander and a small staff that supervise the flying squadron and the operations support flight. The flying squadron is staffed with pilots and crews that operate the aircraft, and these positions are in a fixed ratio to the number of aircraft. The operations support flight provides intelligence, scheduling, combat tactics, training, air crew life support, airfield and air traffic operations, and weather support to the flying squadron. If a wing loses its flying squadron, the operations group would be eliminated. As shown in table II.1, a wing that receives a 50-percent increase in aircraft would need to increase its flying squadron personnel by 42 percent. Full-time staff would increase by about the same percentage. The other squadrons would increase minimally.

The logistics group commander and staff oversee the aircraft generation squadron, the maintenance squadron, the logistics squadron, and the logistics support squadron. The aircraft generation squadron handles flight line maintenance and related tasks, and the maintenance squadron handles more substantial repairs. The logistics squadron manages transportation vehicles and other base-owned equipment. The logistics support squadron manages engines and training. All of these squadrons are directly related to the performance of the aircraft. If a wing loses its flying squadron, the logistics group would be eliminated. A wing that receives a 50-percent increase in aircraft, from 8 to 12, would have to increase its aircraft generation squadron and maintenance squadron personnel by about 25 percent. Full-time staff would increase by a slightly greater percentage. Other organizations would be affected only slightly.

The support group includes the mission support squadron and the security police squadron, which are directly related to the aircraft, and elements that provide base and other support services, such as the communications flight, the civil engineering squadron, and the services flight. If a wing loses its flying squadron, the support group would be eliminated. The mission support squadron and security police at the receiving wing would not increase if the number of aircraft increased from 8 to 12. The civil engineering squadron and the communications and services flights are not directly tied to the aircraft, and their disposition would have to be determined by the Air Force.

The medical squadron provides family practice, inpatient and medical nursing, emergency room, mental health, pharmaceutical, and dental services.
In the reserve component, one squadron may be the only organization in the group. This squadron is not directly related to the aircraft and would be unaffected at a receiving unit if additional aircraft were assigned. The disposition of the medical squadron at a wing losing aircraft would have to be determined by the Air Force.

Table II.1 shows the impact of adding four additional aircraft to an eight-aircraft reserve component C-130 wing.

Table II.1: Comparison of Squadron Staffing for an 8- and 12-Aircraft Unit

We assessed whether the Air Force's reserve component combat C-130 and KC-135 aircraft could feasibly be reorganized into fewer, larger-sized squadrons and wings. In making this assessment, we (1) determined the effect of a reorganization of the C-130 and KC-135 aircraft on mission accomplishment, (2) determined whether costs could be reduced through a restructuring of the aircraft squadrons, and (3) developed five possible options for increasing the number of aircraft in C-130 and KC-135 squadrons and analyzed their effect on operations and costs. We focused on combat-coded reserve component KC-135 and C-130 aircraft. We did not include locations that had only special-mission versions of these aircraft, especially the C-130.

To determine the effect of a reorganization of C-130 and KC-135 aircraft on mission accomplishment, we interviewed officials and obtained data from the Headquarters, Air National Guard, and the Office of the Air Force Reserve, in Washington, D.C.; the Air National Guard Readiness Center at Andrews Air Force Base, Maryland; and the Air Force Reserve Command, Robins Air Force Base, Georgia, in the following functional areas: recruiting, civil engineering, manpower, financial management, planning and programming, and training. We discussed legal provisions that would affect the relocation of existing reserve component flying squadrons with the Air National Guard General Counsel staff and Air Force Reserve planning staff. We also interviewed wing and squadron officials at the 135th Airlift Squadron at Martin State Airport, Maryland; the 133rd and 934th Airlift Wings at Minneapolis-St. Paul International Airport, Minnesota; the 302nd Airlift Wing, Peterson Air Force Base, Colorado; and the 163rd Air Refueling and 452nd Mobility Wings, March Air Force Reserve Base, California, to discuss the same functional areas listed above. These flying squadrons represent a cross section of reserve component basing arrangements. We examined a variety of Air Force and reserve component regulations, including those regarding facility requirements and staffing procedures. We interviewed officials at the Air Mobility Command, Scott Air Force Base, Illinois, and the Air Combat Command, Langley Air Force Base, Virginia, to understand how reserve component assets would fit into the gaining command's war plans and to obtain their perspectives on the effect of consolidations.

To determine whether costs could be reduced through a restructuring of the aircraft squadrons, we developed staffing estimates from data provided by reserve component officials who develop personnel requirements. In developing our estimates, we interviewed staffing and budget officials at the services' headquarters, readiness centers, and the squadrons we visited. We also obtained wing staffing and budget reports for all squadrons and analyzed specific squadron staffing authorization documents for 12 squadrons of various sizes.
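The cost-rating bands and the savings arithmetic described in appendix I are simple enough to express directly. The following sketch illustrates them in Python; the wing operating costs in the example are hypothetical placeholders, not figures from our analysis, although they are chosen to reproduce the roughly $11 million in annual savings cited elsewhere in this report for consolidating two 8-aircraft C-130 wings into one 16-aircraft wing.

# Illustrative sketch, not the model used in our analysis: the appendix I
# facility cost-rating bands and the comparison of option operating costs
# against baseline operating costs. Dollar figures in the example below
# are hypothetical placeholders.

def rate_expansion_cost(estimate_millions: float) -> str:
    """Rate a location's facility expansion cost per the appendix I bands."""
    if estimate_millions <= 3.0:
        return "low"
    if estimate_millions <= 10.0:
        return "medium"
    return "high"

def annual_savings(baseline_costs_millions, option_costs_millions) -> float:
    """Annual savings: total baseline operating costs minus total option
    operating costs. One-time costs (e.g., military construction) are
    deliberately excluded, mirroring the scope of this report's estimates."""
    return sum(baseline_costs_millions) - sum(option_costs_millions)

# Hypothetical example: two 8-aircraft wings consolidated into one
# 16-aircraft wing, with placeholder operating costs chosen to yield
# about $11 million in annual savings.
print(rate_expansion_cost(2.5))               # low
print(annual_savings([55.0, 55.0], [99.0]))   # 11.0 (millions per year)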
At the squadrons we visited, we reviewed the number of assigned personnel and the squadrons' budgets and discussed officials' estimates of the personnel increases and facility additions that might be needed to accommodate additional aircraft. Since over 70 percent of the operating costs and almost all of the estimated savings are associated with military and civilian personnel, we primarily analyzed the reasonableness of the services' personnel and salary planning factors. We found their estimates to be reasonable. We provided these staffing estimates to Air Force officials to use in the Air Force's SABLE model. Our savings estimates include only the savings from reduced operating costs that are directly related to each aircraft and do not include any military construction, base closure, and other fixed or indirect costs and savings that may be associated with transferring aircraft from one location to another.

To determine the feasibility of increasing the number of aircraft in C-130 and KC-135 squadrons at various locations, we examined the reserve components' submissions on capacity to the 1995 Base Realignment and Closure Commission. The reserve components' headquarters civil engineering branches provided more current estimates of the capacity of each squadron and the cost to increase that capacity. During visits to C-130 and KC-135 wings, we obtained civil engineering estimates on each location's ability to expand, the facilities needed, and the accompanying cost to ensure that data provided by headquarters were reliable.

To determine the reserve components' capability to recruit additional personnel needed to organize wings with additional aircraft, we obtained assessments from the Air National Guard's and the Air Force Reserve's headquarters recruitment staff. These personnel provided estimates for each location's ability to support additional personnel for incremental aircraft increases. We also factored in personnel readiness standards used by the Department of Defense. A more complete discussion of the methodology used in developing options is included in appendix I. For the four C-130 and two KC-135 squadrons we visited, we used the squadron's recruiting potential according to headquarters' estimates and assessed its consistency with the local recruiting office's estimate of its ability to recruit an adequate number of people for an increase in aircraft at its location.

To estimate one-time costs for facility improvements, we obtained cost estimates from the reserve components' civil engineering headquarters for each location. We checked these estimates against those made by local civil engineering personnel at the squadrons we visited. To estimate relocation and separation expenses, we examined 1995 base closure estimates on permanent change of station and separation costs for civilians and military personnel. We also interviewed reserve component training personnel to gain an understanding of the expected changes in training demand due to consolidation.

We conducted our review from July 1996 to September 1997 in accordance with generally accepted government auditing standards.

Dan Omahen, Senior Evaluator, and Mary Jo LaCasse, Evaluator, made key contributions to this report.
GAO assessed the cost-effectiveness of organizing the Air Force's airlift and refueling force into fewer, larger-sized squadrons and wings, focusing on: (1) the effect that reorganization may have on mission accomplishment; (2) whether costs could be reduced through redistributing aircraft among fewer wings; and (3) five possible options for redistributing C-130 and KC-135 aircraft among fewer wings at lower operating costs. GAO noted that: (1) the Air Force could reduce costs and meet peacetime and wartime commitments if it reorganized its C-130 and KC-135 aircraft into larger-sized squadrons and wings at fewer locations; (2) these savings would primarily result from fewer people being needed to operate these aircraft; (3) for the reorganization options GAO developed, up to $209 million could be saved annually; (4) creating larger-sized squadrons and wings would still allow the Air Force to accomplish its peacetime and wartime missions with the existing number of aircraft; (5) in peacetime deployments, reserve component C-130 and KC-135 personnel do not participate as part of entire squadrons or wings but rather as individual volunteers; (6) thus, creating larger-sized squadrons and wings should not compromise these missions; (7) for wartime deployments, requirements for C-130 and KC-135 aircraft are typically stated by the number of aircraft rather than by squadrons or wings; (8) moreover, war plans to which existing flying squadrons are assigned can be changed to accommodate larger-sized squadrons; (9) specific reserve component wings are not usually assigned in existing war plans; (10) thus, the impact of reducing them would be minimal; (11) redistributing the reserve component's existing C-130 and KC-135 aircraft into fewer, larger-sized squadrons and wings would reduce operating costs; (12) for example, redistributing 16 C-130 aircraft from two 8-aircraft wings to one 16-aircraft wing would save about $11 million annually, primarily from personnel savings; (13) GAO developed five options to illustrate the kind of savings that can be achieved by creating larger-sized squadrons; (14) these savings range from about $51 million to $209 million annually; and (15) sufficient personnel could be recruited and most locations' facilities could be inexpensively expanded to accommodate the unit sizes in GAO's options.
Under the Resource Conservation and Recovery Act, the Clean Air Act, and the Clean Water Act, the federal government has established requirements setting limits on emissions and discharges from municipal and private industrial facilities that might pollute the land, air, or water. EPA shares responsibility for administering and enforcing these requirements with the states that have been authorized to administer the permit programs. EPA's implementing regulations cover activities such as setting levels and standards for air emissions, establishing effluent limitation guidelines for water discharges, evaluating the environmental impacts of air emissions, monitoring compliance with discharge limits for water permits, ensuring adequate public participation, and issuing permits or ensuring that state processes meet federal requirements for the issuance of permits. While EPA has retained oversight responsibility for these activities, it has authorized state, tribal, and local authorities to perform most activities related to issuing permits to industrial facilities. These authorities—referred to as permitting authorities—receive federal funding from EPA to carry out these activities and must adopt standards that are equivalent to or more stringent than the federal standards.

Title VI of the Civil Rights Act and EPA's Title VI implementing regulations prohibit permitting authorities from taking actions that are intentionally discriminatory or that have a discriminatory effect based on race, color, or national origin. EPA's Title VI regulations allow citizens to file administrative complaints with EPA that allege discrimination by programs or activities receiving EPA funding [40 C.F.R. § 7.120 (1998)]. Title VI complaints must be filed in response to a specific action, such as the issuance of a permit. Because they must be linked to the actions of the recipients of federal funds, complaints alleging discrimination in the permitting process are filed against the permitting authority, rather than the facility receiving the permit. Complaints may be based on one permitting action or may relate to several actions or facilities that together have allegedly had an adverse disparate impact. Neither the filing of a Title VI complaint nor the acceptance of one for investigation by EPA stays the permit at issue. As of February 7, 2002, EPA's complaint system showed 44 pending complaints alleging that state agencies had taken actions resulting in adverse environmental impacts that disproportionately affected protected groups. Of these complaints, 30 had been accepted by EPA and were related to permits allowing proposed facilities to operate at a specified level of emissions. Other complaints involved issues such as cleanup enforcement and compliance.

The 15 facilities covered in our study included waste treatment plants, recycling operations, landfills, chemical plants, and packaging facilities. These facilities were in nine locations, and some were in rural areas, while others were in urban areas. (See app. II for additional information on the location and description of the facilities.) All of the facilities that we studied reported that they had provided jobs as a result of the creation or expansion of their facility. As shown in table 1, the number of full-time jobs ranged from 4 to 103 per facility, with 9 of the facilities having 25 jobs or fewer. Most of the facilities involved waste-related operations, which generally employ small numbers of employees.
For four of the facilities, information on the number of jobs the facilities had estimated they would provide was available from documents prepared before the facilities began operating. In each of these cases, the number of jobs estimated was greater than the number of jobs provided. Specifically, Genesee Power Station estimated it would provide 30 jobs and provided 25; ExxonMobil estimated it would provide 50 jobs and provided 40; Natural Resources Recovery estimated it would provide between 15 and 40 jobs and provided 6; and Safety-Kleen, Inc., estimated it would provide 50 jobs in Westmoreland and provided 22. Officials from a few facilities told us that their facilities, in addition to providing jobs directly, generated additional jobs outside of the facility. For example, a document from ExxonMobil indicated that for every job provided at the plant, 4.6 jobs resulted elsewhere in the East Baton Rouge Parish economy. Also, Chemical Waste Management officials told us that their landfill increased business in the area and that this enhanced business could result in more jobs. We did not verify the facilities' estimates of jobs generated outside of the facility.

In some cases, the number of jobs at these facilities decreased over time. For example, jobs at the chemical waste facility in Kettleman City, California, decreased from 200 in 1990 to 103 in 2002; and jobs at a similar facility in Buttonwillow, California, decreased from 110 in 1987 to 23 in 2002. In addition, jobs at a fertilizer plant in New York decreased from 80 in 1993 to 39 in 2002. Officials from the two facilities in California told us that the changes resulted from a decreased demand for the facilities due to a reduction in the amount of waste generated by a more environmentally conscious public.

We obtained information on the salary ranges and types of jobs provided for 14 of the 15 facilities. According to officials at these facilities, the salaries for the jobs provided varied from about $15,000 to $80,000 per year, depending on factors such as the type of work and the location of the facility. However, the information that the facilities provided was not detailed enough to allow us to determine the numbers for each job type, the salaries for individual jobs, or the number of jobs filled by people from the surrounding communities. The information indicates a wide range of salaries; however, community organizations in some locations told us that, in their view, the majority of the jobs filled by community residents were low paying. The facilities provided the following information: The ExxonMobil Corporation told us that its facility in Louisiana had both hourly and salaried jobs. According to ExxonMobil, its hourly jobs included mechanics, electricians, and laboratory technicians, and its average wage was about $23 an hour, which is equivalent to $47,840 per year. Salaried jobs included engineers, a chemist, accountants, and administrative assistants, and the average salary was just under $70,000 annually. The Texas Industries Package Plant told us that its jobs included plant manager, sales representative, dispatcher, packaging coordinator, maintenance mechanic, plant operator, crew operators, crew members, and administrative positions. The salaries ranged from about $10 to $15 per hour, which is equivalent to $20,800 and $31,200 per year, respectively.
The three hazardous waste treatment facilities in California reported that the jobs at their facilities—facility manager, manager, heavy equipment operators, plant operators, truck receiving operators, customer service representatives, and waste acceptance specialists—had salaries ranging from $28,000 to $82,000 annually. The nine nonhazardous waste-related facilities located in Connecticut, Louisiana, Michigan, and New York reported having jobs that included facility site managers, site supervisors, scale and machine operators, technical assistants, mechanics, and laborers. Salaries for these jobs ranged from $7.50 to $33.50 per hour, which is equivalent to $15,600 and $69,680 per year, respectively.

About half of the facilities provided some information on whether their jobs were filled by people from the communities. Specifically, according to information provided by the Hunts Point, South Bronx, New York facilities, a large number of employees in the waste-related facilities resided in the Bronx. The Hunts Point Water Pollution Control Plant had 67 employees from the Bronx, with 1 living in the immediate Hunts Point neighborhood. Safety-Kleen, Inc., told us that the majority of the employees in its two facilities lived in the county where the facilities were located. Over the years of the Genesee Power Station's operation, about half of the 68 employees resided in Flint or Genesee County, Michigan; however, the facility did not indicate how many employees, if any, lived in Genesee Township—the home of the power station—or the Flint community that is close to the plant. Similarly, information provided by the Texas Industries Package Plant in Austin, Texas, indicated that its 10 employees all resided in a nearby community, town, or city but did not identify the number from the community immediately surrounding the plant. And in a 1998 document submitted to EPA, Natural Resources Recovery, Inc., indicated that four of its five employees lived in the town where the plant was located. However, community representatives told us that few, if any, town residents worked at the landfill at the time of our visit.

As shown in table 2, officials from 10 of the 15 facilities said they had contributed to the communities in which they were located. Specifically, they performed volunteer work that included offering firefighting assistance and organizing cleanups in the area. They also made infrastructure improvements, such as installing a new water drainage system. In addition, some of the facilities made or were planning to make financial contributions in the communities where they were located. These financial contributions would assist schools and universities as well as community groups and other organizations. For example, the Genesee Power Station awarded eight $1,000 scholarships to high school students. In three communities, the facilities established foundations or funds to manage and disburse the financial contributions. One foundation was set up following legal action taken by community groups. In another case, the foundation was not linked to legal action. The fund resulted from collaboration among the state environmental agency, the facility, and the community that ultimately resulted in the community dropping its complaint with EPA.
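The hourly-to-annual salary equivalences cited above follow from a standard 2,080-hour work year (40 hours a week for 52 weeks), the convention the reported figures are consistent with. A quick check in Python:

# Annualizing the hourly wages reported by the facilities, assuming a
# 2,080-hour work year (40 hours x 52 weeks). The wages below are the
# figures cited in this report.
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

def annual_salary(hourly_wage: float) -> float:
    return hourly_wage * HOURS_PER_YEAR

for wage in (7.50, 10.00, 15.00, 23.00, 33.50):
    print(f"${wage:.2f}/hour is ${annual_salary(wage):,.0f}/year")
# Output: $15,600; $20,800; $31,200; $47,840; $69,680, matching the
# equivalences cited in the text.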
The facilities and community groups in these three locations provided the following information: The Kettleman City Foundation, a California nonprofit public benefit corporation, was set up after legal action was taken by the community against Chemical Waste Management. The foundation was organized to improve the quality of life of the residents of Kettleman City and nearby areas of Kings County, California, by developing capacity, leveraging additional resources, and protecting the environment and residents' health and welfare. The board of this foundation consisted of the legal representative for the Center on Race, Poverty, and the Environment; three community residents; and the manager of the Chemical Waste Management facility. Chemical Waste Management provided $115,000 to the foundation. In addition, Chemical Waste Management agreed to make further contributions annually, based on tons of municipal waste disposed at its landfill. Since 1998, Chemical Waste Management has contributed almost $300,000 to the foundation. Some of these funds are to be used to help build the Kettleman City Community Center, which plans to provide a variety of social services.

The Buttonwillow Community Foundation was established in June 1994. The directors of the foundation included representatives from local government offices, the Chamber of Commerce, and a senior citizens' group. This foundation's primary function was to provide grants to facilitate projects promoting the health, education, recreation, safety, and welfare of the Buttonwillow residents. Safety-Kleen, Inc., provided an initial $50,000 donation to the foundation. Its annual contribution to the foundation is linked to the tons of waste received at the facility, and in calendar years 2000 and 2001, these contributions exceeded $100,000.

The North Meadow Municipal Landfill worked with the community to address its concerns. Consequently, a fund called the Economic Development Account was established to support economic development for minority business enterprises, social welfare projects, relief of the poor and underprivileged, environmental education, community revitalization, amelioration of public health concerns, and other charitable purposes within Hartford. A board consisting of community group and city representatives would determine how to distribute funds from the account. At the time of our review, the facility had agreed to provide $9.7 million for the account over 10 years. In exchange for these contributions, the community group agreed to accept the landfill's expansion and withdraw the complaint to EPA. Despite these efforts, community residents often felt these contributions were inadequate.

Property values in a community are affected by many factors, including the condition of the land and houses; the proximity of the property to natural or manmade structures—such as the facilities covered by this study—that might be viewed as desirable or undesirable; and economic conditions in the surrounding or adjacent communities. Information on property values was not available for most of the communities where the facilities were located. For example, in some rural and unincorporated areas, information on property values was kept for a limited number of properties or was based on property sales, which were infrequent and had not occurred since the facilities had begun operating. Some information was available for two locations—Genesee Township, Michigan, and South Bronx, New York.
Even in these two locations, the information available was not specific enough to isolate the effect of the facility on property values because of the other factors that can affect property values, such as the location of other manufacturing or waste-related facilities in the area or economic activity in adjacent areas. The Genesee Township tax assessor provided information showing that property values in the area had not changed. In the South Bronx, property assessment data indicated that property values had increased in the Hunts Point neighborhood—the neighborhood where multiple waste management facilities were located. For this case, local officials stated that the increase occurred because of factors such as expanding economic development and the rising cost of housing in Manhattan.

In locations where property values were not available, community groups voiced concerns that the facilities would cause property values to decline. For example, residents of Alsen, Louisiana, believed that the location of nearby industrial facilities, including the facilities studied for this report, affected property values and reduced homeowners' ability to sell their homes for a reasonable price. Similar concerns were included in the complaints regarding the hazardous waste landfills in California. The Alum Crest Acres Association, Inc., a community group in Columbus, Ohio, and the Garden Valley Neighborhood Association located near the Texas Industries Austin Package Plant also expressed concern about the effect of the industrial facilities on their property values.

Six of the 15 facilities we studied said they used incentives or subsidies that were available in a particular area. Officials from these facilities also said that they chose their location because of low land costs, favorable zoning, or other factors. The incentives varied, depending on the type of facility and its location, but included tax exemptions, a local bond initiative, reductions in regulatory fees, and reduced utility rates. In Louisiana, the state granted ExxonMobil an industrial tax exemption from state, parish, and local taxes on property such as buildings, machinery, and equipment that were used as part of the manufacturing process. This exemption, which is available to any manufacturing company that builds or expands a facility within the state, is initially available for 5 years but may be renewed for an additional 5 years. According to the Louisiana Department of Economic Development, ExxonMobil's polyolefin plant had received tax exemptions worth approximately $193 million between 1990 and June 2000. Also, in 2001, ExxonMobil filed for approximately $139 million in ad valorem tax exemptions related to the polypropylene project. The Buttonwillow and Westmoreland, California, hazardous waste facilities received a low-interest bond issued by the California Pollution Control Financing Authority in the amount of $19.5 million, and the facility in Kettleman City experienced a 40-percent reduction in regulatory fees as a result of provisions granted by the state in January 1998. In the latter case, facility representatives said the provisions were intended to help keep the facility from laying off employees. In the Hunts Point community in the South Bronx, the New York Organic Fertilizer Company was eligible for discount rates from the utility company—Consolidated Edison—because of its location.
The utility company offered this incentive to any facility that located in a certain community and hired a percentage of employees from that community. Also, Tri Boro Fibers, a recycling company located in Hunts Point, received a local tax exemption that was available to all recycling facilities for trucking fees and certain purchases.

Certain EPA units provided technical comments on a draft of this report. EPA's Office of Civil Rights commented that the report needed (1) more detailed information on the number and types of jobs and on those jobs provided to the communities nearest the facilities and (2) a comparison of property values in the communities closest to the facilities to similar communities. As stated in the report, the facilities covered in this study were not required to provide information; however, most of them voluntarily provided some job-related information. Facilities were not required to provide a specified number of jobs to receive permits to locate in a given area. A property value comparison would not have been possible given the data limitations and accessibility issues that we identified. EPA generally agreed with the information about the agency and provided clarifications, which we incorporated into this report where appropriate.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of the report. At that time, we will send copies to the appropriate congressional committees and the Administrator of the Environmental Protection Agency. We will also make copies available to others on request. If you have any questions about this report, please contact Nancy Simmons, Assistant Director, or me at (202) 512-8678. Key contributors to this assignment are listed in appendix III.

The objectives of this engagement were to (1) determine the number and types of jobs provided, (2) identify contributions made by the facilities to their communities, (3) determine the facilities' effect, if any, on property values in their communities, and (4) determine the amount and type of government subsidies or incentive packages the facilities received. We did not examine the environmental impact of the facilities or the associated impact, if any, on the health of the communities in which they were located.

We selected facilities for this engagement from the Environmental Protection Agency's (EPA) complaint system. These complaints involved facilities that received environmental permits and were located in communities that felt the facilities' operations were having a disproportionate impact on them. As of February 7, 2002, the system contained 44 complaints, of which EPA had accepted 36 for further review. As agreed with the requesters, we considered only facilities covered by complaints involving issues related to the permitting process (30 of the 36 accepted complaints). We initially selected three of the complaints, which involved three locations and eight of the facilities covered in our study. We found that one of these complaints involved 26 waste-related facilities. As agreed with the requesters' staffs, we included 6 of the 26 facilities in the scope of this engagement. Subsequently, using geographic location, type of facility, and population density (urban versus rural), we selected seven additional complaints involving diverse facilities and locations.
We found that two of these complaints involved facilities that were no longer in business; consequently, we excluded them from our analysis. The remaining five complaints involved six additional locations and seven facilities. Table 3 outlines the 9 locations and 15 facilities included in our study. To determine the number of jobs provided, the contributions the facilities made to the communities, and the impact on property values, we used a structured data collection instrument to interview officials from each facility and from state or local economic development and planning organizations. We asked for information such as the number of jobs provided over time, the number of jobs filled by people in the communities nearest the facilities, the types of jobs offered, and the salaries for each job. However, we did not examine whether the jobs represented a net increase in jobs within the community. Where available, we obtained property assessment information from local tax assessment offices. We also interviewed representatives from community and environmental action groups, some of which were involved in filing complaints with EPA. We analyzed documents pertaining to jobs at the facilities, property values before and after the facilities began operating or expanding, contributions to the community, and program planning; reviewed public hearings related to the issuance of environmental permits; and reviewed economic and demographic data. In general, we did not independently verify the information provided. To determine the subsidies or tax incentives that the facilities used, we interviewed officials from the facilities and from state or local economic development and planning organizations. We also reviewed documents obtained from these officials. We conducted our work between May 2001 and May 2002 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this report from EPA officials. We also asked the representatives of some facilities with whom we consulted to review portions of the draft of this report for accuracy and clarity. Their comments are incorporated into this report as appropriate. Alsen is located along the Mississippi River near Baton Rouge, Louisiana, in an industrial corridor. Located along this corridor are facilities such as petrochemical plants that produce one-fifth of all U.S. petrochemicals, a lead smelter, a commercial hazardous waste incinerator, and landfills. Alsen is located in a rural area where the population is predominantly low income and African-American. Two of the facilities included in this report are located in Alsen—ExxonMobil and Natural Resources Recovery, Inc. The ExxonMobil facility produces both polyethylene and polypropylene (plastic) for textile, film, and automotive markets and is located in a cluster of petrochemical companies. The Louisiana Environmental Action Network and the North Baton Rouge Environmental Association filed a complaint with EPA against the Louisiana Department of Environmental Quality for issuing a permit for ExxonMobil’s expansion of an existing plant. According to officials at the facility, a $150-million expansion was initiated in 1998 and, with a capacity of 600 million pounds, will increase production to meet the growing demand for polypropylene. Natural Resources Recovery, Inc., is a construction and demolition debris landfill. The facility also recycles wood and construction material. 
As with ExxonMobil, the Louisiana Environmental Action Network and the North Baton Rouge Environmental Association filed a complaint with EPA against the Louisiana Department of Environmental Quality concerning Natural Resources Recovery, Inc.

The residential population within the Hunts Point community consisted of about 12,000 people in 2000, many of whom were renters. Community residents are largely Hispanic and African-American, and many residents are low income. The community is home to many industrial facilities, including numerous waste treatment facilities. Six of the waste treatment facilities are included in this report—Waste Management (Truxton), Waste Management (Barretto), Tri Boro Fibers, Hunts Point Water Pollution Control Plant, New York Organic Fertilizer Company, and Tristate Transfer Associates Inc. Respectively, these facilities handle carting and demolition, transfer clean fill material, recycle nonhazardous waste, treat sewage, conduct thermal drying of biosolid waste, and collect garbage. Most of these facilities have operated since the 1980s and 1990s. These and other facilities are the subject of a complaint filed with EPA by U.S. Congressman Serrano and various Hunts Point community groups against the New York State Department of Environmental Conservation and New York City Department of Sanitation concerning the issuance of permits to operate existing and proposed facilities.

The communities of Buttonwillow, Westmoreland, and Kettleman City are located in sparsely populated portions of Kern County, Imperial County, and Kings County, respectively. Residents of all three communities are predominantly Hispanic and low income. In addition, each of the communities is home to one of the three hazardous waste treatment facilities included in our study. Safety-Kleen, Inc.—the world's largest recycler of automotive and industrial fluid wastes—operates the facilities located in Buttonwillow and Westmoreland. These facilities collect, process, recycle, and dispose of a range of hazardous wastes. The Buttonwillow facility, which accepts a wide range of EPA-regulated hazardous and nonhazardous waste, has been operating since 1982. The area immediately surrounding the facility is irrigated agricultural and undeveloped land. Irrigated agriculture, oil production, and waste disposal are the predominant land uses for several miles around the facility, and the closest residence is about 3 miles away. The Westmoreland facility began operating in 1980 and also accepts a wide range of EPA-regulated hazardous and nonhazardous waste. Like the Buttonwillow facility, the Westmoreland facility processes and disposes of both hazardous and nonhazardous waste. Chemical Waste Management operates the third facility, which is located about 4 miles from Kettleman City in Kings County, California. This facility provides hazardous waste treatment, storage, and disposal services to a variety of customers—including universities, government agencies, and private industry—throughout California and the western United States. In addition, the facility has a separate landfill that handles municipal solid waste generated from two counties. The Parents for Better Living of Buttonwillow, People for Clean Air and Water of Kettleman City, and Concerned Citizens of Westmoreland filed a complaint with EPA against the California Department of Toxic Substances Control and Imperial County Air Pollution Control District regarding these three hazardous waste landfills.
Genesee Township is a suburban area located in Genesee County and is adjacent to the city of Flint, which is the fourth-largest city in Michigan. Residents near the facility are largely low income and minority. The Genesee Power Station is a wood-burning power plant located in an industrial park within the township. Using waste wood, the plant produces electricity for a power company that services about 35,000 homes in Flint and Genesee Township. The area surrounding the plant includes a cement-making plant, an asphalt plant, a fuel storage facility, and a residential community. The Saint Francis Prayer Center filed a complaint with EPA against the Michigan Department of Environmental Quality regarding the issuance of a permit for the Genesee Power Station.

Hartford is an urban area in central Connecticut. The North Meadow Municipal Landfill—one of the facilities covered in our study—has existed for over 75 years and is located in north Hartford in a community of about 35,000 people. The city of Hartford owns the landfill, which is run by the Connecticut Resource Recovery Authority. The facility is located in an area that abuts an industrial zone containing auto dealerships, the city's public works garage, a junkyard, vacant buildings, and other industrial businesses. The neighborhood near the facility is largely minority and suffers from poorly maintained and abandoned buildings. The Organized North Easterner and Clay Hill and North End, Inc., filed a complaint with EPA against the Connecticut Department of Environmental Protection regarding this landfill. However, after subsequent discussions among representatives of the community, the state environmental agency, and the facility, an agreement was reached and the complaint was withdrawn.

While Austin is considered the home of the Texas Industries Austin Package Plant, which was included in our study, the plant is located outside of the city. The plant produces packaged products that include various types of concrete, mortar, sand, cement, and asphalt mixes. It primarily sells its products to construction companies in the southwestern United States. The Garden Valley Neighborhood Association—which represents a largely minority, residential community close to the plant—filed a complaint with EPA against the Texas Natural Resources Conservation Commission regarding the concrete plant.

The Georgia Pacific facility has operated in an urban area on the south side of Columbus, Ohio, in Franklin County since 1971. The facility annually produces 110 million pounds of resin as well as 235 million pounds of formaldehyde, which is used in making plywood, particleboard, ceiling tiles, laminates, and other products. On behalf of a community near this facility that is approximately 80 percent minority, Alum Crest Acres Association, Inc., and South Side Community Action Association filed a complaint with EPA concerning the permit issued for this facility by the Ohio Environmental Protection Agency and the City of Columbus.

Staff members who made key contributions to this report were Gwenetta Blackwell-Greer, Emily Chalmers, M. Grace Haskins, Tina Kinney, Tina Morgan, and Paul Thompson.
Industrial facilities that operate under permits regulating some emissions and discharges have been the subject of complaints from community groups and environmental activists who charge that the facilities expose the surrounding communities to greater environmental risk than the general population. In response, the facilities point out that they contribute to the economic growth of the surrounding communities by employing residents and supporting other community needs, such as schools and infrastructure. In a survey of selected facilities, GAO found that the number of jobs in some decreased over time. According to facility officials, these jobs included unskilled, trade, technical, administrative, and professional positions with salaries ranging from $15,000 to $80,000 per year. Most of the facilities identified other contributions that they had made or planned to make in the local communities. These included volunteer work such as organizing cleanups; infrastructure improvements such as installing a new water drainage system; and financial assistance to schools, universities, community groups, and other organizations. Property values in a community are affected by many factors, including the condition of the land and houses, the proximity of the property to natural or man-made structures—such as the facilities covered by this study—that might be viewed as desirable or undesirable, and economic conditions in the surrounding or adjacent communities. Information on property values was unavailable for most of the communities and facilities studied. In these locations, community groups voiced concerns that the facilities would cause property values to decline. Officials at 6 of the 15 facilities GAO studied said they had used available incentives or subsidies. The incentives varied, depending on the type of facility and its location, but included tax exemptions, a local bond initiative, reductions in regulatory fees, and reduced utility rates.
Airline travel is one of the safest modes of public transportation in the United States. The current level of airline safety has been achieved, in part, because the airline industry and government regulatory agencies have implemented rigorous pilot training and evaluation programs. The major airlines have training programs for pilots that focus on, among other things, maintaining flying skills, qualifying to fly new types of aircraft, and acquiring skills in dealing with emergencies. FAA's original regulations for the airlines' general training programs—referred to in this report as part 121—spell out the number of hours of training required in particular areas, such as the time spent practicing emergency procedures. Effective in 1996, FAA instituted a requirement for CRM training under part 121 that states the following: "After March 19, 1998, no certificate holder may use a person as a flight crewmember, and after March 19, 1999, no certificate holder may use a person as a flight attendant or aircraft dispatcher unless that person has completed approved crew resource management or dispatcher resource management initial training, as applicable, with that certificate holder or with another certificate holder." FAA believes that this training should improve flight crews' performance.

As an alternative to training under these regulations, airlines may apply to participate in AQP. Eight of the 10 major airlines have applied for, and been approved to participate in, AQP. Unlike traditional part 121 training, AQP specifies the criteria for the required level of performance in certain types of maneuvers, rather than hours of training, and it integrates CRM training with technical flying skills. The airlines are expected to fully implement AQP over a period of up to 8 years. Full implementation means that the airlines have trained their pilots for each type of aircraft they fly. Training, however, occurs only after the airline has gone through three other stages: (1) getting approval to participate in the program, (2) developing a training curriculum, and (3) training instructors. Continuing crew training, the last stage, is to occur annually.

Responsibility for AQP and traditional part 121 training rests with different FAA branches. The AQP Branch within the Office of Flight Standards Services oversees AQP, and the Branch expects to transfer many of its oversight responsibilities to inspectors in the field as each airline fully implements its AQP. The administration of traditional part 121 training is divided between the Air Carrier Training Branch, which sets training requirements, and the flight standards inspectors in the field, who are responsible for overseeing the training. FAA's inspectors periodically review and approve airlines' curricula and training materials and observe training.

CRM is a "human factors" approach for improving aviation safety by preventing or managing pilots' errors. Human factors refers to a multidisciplinary effort to develop information about human capabilities and limitations and to apply this information to equipment, systems, facilities, procedures, jobs, environments, training, staffing, and personnel management for safe and effective human performance. Under this approach, pilots are trained to recognize potential mistakes in judgment or actions and to compensate for them to prevent accidents and incidents.
For example, in training for initial departure, CRM training has the captain practice briefing the crew about the actions to be taken if the takeoff must be aborted because of an emergency. CRM also teaches the crew to question orders when they believe they have information that indicates these orders are inappropriate. Similarly, CRM training teaches the crew to anticipate problems and make decisions that take these anticipated problems into account.

About 30 percent of the 169 accidents and 18 percent of the 3,901 incidents that occurred from 1983 through 1995 were caused at least in part by pilots' performance, according to our analysis of the National Transportation Safety Board's (NTSB) and FAA's data. Furthermore, the accident data indicate that nearly one-third of the accidents occurred because the pilots either did not follow, or did not correctly follow, CRM principles. The most frequently occurring accidents and incidents included collisions on the ground with objects and other airplanes, flights through turbulent weather that resulted in injuries, and deviations from flight paths that had the potential to cause an in-flight collision. On the ground, pilot performance was associated most frequently with airplanes colliding with vehicles, buildings, other equipment, or animals. This was the case for both accidents (32 percent) and incidents (34 percent). Figure 1 shows the types of accidents and incidents on the ground, including loss of control on the ground, reported from 1983 through 1995. (NTSB cited 62 events associated with pilots' performance in 169 accidents; FAA cited 446 events associated with pilots' performance in 3,901 incident reports.) In the air, pilot performance was most frequently associated with injuries to passengers and flight attendants during turbulent weather—41 percent of accidents and 12 percent of incidents. Figure 2 shows the types of accidents and incidents in the air that were reported.

In addition to the accidents and incidents discussed above, FAA maintains data separately for those occasions on which pilots failed to comply with the air traffic controller's instructions—such as not staying on the directed flight path and/or entering a runway without clearance. Of the 1,471 unauthorized maneuvers from 1987 through 1995, 80 percent occurred in the air, and most of these (73 percent) occurred when pilots did not maintain their assigned altitude levels. The unauthorized pilot maneuvers on the ground were most often (69 percent) associated with pilots' moving airplanes onto runways without authorization from the air traffic control tower. These types of incidents have the potential to cause accidents. For example, the December 1990 crash at the Detroit Metropolitan Airport occurred when an airplane taxied onto a runway being used for takeoff by another airplane and collided with that airplane. Twelve people died. The first plane had not obtained the required permission from the control tower to enter the runway. Figure 3 shows the most frequently reported unauthorized pilot maneuvers in the air and on the ground.

In our analysis of accidents, we found deficiencies in the airline pilots' use of CRM in nearly one-third of all accidents involving pilots' performance. Moreover, we found CRM deficiencies in half of the serious accidents in which there was at least one fatality.
About 46 percent of these CRM deficiencies involved a lack of coordination among members of the cockpit crew, as well as the captain’s failure to assign tasks to other crew members and to effectively supervise the crew. Generally, these CRM deficiencies illustrated the importance of effective communication. For example, in the Charlotte, North Carolina, crash in July 1994, communication among crew members did not occur, according to NTSB’s accident investigation report. NTSB believes that the captain, who was not flying the aircraft at the time and could not see the ground because of poor visibility, became disoriented and commanded the first officer, “down, push it down,” even though they were encountering windshear, which is a sudden change in wind direction. According to NTSB, the first officer should have questioned the order because the windshear was creating an unstable situation, but he did not; the plane could not recover from the sudden downward shift in direction caused by following the captain’s order. The plane crashed nose down into the ground, and 37 people died. Similarly, in a June 1984 accident in Detroit, Michigan, a lack of communication between the crew and air traffic controllers during a landing in a severe thunderstorm contributed to the accident, according to the NTSB report. The crew did not request clarification about the weather conditions or change its course of action to take these conditions into account. The winds associated with the storm forced the plane down precipitously, causing an emergency landing without the landing gear’s being fully extended. The plane skidded off the runway, causing serious damage to the aircraft and an emergency evacuation of the passengers. NTSB reported that the lack of CRM practices was a probable cause of the accident. The National Aeronautics and Space Administration reported similar results in its analysis of pilot reports submitted to its voluntary reporting system. Nearly half of the reports cited deficiencies in the pilots’ use of CRM principles; about 53 percent of the CRM deficiencies concerned coordination among crew members, assignment of tasks, and crew supervision. For AQP training, FAA has specified the process airlines need to follow to develop and implement a curriculum that integrates CRM concepts with technical flying skills, but FAA’s guidance for CRM training under part 121 does not have the same degree of specificity. As a result, inspectors overseeing training under part 121 do not have standards they can use to evaluate airlines’ CRM training curriculum and the delivery of that training. Generally, inspectors could not use the guidance provided under AQP to evaluate part 121 training for the CRM curriculum because the curricula developed under the two programs differ significantly. As a result, airlines continue to need specific guidance for CRM under part 121—both those airlines that have opted not to enter AQP as well as those that will continue to train at least some of their crews under part 121 until they have fully implemented AQP, which could take up to 8 years. Once an airline elects to participate in AQP, it must follow SFAR 58 (the AQP regulation) for developing a formal curriculum—including assessing the skills pilots need to safely operate the aircraft they fly, developing curriculum objectives for teaching those skills, having measurable criteria for evaluating whether the pilots have achieved those objectives, and developing materials to teach those objectives. FAA must approve this curriculum.
Furthermore, AQP requires all airlines to train their pilots in simulators so that they gain experience with a number of emergency situations. Finally, airlines must submit data to FAA demonstrating that their crews have mastered the skills they need to fly for those airlines. In developing its AQP curriculum, an airline is required to integrate CRM training into every aspect of its crews’ training. As a result, the pilots trained under AQP are assessed on CRM principles as well as on technical flying skills. For example, when a pilot changes the aircraft’s altitude—a technical flying skill—CRM principles dictate that this pilot inform the other pilot by verbally announcing the new altitude while continually pointing to the altitude indicator until the other pilot also points to the altitude indicator and repeats the new altitude. This procedure is used to ensure that neither pilot will fail to maintain the appropriate altitude. In contrast, FAA’s requirements for CRM training under part 121 do not require airlines to develop a curriculum for CRM training with measurable criteria or to integrate that curriculum with other aspects of part 121 training. For the CRM curriculum under part 121, FAA provides suggested training topics but does not clearly lay out how the airlines are to introduce these topics into their training programs, according to airline officials and FAA inspectors. For example, FAA recommends that airlines train crews in “workload management and situational awareness.” For this training, FAA suggests such topics as “preparation/planning/vigilance” and “workload distribution/distraction avoidance.” However, for those airlines that choose to integrate these topics with technical flying skills, FAA does not explain how the airlines are to do so. The lack of specificity in FAA’s guidance for the development of a CRM curriculum under part 121 contrasts with the detailed guidance FAA provides for the development of a curriculum on technical flying skills. For example, FAA’s guidance on how pilots are to respond to windshear under part 121 directs them in a number of technical flying skills, such as how to handle the rudder, but it is silent on how to employ CRM principles in this situation. In contrast, under AQP, FAA’s guidance instructs the airlines to specify not only the technical skills but also the CRM principles that must be applied in a windshear situation. Because FAA’s guidance on CRM training under part 121 is less specific, airlines vary in how they deliver their CRM training. While all the airlines provide classroom training in CRM principles under part 121 training, they may not integrate this training with technical flying skills. For example, airlines may (1) train pilots in technical flying skills in flight simulators without integrating CRM principles or (2) integrate CRM principles with technical flying skills in flight simulators. Generally, we found that CRM training had been integrated with technical flight training to a higher degree at those airlines that were in later phases of AQP implementation. In developing AQP, FAA incorporated procedures for evaluating CRM training and developed a process for ensuring that FAA inspectors would have the criteria they need to conduct the evaluations for pilots’ training on different types of aircraft. Specifically, AQP provides a systematic way of identifying the tasks and subtasks involved in a particular phase of flight. 
Therefore, an inspector observing the training program can determine whether CRM principles are being invoked in a given flight situation. For example, when a crew is preparing for landing, AQP specifies that the first officer, if unsure of the planned course of action in the event of a missed approach, is to ask the captain to clarify the plan so that both have a full understanding of the actions they will take. Similarly, if a flight has to be diverted from one airport to another, the captain is to direct the first officer to (1) get out the maps for the alternate airport, (2) notify the flight attendants, and (3) make the announcement to the passengers. This delegation of tasks allows the captain time to handle radio contact with the airline’s dispatchers and air traffic controllers, obtain weather updates at the alternate airport, and fly the plane. In the early stages of AQP implementation, the AQP Branch is evaluating airlines’ training. FAA will transfer this responsibility to inspectors in the field as airlines fully implement AQP. Field inspectors will be trained in evaluating the CRM training as an integral part of their evaluation of AQP training. The inspectors at those airlines that had progressed beyond the initial phases of AQP noted that they had received AQP training at the airlines for which they were responsible. Moreover, all of the inspectors we spoke with maintained that while certain facets of AQP were fixed, some parts were still evolving. As a result of the program’s flexibility and evolution, the inspectors pointed out that it was not possible to structure a training program for them that could cover every aspect of AQP at every airline. Despite this fluidity, these inspectors said that the AQP Branch Office made sure that the program’s standards were maintained across airlines. While the evaluation of the delivery of CRM training is incorporated into the oversight process for AQP training, it is not under traditional part 121 training. Moreover, FAA has not provided its inspectors with any specific guidance or training for evaluating airlines’ CRM training under part 121. Although FAA inspectors may obtain some CRM training from a 3-hour computerized interactive course, this lack of guidance for evaluating CRM training under part 121 is troublesome to the inspectors we spoke with because of what they view as an inherent conflict between performance expectations for individuals under part 121 and crew performance expectations articulated in CRM principles. Under part 121, pilots are to master technical flying skills and perform these skills without reliance on any other crew member. In contrast, CRM principles and training teach pilots how to use to maximum effect the abilities and experience of other crew members, as well as their own technical flying skills. Without formal FAA instructions, inspectors have developed their own approaches to this evaluation. For example, one inspector said that he based his approval on his belief that the airline for which he was responsible “had a good safety record” and “would probably establish a good program.” Another inspector said that in approving any training program, he sought guidance first from any applicable federal aviation regulation; the Inspector’s Handbook, applicable advisory circulars; and, finally, any other FAA publication, such as the Introduction to CRM Training. However, this inspector added that these sources did not provide the criteria he needed to evaluate CRM training. 
As a result, he looked for behaviors such as crew members’ “working together” to resolve problems, “catching errors,” or “dealing with the consequences resulting from uncaught errors.” These ad hoc approaches to evaluating the delivery of CRM training are not satisfactory to FAA officials at headquarters or to officials of at least one airline. FAA officials told us that the agency needed additional CRM training for its inspectors conducting reviews under part 121. In addition, officials at one airline told us that the lack of specific guidance and training for FAA inspectors responsible for evaluating CRM training under part 121 has hampered FAA’s ability to review CRM programs. Furthermore, the problems FAA inspectors face in evaluating CRM training under part 121 will continue indefinitely in the absence of clearer guidance from FAA for those airlines that have decided not to enter AQP and for those airlines in the program that have not fully implemented it. Because AQP is implemented by the type of aircraft the crew flies, even the airlines that have been accepted for AQP will continue to provide some CRM training under part 121. For the eight airlines implementing AQP, we estimate that only about one-third of their crews have begun to receive AQP training. Therefore, most crews are still receiving traditional training under part 121, and some will continue to do so for up to 8 years. As of September 1997, the airlines’ estimated dates for completing the transition to AQP training ranged between 2000 and 2005. (See table 1.) For the flying public, safety is the paramount issue, and FAA and the airlines have worked to provide rigorous training programs for pilots. Crew resource management, which focuses on making the best use of all available experience and skills in the cockpit, is increasingly seen as an important component of safe flights. FAA recognized the importance of crew resource management by requiring all airlines to include training in these principles and by incorporating crew resource management into its Advanced Qualification Program. Pilots’ performance is not the only factor in airline accidents, but it is an important one. We identified pilots’ performance as the cause of about one-third of all the accidents and nearly one-fifth of the incidents for the 10 major airlines from 1983 through 1995. Training for safer performance by pilots that teaches crew resource management can occur under either the Advanced Qualification Program or part 121. However, while FAA’s guidance for the implementation of the Advanced Qualification Program specifies a process for curriculum development that integrates this training with training in technical flying skills, FAA’s guidance for curriculum development under part 121 is ambiguous and does not provide standards that inspectors can use to evaluate and approve airlines’ training in crew resource management. As a result, FAA cannot be assured that airlines are developing a curriculum for teaching crew resource management that will effectively teach pilots how to best use all the skills and experience available to them in the cockpit.
Furthermore, without specificity in the development of training for crew resource management under part 121 and without any guidance on how to evaluate this training under part 121, FAA inspectors are relying on their own experience in observing pilots or even on the belief that the airline “would probably establish a good program.” These problems are especially troublesome because pilots who have not completed FAA-approved crew resource management training by March 1998 may not fly for airlines. To help ensure that airlines appropriately train pilots in CRM principles under part 121 and that FAA inspectors are able to uniformly evaluate this CRM training, we recommend that the Secretary of Transportation direct the Administrator of FAA to develop a process that airlines must follow for creating a CRM curriculum, with measurable criteria, under part 121 as it has for the Advanced Qualification Program. We provided a draft of this report to FAA for review and comment. We met with the Deputy Associate Administrator for Regulation and Certification, the Deputy Director of Flight Standards Services, the Managers for the Air Carrier Training Branch and the Advanced Qualification Program, and other officials. FAA commended our review of CRM training at the nation’s airlines. FAA accepted the report’s recommendation in part. FAA agreed that it should ensure that pilots are appropriately trained and noted that CRM training can provide desirable consequences in aviation safety. It further agreed that uniform evaluation of CRM training using measurable criteria is a commendable objective. However, FAA stated that science has not yet developed valid, reliable criteria for measuring CRM performance. FAA also agreed that more can be done to develop a process that airlines and inspectors can follow to create a CRM curriculum. FAA indicated that better guidance would be provided in a number of ways, such as updating Advisory Circular 120-51, Crew Resource Management Training, and the supplemental guidance for inspectors in the inspectors’ handbook, and holding regional meetings with CRM specialists from Flight Standards Services and other organizations. We concur with FAA that CRM training for pilots could improve aviation safety. However, we believe that before the contribution of CRM training to aviation safety can be measured, it is necessary to determine the extent to which the delivery of CRM training for pilots has occurred. We further concur with FAA that more should be done to develop processes for airlines and inspectors to follow in creating a CRM curriculum. We believe that until FAA establishes a process for CRM curriculum development that includes an assessment of the extent to which pilots have mastered that curriculum, it will not be possible to measure CRM’s performance in contributing to aviation safety. To determine the extent to which inadequate performance by pilots was a problem for the 10 major U.S. airlines, we examined the types and frequency of safety-threatening events—incidents and accidents—from 1983 through 1995. To determine the adequacy of FAA’s guidance for and oversight of pilots’ training, we reviewed FAA’s role in the airlines’ implementation of CRM. We focused primarily on CRM training because FAA has described the failure to apply CRM principles as a more important contributing factor in accidents than technical flying skills.
We also compared FAA’s rules and regulations and other guidance for CRM training with that provided for other training programs, as well as interviewed FAA and airline officials. A detailed discussion of our methodology is presented in appendix I. Related GAO products are listed at the end of this report. Our work was performed from October 1996 through October 1997 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will provide copies of the report to the Secretary of Transportation, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-3650 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix II. To identify the types and frequencies of accidents and incidents—safety-threatening events—related to pilot performance, we reviewed accident and incident data, including pilot deviations, contained in the National Transportation Safety Board’s (NTSB) and the Federal Aviation Administration’s (FAA) electronic databases. We obtained these data from FAA’s National Aviation Safety Data Analysis Center. We limited our review to the reported events in accident and incident data sources involving the 10 major U.S. passenger airlines from 1983 through 1995. We did not independently verify these data. To facilitate the comparison of accidents with incidents in our analysis of the types and frequencies of safety-threatening events, we made two adjustments to the data. First, because of differences in the way information is recorded in these databases, we matched the similar categories contained in both databases and used these categories in our analysis. For example, both NTSB’s and FAA’s databases contain the category “on ground collision with object,” which means an airplane struck an object, such as a vehicle or structure, while moving on the ground. Second, because the occurrences of events in accidents closely conform to those in incidents, we used the events that occurred in each of the 169 accidents as our unit of analysis. In our analysis of crew resource management (CRM) deficiencies, we used the accident as the unit of analysis because NTSB’s findings of CRM deficiencies were by accident and not by the individual events that occurred within accidents. To characterize the prevalence of pilot performance as a factor in safety-threatening events over time and between airlines, we examined FAA’s incident and pilot deviation databases. We used these two databases because they are the only such sources with adequate numbers of observations to make such comparisons. To determine the extent to which the inadequate use of CRM by pilots contributed to accidents and incidents, we performed a content analysis of the textual information found in the factual reports, briefs, and final reports of the 169 accidents investigated by NTSB from 1983 through 1995. We then classified CRM deficiencies according to the classification framework presented at a National Aeronautics and Space Administration (NASA)/Ames workshop in 1980.
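As a sketch of the category-matching adjustment described above, the snippet below maps database-specific event labels onto a shared category before any cross-database comparison; both label sets here are invented stand-ins, not the actual NTSB or FAA category names.

```python
# Hypothetical label mappings; the real NTSB and FAA category names differ.
NTSB_TO_COMMON = {"on ground collision with object": "ground collision with object"}
FAA_TO_COMMON = {"collision with object (ground)": "ground collision with object"}

def to_common(label, mapping):
    """Translate a database-specific label into the shared category set;
    labels with no counterpart in the other database return None and are
    excluded from the comparison."""
    return mapping.get(label)

print(to_common("on ground collision with object", NTSB_TO_COMMON))
```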
This framework groups CRM issues into five broad clusters: (1) Resource management—the application of specialized cognitive skills to effectively and efficiently utilize available resources, such as the ability to plan, organize, and communicate. (2) Organization processes—crew members’ actions and behaviors in the context of their assigned duties and expected responsibilities. (3) Personal factors—the knowledge, skills, abilities, and limitations that individual crew members bring with them to the cockpit. (4) Material resources internal to the aircraft—the cockpit crew’s appropriate, effective, and efficient use of instructional items, such as checklists, and navigational charts and equipment, such as on-board weather radar, navigational controls, and engine fire extinguisher. (5) Resources external to the aircraft—those people (air traffic controllers), entities (airports), and circumstances (emerging poor weather) that may affect pilots’ plans, decisions, and actions. Table I.1 shows the classification framework used to categorize CRM issues. To verify the results of our content analysis, we requested a similar analysis by NASA’s Aviation Safety Reporting System (ASRS) staff of voluntarily submitted pilot reports contained in the ASRS database. According to the aviation experts we consulted, ASRS incident reports provide the best source of information on deficiencies in CRM. Furthermore, because ASRS staff are most familiar with the data and have expertise in analyzing this free-form data, we concluded that it was more appropriate for them to perform this analysis. To evaluate the adequacy of FAA’s oversight of airline pilot training, we obtained FAA’s training policies, requirements, guidance, and handbooks relevant to CRM training. We discussed training programs, including CRM, and training procedures with appropriate FAA officials, including officials in the Office of System Safety, the Office of Regulation and Certification’s Flight Standards Services, the Advanced Qualification Program Branch, the Office of Accident Investigation, and the Human Factors Division. In addition, we discussed airline training evaluation and approval processes and obtained training documents from FAA inspectors responsible for monitoring airline training. Finally, we contacted safety directors and trainers at the major airlines and obtained documents on their policies, procedures, research, and training curricula. We requested comments from recognized experts in the field of human factors in academia and the aviation industry, pilots, and government officials from FAA, NTSB, and NASA. We incorporated their comments where appropriate and made adjustments to our methodology as warranted. Aviation Safety: New Airlines Illustrate Long-Standing Problems in FAA’s Inspection Program (GAO/RCED-97-2, Oct. 17, 1996). Human Factors: Status of Efforts to Integrate Research on Human Factors Into FAA’s Activities (GAO/RCED-96-151, June 27, 1996). Military Aircraft Safety: Significant Improvements Since 1975 (GAO/NSIAD-96-69BR, Feb. 1, 1996). Aviation Safety: Data Problems Threaten FAA Strides on Safety Analysis System (GAO/AIMD-95-27, Feb. 8, 1995). Aviation Safety: Unresolved Issues Involving U.S.-Registered Aircraft (GAO/RCED-93-135, June 18, 1993). Aviation Safety: Changes Needed in FAA’s Service Difficulty Reporting Program (GAO/RCED-91-24, Mar. 21, 1991).
Pursuant to a congressional request, GAO examined the role of airline pilots' performance in accidents and the Federal Aviation Administration's (FAA) efforts to address any inadequate performance, focusing on the: (1) types and frequency of accidents in which an airline pilot's performance was cited as a contributing factor, including those in which failure to use crew resource management (CRM) principles was identified; and (2) adequacy of FAA's guidance for and oversight of the airlines' implementation of pilots' training for CRM. GAO noted that: (1) of the 169 accidents that involved the major airlines and that were investigated and reported on in detail by the National Transportation Safety Board (NTSB) from 1983 through 1995, about 30 percent were caused in part by the pilots' performance; (2) in at least one-third of these accidents, GAO determined that the pilots did not correctly use CRM principles; (3) for example, according to NTSB, just before the 1994 crash in Charlotte, North Carolina, which killed 37 people, the aircraft had encountered a sudden change in wind direction and the captain gave an incorrect order to the first officer, who did not question the order, as CRM principles would require; (4) during the same period, of the nearly 4,000 incidents, GAO found that about one-fifth were caused in part by the pilots' performance; (5) FAA's guidance for and oversight of training in CRM does not ensure the adequacy of this training under part 121 of the federal aviation regulations, while they do under the new Advanced Qualification Program (AQP); (6) FAA's guidance for the implementation of AQP specifies a process for curriculum development that the airlines must follow in order to integrate CRM training with technical flying skills; (7) FAA inspectors overseeing this training assess the curriculum to see if FAA's process has been followed, enabling them to determine whether the pilots' training under this curriculum is adequate; (8) although FAA requires airlines to teach CRM in their traditional part 121 training, the guidance it provides on how to develop the curriculum for this training is ambiguous and does not provide standards that inspectors can use to evaluate airlines' CRM training; (9) because AQP training generally differs from traditional part 121 training in how it develops a curriculum for training CRM, the guidance for this training in AQP may not be applicable to CRM training under part 121; (10) FAA needs to develop guidance for teaching CRM under traditional part 121 training; and (11) although 8 of the 10 major airlines plan to train all their pilots under AQP, the need for guidance on CRM training under part 121 remains—both for those airlines that have opted not to enter AQP as well as for those that participate in the program but will nonetheless continue to have some of their pilots trained under part 121 for up to 8 years as they make the transition to AQP.
Over the past two decades, extensive research and development have led to new prescription drug therapies and improvements over existing therapies, and the number of prescription drugs on the market has increased dramatically. Some of these therapies can at times replace other health care interventions, and as a result, the importance of prescription drugs as part of health care has grown. Consequently, Americans are using a greater number of pharmaceuticals than ever before. According to the National Institute for Health Care Management, pharmacists dispensed 3.1 billion prescriptions in the United States in 2001, up from 1.9 billion in 1992 and 2.4 billion in 1997. In addition to ensuring that new drugs and biologics are safe and effective and that applications for their approval are reviewed in a timely manner, FDA is also responsible for monitoring drugs and biologics for continued safety after they are in use. Within FDA, CDER and CBER are responsible for reviewing applications for new drugs and biologics, respectively. The centers also are responsible for reviewing efficacy supplements, manufacturing supplements, labeling supplements, and investigational new drug applications. Efficacy supplements are applications for new or expanded uses of already approved products, including addition of a new indication, a change in the dosing regimen such as increase or decrease in daily dosage, or a change in the patient population. Manufacturing supplements to new drug applications are used to notify the centers in advance of certain drug manufacturing changes. Investigational new drug applications are submitted for new drugs or new indications for already approved drugs that are to be used in clinical investigations. The review process for both centers requires evaluating scientific and clinical data submitted by manufacturers to determine whether the products meet the agency’s standards for approval. The first decision a center must make in its review process is whether to accept a new drug application (NDA) or biologics license application (BLA). FDA can issue one of several action letters. If the application is not sufficiently complete to allow a substantive review, the center issues a “refuse-to-file” letter. Once the center has accepted the application, it designates the product as either “priority,” for products that would provide significant therapeutic gains compared to any existing products on the market, or “standard,” for products that would provide no significant therapeutic advantage over other drugs already on the market. After a thorough assessment of the information in the application and any supplemental information requested, the center decides whether to approve the drug based on the product’s intended use, effectiveness, and the risks and benefits for the intended population. All medical products are associated with some level of risk, and a product is considered safe if its risks are determined to be reasonable given the magnitude of the benefit expected. For decisions on drugs, CDER may approve the product for marketing (in an “approval letter”) or it may indicate (in an “approvable letter”) that it can approve the drug if the sponsor resolves certain issues. Alternatively, it may issue a “nonapprovable letter” that specifies the issues that make the application ineligible for FDA approval.
The review process is similar for biologics; however, CBER issues a “complete response letter” that specifies all outstanding issues that would need to be addressed by the sponsor to be considered for FDA approval. The review process may consist of more than one review cycle. The first review cycle begins when an NDA or a BLA is initially submitted to FDA, and it ends when FDA has completely reviewed the application and issued some form of an action letter. If the application is approved in the first cycle, the “approval time” is recorded as the length of that cycle. The next cycle of review, if necessary, begins when the application is resubmitted to FDA. If the review process takes two or more cycles to reach approval, the length of the approval time is recorded as the total of the length of the review cycles plus any subsequent time during which a sponsor is addressing the issues raised by FDA. Under PDUFA, companies pay three types of user fees to FDA—application fees, establishment fees, and product fees. In most cases, a company seeking to market a new drug or biologic in the United States must pay an application fee to support the agency’s review process. Generally, companies also pay an annual establishment fee for each facility in which their products subject to PDUFA are manufactured and an annual product fee for marketed drugs for which no generic versions are available. FDA is expected to use funds received under PDUFA to meet certain performance goals. Under the framework established by PDUFA, FDA works with various stakeholders, including representatives from consumer, patient, and health provider groups and the pharmaceutical and biotechnology industries, to develop performance goals. The Secretary of Health and Human Services (HHS) then transmits these goals in a letter to the Congress. Under PDUFA I, the performance goals applied to length of review time; the performance goals in PDUFA II further shortened the review time and added new performance goals associated with reviewer responsibilities for interacting with the manufacturer, or sponsor, during drug development. For example, PDUFA II required FDA to schedule meetings and respond to various manufacturer requests within specified time frames. To collect and spend user fees under PDUFA I, each year FDA had to spend from its annual appropriation for salaries and expenses at least as much, adjusted for inflation, on the human drug and biologic review process as it had spent on this process in fiscal year 1992. Under PDUFA II, each year FDA has to spend at least as much, adjusted for inflation, as it did in fiscal year 1997. The user fees collected under PDUFA cover only those CDER or CBER activities that are included in the human drug review process. The fees do not fund other CDER or CBER activities and do not fund the programs of the other FDA centers, that is, the Center for Food Safety and Applied Nutrition, Center for Veterinary Medicine, Center for Devices and Radiological Health, and National Center for Toxicological Research. FDA designates the programs of these centers as non-PDUFA programs or other activities. PDUFA has provided FDA with additional resources that have helped the agency make new drugs available to the U.S. health system more quickly, but biologic approval times have varied. FDA has used PDUFA funds to increase by about 77 percent the number of medical and scientific reviewers who assess applications for new products.
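To illustrate the approval-time bookkeeping described earlier in this section, where a multicycle approval is recorded as the total of the review cycles plus the sponsor's response time in between, here is a minimal sketch with invented dates:

```python
from datetime import date

# Hypothetical two-cycle review. Approval time runs from the initial
# submission to the final approval, so it spans both FDA review cycles
# and the sponsor's time spent addressing FDA's action letter in between.
first_submission = date(1999, 1, 4)
final_approval = date(2000, 6, 12)

approval_days = (final_approval - first_submission).days
print(f"Approval time: {approval_days} days (about {approval_days / 30.4:.0f} months)")
```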
FDA’s median approval time for standard drugs decreased from about 27 months in 1993 to about 14 months in 2001. However, in recent years, median approval times for standard NMEs have increased. In contrast, median approval times for biologic applications have fluctuated since 1993, ranging from a low of 12 months to a high of about 32 months. In all but 2 years since 1993, approval times for biologics have been longer than for drugs. For example, in 2001, the median approval time for biologics was about 22 months, while median approval times for priority and standard drugs were about 6 months and 14 months, respectively. The fluctuation in BLA approval time is due, in part, to the small number of submissions each year. Since the implementation of the PDUFA program, user fees have grown steadily and represent an increasing share of FDA’s funds for the review of new drug and biologic applications. From fiscal year 1993 through fiscal year 2001, FDA obligated $825 million from user fees for the drug and biologic review processes, in addition to $1.3 billion from its annual appropriation for salaries and expenses (see fig. 1). While user fees funded 7 percent of drug and biologic review obligations in fiscal year 1993, user fees accounted for nearly 50 percent of the total funds obligated for the drug and biologic review processes in fiscal year 2001. In fiscal year 2002, FDA expects to obligate about $170 million in user fees, or 51 percent of the $332 million that FDA expects to spend on its drug and biologic review processes. From fiscal year 1993 to fiscal year 2001, user fees allowed FDA to increase the personnel assigned to review new drug and biologic applications from about 1,300 to about 2,300 full-time equivalents (FTE), an increase of about 77 percent. Despite the growth of user fees, user fee revenues under PDUFA II fell short of FDA’s estimates, while reviewer workload increased. FDA’s estimate of how much the agency would receive from user fees fell short because FDA received fewer submissions than expected. From fiscal year 1998 through fiscal year 2002, FDA collected about $57 million less in user fees than it initially estimated. At the same time, the workload of FDA reviewers increased under PDUFA II. As a result, during the last 2 years of PDUFA II, FDA had to spend unobligated user fees that had been carried over from previous years to maintain its reviewer workforce. Under PDUFA III, FDA will be better able to ensure the stability of user fee revenues. Overall, the median approval time for new drugs has dropped since the implementation of PDUFA. From 1993 to 2001, the median approval time for standard new drug applications dropped from about 27 months to about 14 months (see fig. 2). During the same period, the median approval time for priority new drugs also dropped, from about 21 months to about 6 months. Since 1995, approval times for priority new drugs have been relatively constant. While, in general, approval times for new drugs have dropped significantly, the median approval time for standard NMEs, a subset of standard drugs, has increased in recent years. The approval time for standard NMEs reached a low of about 13 months in 1998 before rising to about 20 months in 2000 and 2001. The median approval time for priority NMEs has remained stable at about 6 months since 1997.
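The medians reported here are computed per cohort of approvals, grouped by review designation. A minimal sketch of that calculation, with invented approval times rather than the actual FDA figures:

```python
import statistics

# Invented approval times in months for one approval-year cohort,
# grouped by review designation.
approval_months = {
    "standard": [14, 12, 18, 15, 13],
    "priority": [6, 5, 7, 6],
}

for designation, months in approval_months.items():
    print(f"{designation}: median {statistics.median(months)} months")
```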
The median approval time for a biologic application has varied considerably post-PDUFA, although the small number of biologic applications approved in any given year may affect the variation in approval time. The median approval time increased from about 15 months in 1993 to a high of about 32 months in 1995. After dropping to a low of 12 months in 1997, it rose again and was about 22 months in 2001 (see fig. 3). In all but 2 years since 1993, approval times for biologics have been longer than for drugs. Although there has been an overall decrease in the approval times for standard drug applications since the implementation of PDUFA, FDA approval times for standard NME applications (a subset of standard drugs) and biologic applications have increased recently. According to FDA, approval times for these two types of applications went up in 2000 because many of them had to go through several review cycles before they were approved. Multiple review cycles have occurred for several reasons. For example, after its initial review of an application, FDA may ask the sponsor to provide new information, such as new clinical trials or data analyses, to address deficiencies in the initial application. Once the sponsor provides the requested information, FDA undertakes another review cycle to examine the information. Also, if FDA completes its assessment late in the review cycle, it can be difficult to resolve issues with the sponsor before the review decision deadline. In these cases, FDA may issue an approvable letter that advises the sponsor that the application will be approved if certain issues are resolved. Issuing an approvable letter enables FDA to meet its performance goals without making a final decision on the application. It also results in the application going through another review cycle. Both FDA and the pharmaceutical/biotechnology industry have acknowledged that to allow FDA to meet PDUFA review goals, drug and biologic applications are going through more review cycles. While the industry’s goal is to obtain approval of an application, FDA can meet the PDUFA goal by completing its review and issuing an action letter. Our analysis of approvals confirms that an increased proportion of applications are going through several review cycles. A smaller percentage of drugs was approved in the first review cycle in 2001 than in previous years (see fig. 4). For example, in 1998, 54 percent of standard new drugs and biologic applications were approved in the first review cycle. In 2001, 37 percent of standard new drugs and biologic applications were approved in the first review cycle. In response to industry’s concerns, FDA and the pharmaceutical/biotechnology industry have agreed that the agency will notify an applicant of deficiencies identified within a specified time frame after an application is filed with FDA. While an application may be sufficiently complete for FDA to do a substantive review, the purpose of FDA’s communication is to alert a company early to deficiencies in its application that will prevent FDA approval so that it can start addressing them. Additional factors may affect approval times for biologic products. A CBER official stated that the complexity of cutting-edge technology involved in developing and manufacturing biologics, such as gene therapy and bioengineering, may increase approval time. 
In addition, an FDA official told us that some biotechnology companies have had difficulties demonstrating their ability to consistently manufacture products comparable to those used in their human studies, while others have filed applications with significant clinical and safety issues that had to be resolved. According to a CBER official, the center plans to issue more refuse-to-file letters in such situations at the start of the review cycle to obtain better-quality applications. CBER officials believe that initiating a review of an application that is substantially incomplete, for example, because it omits critical data, or one that raises significant issues is inherently inefficient and extends review time. A refuse-to-file letter alerts a company to corrective actions that need to be taken so that the FDA review of an application proceeds more promptly and efficiently. As part of its performance goals established for PDUFA III, FDA agreed to select and hire an outside consultant in fiscal year 2003 to conduct a comprehensive review and analysis of the drug and biologic review process and make recommendations for improvements. User fees will pay for this review and analysis. FDA anticipates delivery of a report of the consultant’s findings and recommendations in fiscal year 2005. The agency would then consider these recommendations in planning any changes to enhance its performance. While PDUFA has increased the funds available for FDA’s drug and biologic review activities, funds for FDA’s other activities have constituted a smaller portion of FDA’s total budget since implementation of PDUFA. According to FDA officials, two factors may have contributed to the reduced share of FDA funds allocated to other activities. First, PDUFA requires that each year FDA spend increasing amounts from its annual appropriation on the drug and biologic review process in order to collect and spend user fee revenues. According to agency officials, FDA had difficulty determining the amount spent until the end of the year. As a result, FDA spent more than was required. Second, FDA officials said that during fiscal years 1994 through 2001, the agency did not receive sufficient increases in its annual appropriation for salaries and expenses to cover annual pay increases for all employees. To ensure that the agency could meet the spending baseline for the drug review program and fund the pay raises, FDA officials reduced available resources for other activities, such as reviewing over-the-counter and generic products and inspecting medical product manufacturing facilities. Since the enactment of PDUFA, the share of FDA funding and the resources available for other activities have decreased. While spending on FDA’s other activities rose from about $606 million in fiscal year 1992 to about $782 million in fiscal year 2000, the percentage of FDA funds spent on other activities declined from about 83 percent of FDA’s budget in fiscal year 1992 to about 71 percent in fiscal year 2000 (see fig. 5). During the same period, FDA resources allocated to other activities declined from 7,736 FTEs in fiscal year 1992 to 6,571 FTEs in fiscal year 2000, or a decline from about 86 percent of FDA’s FTE resources in fiscal year 1992 to about 74 percent in fiscal year 2000 (see fig. 6). During the same period, the number of FTEs allocated to drug and biologic review activities rose from 1,277 FTEs in fiscal year 1992 to 2,346 FTEs in fiscal year 2000—an increase from 14 to 26 percent of FDA’s total FTEs. 
According to agency officials, the requirement that FDA must annually increase by an inflation factor the amount it spends on the drug and biologic review processes from its appropriation for salaries and expenses reduces the funds available for other FDA programs. Under PDUFA, if FDA’s spending from its appropriation on drug and biologic review activities falls below the statutory minimum, it cannot collect and spend user fees to review drug and biologic applications. FDA would then have to initiate a reduction-in-force because the agency would not have sufficient funds to pay the salaries of the reviewers. FDA officials stated that it is difficult to determine exactly how much the agency has spent from its appropriation until the end of the fiscal year when a final accounting is completed. Therefore, the agency spends more on drug and biologic review activities than the statutory minimum to ensure that it spends enough to continue the user fee program. In 7 of the 9 years since PDUFA was enacted, FDA has exceeded the spending baseline by 3 to 10 percent (see table 1). In 1996 and 1997, the overspending was higher, 23 and 18 percent, respectively. According to an FDA official, the higher overspending occurred in those years because the agency was particularly focused on meeting the goals established by PDUFA I and spent additional funds to ensure that it met PDUFA’s performance goals. To the extent that FDA spends more than the minimum amount of its appropriation on drug and biologic review activities under PDUFA, it has less to spend on other activities. As part of PDUFA III, the Congress revised the minimum spending requirement to lessen the potential for the agency to spend more than necessary from its appropriation each year on drug and biologic review activities. Specifically, FDA will be allowed to spend up to 5 percent less than the amount required by law provided that user fee collections in a subsequent year are reduced by the amount in excess of 3 percent that was underspent. According to FDA officials, the agency reduced staffing levels in other centers to cover the costs of unfunded pay raises. From fiscal years 1994 through 2001, FDA paid about $250 million to cover mandatory federal pay raises for which it did not receive increases in its appropriations. FDA officials told us that this situation reduced the agency’s ability to support activities not funded by PDUFA. FDA reduced the staffing levels for non-PDUFA activities each year, leaving the agency with fewer resources to perform its other responsibilities. For example, in its budget justification for fiscal year 2002, FDA reported that inspection of medical device manufacturers has decreased and the agency does not routinely inspect the manufacturers of lower-risk products. Although total FDA staffing in fiscal year 2001 was about the same as in fiscal year 1992, about 1,000 more FTEs were allotted to drug and biologic review activities in fiscal year 2001 and about 1,000 fewer FTEs were allotted to other FDA programs that ensure food safety, approve new medical devices such as heart valves and pacemakers, and monitor devices once on the market. Although FDA received a number of funding increases during this period, FDA officials told us that in general those funds could not be used for across-the-board pay increases because almost all funding increases received since 1992 were earmarked for designated programs.
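On one reading of the revised PDUFA III spending requirement described above, FDA may spend up to 5 percent below the statutory minimum, with any shortfall beyond 3 percent deducted from a subsequent year's user fee collections. The arithmetic of that reading, as a hedged sketch rather than a statement of the statute's exact mechanics:

```python
def collection_reduction(required, spent):
    """Reduction to a subsequent year's user fee collections, on one reading
    of the PDUFA III provision: a shortfall of up to 3 percent of the
    required spending is tolerated; the portion between 3 and 5 percent is
    deducted from later collections."""
    shortfall = required - spent
    assert shortfall <= 0.05 * required, "spending may not fall more than 5 percent short"
    return max(0.0, shortfall - 0.03 * required)

# Example: a 4.5 percent underspend on a $100 million requirement would
# reduce a subsequent year's collections by $1.5 million.
print(collection_reduction(100_000_000, 95_500_000))
```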
FDA officials said that some of the funding increases were for programs related to tobacco, food safety, Internet drug sales, orphan product grants, and dietary supplements. According to FDA, $45.2 million was available to cover pay increases for the agency’s employees in its fiscal year 2002 appropriation. In addition, the President’s budget for fiscal year 2003 includes $28.6 million for pay increases. FDA officials told us that the performance goals added by PDUFA II, combined with PDUFA II’s shortened review timelines, have contributed to a heavy workload for FDA’s reviewers, which has resulted in high turnover and reviewers forgoing training and professional development activities. Our review of FDA data and a recent report by KPMG Consulting found that FDA’s workload under PDUFA has increased. Moreover, our analysis of FDA and OPM data found that FDA’s attrition rates for many of the occupations that are involved in its drug review process are higher than those for other federal public health agencies and the federal government as a whole. In addition, KPMG’s report found that FDA reviewers were not receiving the amount of training FDA considers necessary. According to FDA officials, the agency needs significant and sustained increases in funding to hire, train, and retain its review staff in order to continue meeting PDUFA performance goals, provide quality scientific and regulatory advice to the industry, and avoid further deterioration in retention rates. PDUFA II affected reviewer workload by shortening review times and adding new performance goals to reduce overall drug development time—the time needed to take a drug from clinical testing to submission of a new drug or biologic application. As part of the performance goals established for PDUFA II and transmitted to the Congress, FDA agreed, for example, to complete review of 90 percent of standard new drug applications and efficacy supplements filed in fiscal year 2002 within 10 months—a decrease from the 12-month goal set in PDUFA I for fiscal year 1997. In addition, FDA agreed to complete review of 90 percent of manufacturing supplements within 4 months—a decrease from the 6-month goal in PDUFA I. PDUFA II also established a new set of performance goals intended to improve FDA’s responsiveness to and communication with drug sponsors during the early years of drug development. Specifically, FDA agreed to review a sponsor’s request for a formal meeting and provide written notification to the sponsor of its decision within 14 days; schedule major meetings at critical milestones during drug development within 60 days of request, and all other meetings within 75 days of request; prepare meeting minutes within 30 calendar days of a meeting; respond to a sponsor’s request for evaluation of special protocol designs within 45 days; respond to a sponsor’s complete response to a clinical hold within 30 days; and respond to a sponsor’s appeal of a decision within 30 days. In general, the number of FDA review activities increased in fiscal years 1999 through 2001 because of the performance goals added under PDUFA II (see table 2). Specifically, the increases occurred in the activities related to the requirement that FDA work with drug sponsors in the early phases of drug development. Meeting requests, meetings, and meeting minutes constituted a growing portion of FDA review activities.
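The PDUFA II response-time goals described above amount to a schedule of deadlines in calendar days. A minimal sketch of a compliance check against that schedule, with invented shorthand names for the activities:

```python
from datetime import date

# PDUFA II response-time goals, in calendar days, as described above;
# the dictionary keys are invented shorthand for this sketch.
GOAL_DAYS = {
    "meeting_request_notification": 14,
    "major_milestone_meeting": 60,
    "other_meeting": 75,
    "meeting_minutes": 30,
    "special_protocol_evaluation": 45,
    "clinical_hold_response": 30,
    "appeal_response": 30,
}

def met_goal(activity, triggered, completed):
    """True if the activity was completed within its goal window."""
    return (completed - triggered).days <= GOAL_DAYS[activity]

print(met_goal("meeting_minutes", date(2001, 3, 1), date(2001, 3, 28)))  # True
```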
According to FDA reviewers, the typical meeting between FDA and a sponsor during clinical testing involves 17 reviewers from six disciplines that are typically involved in reviews of new drug and biologic applications—medical officer, chemist, microbiologist, clinical pharmacologist, statistician, and pharmacologist/toxicologist. FDA reviewers estimate that the time requirements for a comprehensive meeting involving all FDA review disciplines assigned to an application can range from about 125 to 545 hours per meeting. For example, reviewers estimated that the total FDA staff time spent reviewing the briefing document submitted by the sponsor as well as reviewing other pertinent documents and consulting with other review team members and consultants ranges from 50 to 290 hours. Reviewers estimated that from about 25 to 90 FDA staff hours are spent interacting with the sponsor in final preparation for the meeting, including requesting additional information from the sponsor and reviewing information submitted, developing the meeting agenda, preparing presentations, and attending the actual meeting with the sponsor, which generally lasts 90 minutes to 2 hours. FDA’s workload was further affected by an increase in the number of applications that did not require payment of user fees, due to PDUFA II’s new exemptions and waiver provisions. Under PDUFA II, FDA could exempt or waive fees for (1) drug sponsors that were small businesses submitting their first applications, (2) drug sponsors submitting supplements for drugs used to treat pediatric illnesses, and (3) drug sponsors submitting applications or supplements for drugs used to treat rare diseases (called orphan drugs). FDA officials told us that the percentage of applications where user fees were exempted or waived was significant, ranging from a low of 19 percent in fiscal year 1999 to a high of 32 percent in fiscal year 2001. The KPMG report on FDA’s drug review costs found that the new performance goals established for PDUFA II have also had a significant impact on reviewer workload. According to the report, the majority of reviewers interviewed reported that the new performance goals for meetings with drug sponsors were burdensome. They said that competing priorities made it difficult to complete all tasks, such as accommodating meeting requests, participating in advisory committee meetings, and answering sponsor questions. Our analysis of FDA’s attrition rates for drug reviewers during the 3-year period following the enactment of PDUFA II found that they were higher than the rates for comparable occupations at other public health agencies and in the federal government as a whole. FDA officials told us that the agency continues to experience high turnover for reviewers because of the high demand for regulatory review personnel in the pharmaceutical industry and the higher salaries that experienced FDA reviewers can obtain in the private sector. Attrition of FDA reviewers has been an ongoing concern for the pharmaceutical and biotechnology industries as well. An independent survey of pharmaceutical and biotechnology companies found a high level of concern about FDA’s turnover in review staff and an increase in concern over a 4-year period. 
We compared FDA’s attrition rate for the six medical and scientific disciplines that constitute the majority of the agency’s drug review staff with the attrition rates for these disciplines at the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) (see table 3). Like FDA, CDC and NIH are public health agencies that employ a highly educated, highly skilled workforce. As the table shows, with the exception of chemists, FDA’s attrition rates for employees in its drug review process are higher than the comparable attrition rates for CDC, NIH, and similar disciplines governmentwide. FDA officials reported that to retain experienced staff with certain skills, they have increased the pay for approximately 250 CDER and CBER reviewers. Specifically, FDA conducted studies of staff turnover and found that toxicologists, pharmacologists, pharmacokinetists, and mathematical statisticians were leaving FDA to work in private industry and academia for higher salaries. Under OPM regulations, FDA is authorized to pay retention allowance of up to 10 percent of an employee’s basic pay to a group or category of employees in such circumstances. Employees with at least 2 years of drug review experience in these 4 occupations were eligible for retention allowances. In addition, 5 medical officers and 1 microbiologist were among review staff that received retention allowances. FDA is also considering offering retention allowances to all of its medical officers. We found that FDA reviewers, particularly those in CBER, did not participate in training and professional development activities to the extent recommended by the agency in fiscal years 2000 and 2001. FDA officials told us that reviewers are forgoing training and professional development activities to ensure that the agency meets PDUFA goals. FDA defines training and professional development activities as time spent attending related training and conferences, whether as a presenter or an attendee; learning the review process for drug applications and labeling under a mentor; preparing educational material, publications, and manuscripts or classroom or seminar-type instruction; and mentoring a new reviewer. FDA reviewers are encouraged to spend about 10 percent of their time in training, professional development, and mentoring activities. According to FDA, other science-based agencies, such as NIH, expect scientists to spend about 20 percent of their time on training and professional development. Using KPMG’s estimate that each full-time FDA reviewer worked 200 days per year, FDA’s 10 percent recommended level of training means that each reviewer would be encouraged to spend 20 days per year in training and professional development activities. Our analysis of FDA data found that reviewers in CDER spent, on average, about 19 days in training and professional development activities in fiscal years 2000 and 2001. However, we found that reviewers in CBER spent, on average, about 12 days in training and professional development activities in fiscal years 2000 and 2001. FDA spending for PDUFA-related training and other professional development activities has fluctuated greatly over the past 3 years. Expenditures for PDUFA-related training and other professional development activities in CDER rose from $285,000 in fiscal year 1998 to $796,000 in fiscal year 1999, then dropped to $564,000 in fiscal year 2000. 
CBER’s expenditures increased from $198,882 in fiscal year 1998 to $206,655 in fiscal year 1999, then dropped to $147,914 in fiscal year 2000, a 26 percent decline from the 1998 level. FDA reviewers, as well as representatives from pharmaceutical and biotechnology companies, are concerned about reviewers’ lack of time for training and professional development. The KPMG report found that reviewers perceived insufficient training to be a major problem. The reviewers interviewed reported that while they wanted to ensure that they were at the cutting edge of medical technology and were able to effectively use workplace tools such as information systems, they believed they had insufficient time to complete training. In addition, an independent survey of pharmaceutical and biotechnology companies found a high level of concern in the industry related to a perceived lack of technical expertise among FDA reviewers. According to the survey, 27 percent of the respondents indicated that a lack of reviewer expertise impeded the approval process, up from 19 percent in the 1997 survey and 17 percent in 1995.

Some consumer and patient groups have raised concerns that drug withdrawal rates have increased under PDUFA. Our analysis of FDA data found that the percentage of recently approved drugs that have been withdrawn from the market has risen, but that the size of the increase in drug withdrawal rates differs depending on the period examined. Moreover, several factors may affect drug withdrawals. Some drugs were removed from the market because doctors and patients did not use them correctly, while others produced rare side effects that were not detected in clinical trials. The availability of new, safer treatments also led to some withdrawals. For drugs approved under PDUFA III, FDA may use user fees to support its drug safety efforts.

Our analysis of FDA data found that a higher percentage of drugs has been withdrawn from the market for safety-related reasons since PDUFA’s enactment than prior to the law’s enactment. Some consumer and patient groups have expressed concern that PDUFA’s emphasis on faster review times has increased the rate of withdrawals and compromised drug safety by placing FDA reviewers under pressure to approve drugs rapidly to meet performance goals. We identified each drug that was withdrawn from the market from 1985 through 2000 and grouped the withdrawals based on the year in which the drug was approved. We then calculated the drug withdrawal rate—the number of withdrawn drugs as a percentage of those approved each year. We calculated drug withdrawal rates in 4-year intervals over 16 years. As shown in figure 7, the withdrawal rate declined from 1.96 percent for 1989 through 1992 (the 4 years preceding PDUFA) to 1.56 percent for 1993 through 1996 (under PDUFA I), then rose to 5.34 percent for 1997 through 2000 (under PDUFA II). However, the small number of withdrawals in any given year may affect the variation in the withdrawal rate.

We also calculated the withdrawal rate with reference to whether a drug was approved in the 8-year period before or the 8-year period after PDUFA was enacted. Grouping the withdrawals in these two periods showed that the withdrawal rate increased slightly after PDUFA (see fig. 8). During the period 1985 through 1992 (pre-PDUFA), FDA approved 193 NMEs. Six of these, or 3.10 percent, were withdrawn for safety-related reasons.
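The withdrawal rate used in these comparisons is a simple ratio. The worked equations below restate the figures cited in this paragraph and the next; they introduce no data beyond those reported in this section.

```latex
% Withdrawal rate: withdrawn drugs as a percentage of drugs approved
% in the same period.
\[
  \text{withdrawal rate} = \frac{\text{drugs withdrawn}}{\text{drugs approved}} \times 100
\]
% Applied to the two 8-year periods compared in the text:
\[
  \text{pre-PDUFA (1985--1992): } \frac{6}{193} \times 100 \approx 3.10\%
  \qquad
  \text{post-PDUFA (1993--2000): } \frac{9}{259} \times 100 \approx 3.47\%
\]
```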
During the period 1993 through 2000 (post-PDUFA), FDA approved 259 NMEs, and 9 of these, or 3.47 percent, were withdrawn for safety-related reasons.

Several factors may affect drug withdrawals. According to FDA officials, premarketing clinical trials in a few thousand patients (typically with relatively uncomplicated health conditions) do not detect all of a drug’s adverse effects, especially relatively rare ones. In addition, they stated that the rise in the number of newly approved drugs entering the market and the higher consumption of medicines by the population increase the probability of misprescribing, adverse effects, and subsequent drug withdrawals. According to FDA officials, safety problems not detected in clinical trials are more likely to be found first among U.S. patients because they are increasingly the first to have access to new drugs. According to one study, the United States was the first market for 49 percent of new drugs approved in the United States from 1996 through 1998.

An examination of drug withdrawals, by itself, may not provide a complete picture of drug safety. First, a drug withdrawal does not reflect a judgment concerning the absolute safety of a drug but rather a judgment about the risks and rewards of a drug in the context of alternative treatments. For instance, despite the documented deaths from liver failure among patients taking Rezulin, the drug was not withdrawn from the market until FDA approved new, safer medications with similar benefits. In contrast, Raxar was withdrawn from the market on the basis of relatively few adverse event reports because alternative treatments were readily available. Second, drug withdrawals may occur because health professionals and patients use the drugs incorrectly, not because the drugs are inherently dangerous when used as approved. For example, the health risks associated with Seldane occurred when the drug was taken in combination with medications that were contraindicated on Seldane’s label. Third, the off-label use of drugs also can be problematic because such use may not have been shown to be safe and effective. For example, while Pondimin (fenfluramine) was approved for short-term use as an appetite suppressant, it was increasingly prescribed and used in combination with the appetite suppressant phentermine as part of a long-term weight loss and management program. The off-label use of this combination, known as “fen-phen,” posed serious health risks. (See app. I for a list of drugs withdrawn from the U.S. market for safety-related reasons from 1992 through 2001.)

PDUFA III authorizes FDA to use user fees for additional drug safety activities that could not be funded with PDUFA I and II user fees. FDA informed the Congress in its performance goal letter for PDUFA III that it will develop guidance documents to assist the industry in addressing good risk assessment, risk management, and postmarketing surveillance practices. As part of joint recommendations to the Congress for the reauthorization of PDUFA, PhRMA and BIO agreed with FDA that the agency should use user fees to fund a new risk management system for newly approved drugs. Under the voluntary program, drug sponsors may develop, and FDA will review, risk management plans for products while the agency reviews the sponsor’s NDA or BLA.
By adding FDA’s postmarket safety team to the drug review process before a new drug or biologic is approved, FDA officials believe that they will obtain better information on the risks associated with the product much earlier in the process and that the sponsor will gain helpful feedback on how best to monitor, assess, and control the product’s risks. Funding from user fees will be used to implement risk management plans for the first 2 years after a product is approved. For products that require risk management beyond standard labeling, FDA may use user fees for postmarket surveillance activities for 3 years. FDA officials believe that more rigorous safety monitoring of newly approved drugs during the first few years after they are on the market could help to detect unanticipated adverse effects earlier. Historically, the vast majority of adverse effects have been identified in the first 2 to 3 years after a new drug is marketed. FDA anticipates that user fees for risk management will total approximately $71 million over 5 years and will permit the agency to add 100 new employees to monitor drug safety and track adverse effects from drugs already on the market (see table 4).

The implementation of PDUFA has been successful in bringing new drugs and biologics to the U.S. market more rapidly than before. However, maintaining adequate funding for approving new drugs and biologics has had the unintended effect of reducing the share of funding and staffing for other activities. Fewer resources for non-PDUFA programs may affect FDA’s ability to ensure that the other products the agency regulates, such as food and medical devices, comply with FDA safety standards. In addition, PDUFA has increased reviewer workloads and may be a factor in relatively high attrition rates among FDA’s review staff. Rapid FDA approval of new drugs means that the United States has become the first nation to approve many new medicines. Because drugs and biologics are not risk-free, adverse events are to be expected once the products are in the marketplace. As more new drugs and biologics are brought to market, increased attention to postmarket risk management will be even more important. The recent increase in the rate of drug withdrawals also suggests the need for FDA to strengthen its postmarket surveillance activities. Under PDUFA III, FDA will now be able to use user fees for additional drug safety activities, something that was not permitted under PDUFA I and II. By having more resources to review risk management plans developed by drug sponsors and to conduct postmarket surveillance, FDA will be able to obtain better information on the risks associated with newly marketed drugs more quickly.

We provided FDA with a draft of this report for comment, and FDA provided technical comments. In its technical comments, FDA disagreed with our analyses and discussion related to drug withdrawal rates. Specifically, FDA officials said that our analysis of drug withdrawal data comparing the 8-year period pre-PDUFA with the first 8 years after PDUFA does not show any real increase, and that our analysis using the 4-year groupings was significantly affected by the small number of withdrawals during each period. While we agree that the small number of withdrawals in any given year may affect the variation in the withdrawal rate, we believe our analyses are appropriate, and both the 8-year and 4-year analyses show an increase in withdrawal rates since PDUFA’s implementation. We incorporated additional technical comments where appropriate.
(FDA’s comments are included in app. II.) As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days after its issue date. At that time, we will send copies to the Secretary of HHS, the Deputy Commissioner of FDA, the Director of the Office of Management and Budget, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Major contributors to this report were John Hansen, Gloria Taylor, Claude Hayeck, and Roseanne Price. If you or your staff have any questions about this report or would like additional information, please call me at (202) 512-7119 or John Hansen at (202) 512-7105.

Appendix I lists the drugs withdrawn from the U.S. market for safety-related reasons from 1992 through 2001: Omniflox (temafloxacin hydrochloride), Manoplax (flosequinan), Pondimin (fenfluramine hydrochloride), Redux (dexfenfluramine hydrochloride), Seldane (terfenadine), Posicor (mibefradil dihydrochloride), Duract (bromfenac sodium), Hismanal (astemizole), Raxar (grepafloxacin hydrochloride), Rezulin (troglitazone), Propulsid (cisapride), Lotronex (alosetron hydrochloride), Raplon (rapacuronium bromide), and Baycol (cerivastatin sodium).
Ten years ago, Congress passed the Prescription Drug User Fee Act to speed up the review process used to ensure that new drugs and biological products are safe and effective. GAO found that the act has provided the Food and Drug Administration (FDA) with the funding needed to hire more drug reviewers, which has led to faster availability of new drugs in the United States. Approval times have shortened both for priority drugs—those that FDA expects to offer significant therapeutic benefits beyond drugs already on the market—and standard drugs, which are not thought to have significant therapeutic benefits beyond available drugs. Although the act has increased the funds available for FDA's drug and biological reviews, funds for other activities, such as the regulation of foods and medical devices, have shrunk as a share of FDA's overall budget. The 1997 amendments to the act, which shortened review schedules and set new performance goals to reduce overall drug development time, have increased reviewer workload at FDA. GAO found that some drug reviewers may have forgone training and professional development opportunities to ensure that the new goals were met. FDA officials said that the agency continues to experience high turnover rates among these employees. GAO found that a higher percentage of drugs has been withdrawn from the market for safety reasons since the act was enacted but that the size of the increase in withdrawal rates differs depending on the period examined. The higher rate of drug withdrawals suggests that FDA needs to strengthen its postmarket surveillance efforts. FDA plans to spend $71 million in user fees during the next 5 years to improve the monitoring of new drugs on the market and to track any harmful effects of these products.
The vermiculite ore mined at Libby, Montana, between 1923 and the early 1990s contained high concentrations of naturally occurring asbestos minerals, including tremolite, winchite, richterite, and others (see fig. 1). As the ore was mined and processed, dust containing asbestos fibers was released into the air, which workers then inhaled.

By the early 1900s, asbestos was recognized as a cause of occupational disease. Initially, the disease associated with asbestos was asbestosis, a nonmalignant respiratory disease characterized by scarring of the lung tissue that may progress to significant impairment and death. During the 1930s and 1940s, the connection between asbestos exposure and lung cancer emerged. By 1960, the connection between asbestos and mesothelioma—a cancer of the mesothelial lining of the lungs—was established. Diseases stemming from exposure to asbestos may not be apparent for decades after the initial exposure. Thus, even though the Libby mine closed around 1990, many residents, former workers, and others who were exposed to the asbestos-contaminated ore have recently been diagnosed with asbestos-related diseases, and many more may become ill in the future.

EPA’s involvement with Libby’s asbestos-contaminated vermiculite ore dates back to the late 1970s and continued intermittently until 1999, when the agency initiated an investigation that led to ongoing cleanup activities in the Libby area. In 1978, EPA learned that workers at a vermiculite processing plant in Marysville, Ohio—one of hundreds of sites across the United States where Libby vermiculite ore was sent—were exhibiting symptoms of asbestos-related diseases. Between 1980 and 1982, EPA issued a series of reports related to asbestos-contaminated vermiculite. Most of these reports indicated that there was a lack of data on both exposure to asbestos-contaminated vermiculite and its adverse health effects. Further, the reports identified problems in the sampling, analysis, and reproducibility of data regarding low levels of asbestos in vermiculite, which made it difficult to acquire data on exposure and health effects. One of the studies also noted that EPA needed to develop more information identifying, among other things, the vermiculite-mine sites, the processors of vermiculite, and the potential number of employees exposed to asbestos-contaminated vermiculite. In a February 1985 report, EPA estimated the levels and ranges of exposure to asbestos-contaminated vermiculite for workers and the general public and indicated that, with further study, this information could be used for regulatory decision making. This report contained a list of the locations of 52 exfoliation plants in the United States that had received vermiculite ore from the Libby mine. Even so, EPA did not initiate any action at the time and, until 1999, did little to address concerns about the health risks associated with exposure to asbestos-contaminated vermiculite ore.

In 1999—after a series of newspaper articles reporting that miners and their families in the area of Libby, Montana, had died or were ill from exposure to the asbestos-contaminated vermiculite ore—EPA began investigating the contamination in the Libby area and began cleaning up the contamination in 2000. Subsequently, concerns were raised about why EPA had not taken action much earlier in Libby, which resulted in investigations by both the EPA Office of Inspector General and GAO.
The subsequent reports concluded that, due to various challenges, EPA missed past opportunities to take steps that might have protected the citizens of Libby. These challenges included (1) fragmented regulatory authority and jurisdiction with other federal agencies and within EPA, along with ineffective communication, which made it difficult for EPA to take action; (2) limitations of science, technology, and health-effects data that made it difficult for EPA to determine the degree of health risk at Libby; and (3) funding constraints and competing priorities, which led EPA to de-emphasize dealing with asbestos-contaminated vermiculite. Since these reports were issued, as part of an ongoing criminal case against W.R. Grace, the government has alleged that Grace engaged in a conspiracy to defraud EPA and the National Institute for Occupational Safety and Health by concealing and misrepresenting the nature of the asbestos-containing vermiculite produced at the mine. Grace has denied the allegations.

When EPA began cleaning up contamination in the Libby area in 2000, it also took steps to identify sites that may have received shipments of Libby ore and to evaluate them for asbestos contamination in accordance with CERCLA. Under the NCP regulations that implement CERCLA, a removal site evaluation involves, among other things, identifying the source and nature of any hazardous-substance release, analyzing the magnitude of the potential threat to human health and the environment, and evaluating the factors necessary to determine whether a removal is necessary.

According to NCP regulations, when EPA is the lead agency for a cleanup, an EPA region must take certain actions, as appropriate, to notify the public about a removal action. These actions include (1) designating a spokesperson to notify immediately affected citizens and state and local officials about the cleanup; (2) creating a record documenting the basis for the cleanup action and making the record publicly available; (3) publishing a notice in a major local newspaper that the record is available for review; and (4) providing an opportunity for the public to comment on the record. When EPA expects the cleanup action to last more than 120 days, the regional office must also conduct interviews with interested or affected parties, prepare a formal community response plan, and establish at least one local information repository at or near the cleanup location, such as at a public library.

EPA has also issued numerous policy directives and guidance documents over the years establishing additional public notification procedures that EPA regions should follow. For example, EPA guidance issued in July 1992 directed regions to interact closely with and reach out to communities. This guidance specifies that one of the goals of public participation is to inform the public about the risks associated with a site and any cleanup actions. The guidance also states that it is imperative for EPA to give the public prompt, accurate information about the nature of threats to public health and the environment and the removal action necessary to mitigate the threats. In its April 2002 guidance, EPA stated that just complying with NCP provisions is often insufficient for informing the media, the public, and interested stakeholders.
This guidance strongly suggested that the regions use other options for meeting community needs, such as scheduling press briefings; establishing a local or toll-free telephone hotline; and canvassing neighborhoods to identify residents’ needs, fears, and concerns.

ATSDR has provided information to EPA to help assess the risks posed by potential asbestos contamination at selected sites that received Libby ore. Specifically, in 2002, ATSDR launched the first phase of its National Asbestos Exposure Review. Under this phase of the project, ATSDR evaluated human health effects that may be associated with past or current exposure to asbestos at 28 of the sites that had received and processed the vermiculite ore mined in Libby, Montana. These sites were selected because they received a high volume of Libby ore (greater than 100,000 tons) or because EPA identified them as needing further investigation. These 28 sites together received about 80 percent of the vermiculite ore shipped from the Libby mine between 1964 and 1980.

EPA, with assistance from other federal and state agencies, has assessed 271 sites that were thought to have received asbestos-contaminated ore from Libby, Montana, to determine whether the sites are contaminated with asbestos and whether they need cleanup. As a result of these investigations, 19 sites were identified as requiring cleanup.

As a part of ATSDR’s effort to evaluate the public-health risks posed by past and current exposures to asbestos contamination in the Libby area and at some of the sites that received the Libby ore, ATSDR has noted there is an absence of key information on the toxicity of the asbestos found in the Libby ore. ATSDR also noted that the methods EPA used to sample and analyze the air and soil at most of the 28 sites it reviewed have since been improved and now better quantify asbestos levels. After the EPA Office of Inspector General recommended in December 2006 that EPA perform a toxicity assessment to determine safe levels of exposure for humans, EPA agreed to do so.

EPA has taken a number of actions to identify and evaluate sites that may have received Libby ore and, when needed, has conducted removal actions. In early 2000, EPA began compiling a list of facilities that might have received asbestos-contaminated vermiculite ore from the Libby mine. To compile the list, it used shipping records and other information obtained from W.R. Grace as well as historical information about vermiculite processing facilities from the Bureau of Mines and the U.S. Geological Survey. Initially, EPA identified over 500 sites, but after coordinating with the U.S. Geological Survey to update and revise the list of facilities and eliminate duplicate entries, EPA narrowed the list to fewer than 300 potential sites.

The data that EPA collected on the sites believed to have received Libby ore paint a picture of the distribution of Libby ore across the United States. Figure 2 illustrates the nationwide distribution based on the 195 sites for which data on the amount of ore shipped were available. These 195 sites are believed to have received a combined total of at least 6 million tons of ore from the Libby, Montana, mine and ore processing operations. The 271 sites were located in 39 states, the District of Columbia, and Puerto Rico. California (28) and Texas (26) had the most sites. EPA has continued to identify sites and will investigate them as it deems necessary. For example, in 2006, EPA identified additional sites (included in the 271) that it needed to assess for asbestos contamination.
According to the data that EPA collected, most (95 percent) of the vermiculite ore known to have been shipped from Libby between 1964 and 1990 went to facilities that converted it into commercial vermiculite through a process called “exfoliation” (expansion). Exfoliation plants heated the vermiculite ore to approximately 2,000 degrees Fahrenheit, which caused the ore to expand, or pop. This expanded vermiculite was then used in a variety of products, including loose-fill insulation in homes (see figs. 3 and 4 for photos of expanded vermiculite ore and vermiculite insulation). Because significant concentrations of asbestos fibers were likely released during the exfoliation process, exfoliation plants were deemed the most likely of the facilities that received Libby ore to have caused environmental contamination and exposure.

In performing their preliminary assessments of sites, EPA regions generally tried to determine the facilities’ locations using a variety of methods, including title searches; reviews of town records; and interviews with people who might provide useful information, such as company representatives or people who formerly worked at the sites. Once the regions identified an accurate address for a site, they performed a “windshield survey” to determine current site conditions and gather additional information on past operations at the site. These surveys generally included viewing the suspected location and its surrounding area and, in some instances, interviewing business owners and residents in the immediate vicinity. If these initial surveys indicated the need for further examination, the regions typically conducted a detailed investigation of the site. This investigation typically consisted of a site visit, which included a more thorough visual inspection of the property and surrounding area; additional interviews with people who might be knowledgeable about past operations, such as facility representatives; reviews of any relevant and available documentation from state and federal agencies; and, if deemed necessary, collection of soil and air samples.

As indicated in table 1, EPA conducted site visits to at least 241 of the sites. At least 19 sites were not visited because either initial efforts to determine site locations were unsuccessful or information gathered while pre-screening the sites indicated that a site visit was not necessary. For example, for a site located in Stanton, North Dakota, company officials indicated in a letter that the company had purchased a relatively small amount of Libby ore in the early 1980s and had since obtained vermiculite ore from a mine in Virginia. The company officials provided EPA Region 8 with a lab analysis of the ore from the Virginia mine, which indicated no asbestos was present in the ore. As a result, EPA Region 8 concluded a site visit was not necessary.

For the sites where the regions decided sampling was warranted, samples of “bulk” materials—such as raw vermiculite ore, suspected waste vermiculite piles, and soils—were collected. Air samples were collected if there was concern that disturbing contaminated materials (in the soil or elsewhere) could result in asbestos fibers migrating into the air and being inhaled. Based on information obtained during the site visits, bulk and, in some cases, air samples were collected for at least 80 (30 percent) of the sites (as shown in table 1).
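Taken together, the screening sequence described above (pre-screening, a windshield survey, a detailed site visit, and sampling where warranted) amounts to a simple decision flow. The sketch below is illustrative only: the type, function, and field names are hypothetical, and EPA's actual screening was a manual process, not an automated rule.

```python
# Illustrative sketch of the site-screening progression described above.
# All names are hypothetical; this is a reading aid, not EPA tooling.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    address_found: bool             # could an accurate location be determined?
    prescreen_cleared: bool         # did pre-screening show a visit was unneeded?
    survey_warrants_followup: bool  # did the windshield survey raise concerns?
    visit_warrants_sampling: bool   # did the detailed visit justify sampling?

def assessment_steps(site: Site) -> list:
    """Return the sequence of assessment steps a site would pass through."""
    steps = ["pre-screening (records review, correspondence, interviews)"]
    if not site.address_found or site.prescreen_cleared:
        # e.g., the Stanton, North Dakota, site: a company letter and a lab
        # analysis of its current (Virginia) ore made a visit unnecessary.
        return steps + ["no site visit"]
    steps.append("windshield survey of current site conditions")
    if not site.survey_warrants_followup:
        return steps + ["no further examination"]
    steps.append("detailed site visit (inspection, interviews, records)")
    if site.visit_warrants_sampling:
        steps.append("collect bulk (ore, waste piles, soil) and, if needed, air samples")
    return steps

# Consistent with the winnowing reported here: of 271 sites, at least 241
# received site visits and at least 80 (30 percent) were sampled.
print(assessment_steps(Site("example", True, False, True, True)))
```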
One of the most important factors EPA regional offices considered in determining whether a site needed to be cleaned up was the amount, if any, of asbestos present at the site. In general, a cleanup would be performed if sampling results indicated that asbestos was present in amounts greater than 1 percent (based on the percentage of the area of a microscopic field) in soils or debris or greater than 0.1 asbestos fibers per cubic centimeter of air. According to EPA, the “1 percent threshold” for asbestos in soils or debris is not a health-based standard but is rather related to the limit of detection for the analytical methods available during the early years of EPA’s asbestos program (the early 1970s) and to EPA’s desire to concentrate resources on materials containing higher percentages of asbestos. EPA has never determined that materials containing less than 1 percent asbestos are safe, and scientists have not been able to establish a safe level of exposure to airborne asbestos.

Of the sites sampled, 22 had levels of asbestos that exceeded the thresholds, 29 had detectable levels of asbestos that were below the thresholds (trace amounts), and 26 had no detectable levels of asbestos. After reviewing the sampling results and other pertinent information collected about the sites, EPA—and in some instances states—identified 19 sites where contamination from the asbestos in Libby ore needed to be cleaned up. Figure 5 includes a map showing the location of the 19 sites identified for cleanup. With one exception, all of the sites that needed to be cleaned up had levels of asbestos in soils that exceeded the 1 percent threshold. At the one exception, a site located in Salt Lake City, all of the soil samples contained trace amounts of asbestos (less than 1 percent). However, after raking the ground and using a leaf blower, EPA collected air samples that showed elevated levels of asbestos fibers exceeding the threshold of 0.1 asbestos fibers per cubic centimeter of air. As a result, EPA determined this site needed to be cleaned up as well.

In conjunction with EPA’s efforts to evaluate sites that received Libby ore, ATSDR is conducting a project—the National Asbestos Exposure Review—to investigate selected sites that received and processed ore from the Libby mine. These investigations—referred to as health consultations—involve evaluating information about toxic material at a site, determining whether people might be exposed to it, and reporting what harm exposure might cause. Health consultations may be performed by ATSDR staff or by state health department officials working under a cooperative agreement with ATSDR. The consultations may consider what levels (or concentrations) of hazardous substances are present; whether people might be exposed to contamination and how (through “exposure pathways” such as breathing air, drinking or coming into contact with water, eating or coming into contact with soil, or eating food); what harm the substances might cause people (the contaminants’ “toxicity”); whether working or living nearby might affect people’s health; and other dangers to people, such as unsafe buildings or other physical hazards. Every health consultation includes ATSDR’s conclusions about public-health hazards and recommendations for actions to protect public health. These can include recommended follow-up activities for EPA, state environmental and health agencies, and ATSDR.
For example, the recommendations could be related to (1) cleaning up sites; (2) keeping people away from contamination and physical dangers—for example, by placing a fence around a site; (3) giving residents safe drinking water; (4) relocating exposed people; (5) providing health education for residents and health-care providers to inform them about site contaminants and harmful health effects; and (6) performing additional health studies.

ATSDR is conducting the National Asbestos Exposure Review in two phases. In Phase 1, it is conducting health investigations of 28 sites. These 28 sites together received about 80 percent of the vermiculite ore believed to have been shipped from the Libby mine between 1964 and 1980 (see fig. 6). As of June 2007, ATSDR had completed investigations at all 28 sites. For each site, ATSDR has issued a health-consultation report and a fact sheet summarizing the results of the site evaluation. Phase 1 will conclude with the completion of a report summarizing all 28 site investigations, which will likely be released in late 2007 or early 2008. In Phase 2 of the National Asbestos Exposure Review, ATSDR will build on work from Phase 1 to determine the need for public-health activities at additional sites that received Libby ore. ATSDR’s role during Phase 2 will vary from providing technical support or advice to other agencies to possibly conducting additional public-health activities.

In selecting the 28 Phase 1 sites, ATSDR chose sites that would be more likely to pose public-health risks because the sites (1) had been designated by EPA as requiring further action based on current contamination or (2) were exfoliation facilities that had processed more than 100,000 tons of vermiculite ore from the Libby mine. ATSDR’s general conclusions about past and current exposures to asbestos from the contaminated Libby ore at the 28 sites included the following:

Former employees at the facilities that processed the asbestos-contaminated vermiculite ore were most at risk for exposure. Those who lived in the employees’ homes may have also been exposed because asbestos fibers could have been carried home on the employees’ clothing, skin, and hair.

People could have been exposed to asbestos if they handled or played in waste rock, a by-product of vermiculite exfoliation. At some of the vermiculite plants, workers or people in the community may have brought the waste rock from the plants to their homes to use in gardens and as fill or driveway surfacing material. If this waste rock is uncovered and stirred up, asbestos fibers may be released into the air. Determining the extent to which former and current residents were or could currently be exposed to waste rock on their properties was not possible at most sites, given a lack of knowledge about whether or to what extent past community members may have taken waste material home.

People living around the plants could have been exposed to asbestos fibers in the air when vermiculite was being processed at the sites. Determining whether former residents were exposed to hazardous levels of asbestos was not possible at most of the sites, given a general lack of data on past emissions from the facilities. Since the plants no longer process Libby ore, current residents living around the sites are no longer being exposed through air emissions from processing activities at the plants.
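For reference, the cleanup criteria described earlier (more than 1 percent asbestos in soil or debris, or more than 0.1 asbestos fibers per cubic centimeter of air) can be restated compactly. The sketch below is a hypothetical encoding of that general rule, including the Salt Lake City pattern in which activity-based air sampling drove the cleanup decision despite trace soil results; it is not an EPA formula, and, as this report notes, the 1 percent figure is detection-driven rather than health-based.

```python
# Hypothetical restatement of the general cleanup decision rule described
# in this report; not an EPA formula. The 1 percent soil threshold reflects
# 1970s-era detection limits, not a health-based standard.
SOIL_THRESHOLD_PCT = 1.0   # percent asbestos in soil or debris
AIR_THRESHOLD_FCC = 0.1    # asbestos fibers per cubic centimeter of air

def needs_cleanup(max_soil_pct: float, max_air_fcc: float = 0.0) -> bool:
    """Apply the general rule EPA regions used for sampled sites.

    max_air_fcc can reflect activity-based sampling (air sampled while the
    ground is disturbed), which is how the Salt Lake City site exceeded the
    air threshold even though all soil samples showed only trace amounts.
    """
    return max_soil_pct > SOIL_THRESHOLD_PCT or max_air_fcc > AIR_THRESHOLD_FCC

# Salt Lake City-style case: trace soil, elevated air after disturbance.
assert needs_cleanup(max_soil_pct=0.5, max_air_fcc=0.15)
```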
As a part of its ongoing work to assess public-health risks at the Phase 1 sites, ATSDR has also reported significant gaps in the scientific data used to evaluate health risks associated with exposure to the type of asbestos fibers found in Libby ore. ATSDR has pointed out that evaluating health effects requires extensive knowledge of both the ways in which people were exposed and the level of asbestos that is harmful to humans (i.e., the toxicity of the asbestos). According to ATSDR, the public health implications of exposures to these fibers are difficult to determine in part because the toxicological information currently available for the asbestos fibers found in the Libby ore is very limited. Also, in a May 2003 Public Health Assessment of the Libby site, ATSDR recommended that “more research is needed, specifically: toxicological investigation of the risks associated with low-level exposure to asbestos, especially Libby asbestos; clinical research on treatment for mesothelioma and asbestosis; and epidemiology studies to better characterize the link between exposure to asbestos and disease.”

ATSDR has also noted that the 1 percent threshold used in determining when sites need to be cleaned up is not health based. Furthermore, the agency cited EPA studies showing that disturbing soils containing less than 1 percent asbestos can suspend fibers in the air at levels that cause a health concern. Therefore, ATSDR concluded that it is unclear whether sites with asbestos levels of less than 1 percent that were not cleaned up are safe. In addition, ATSDR stated that the sampling and analysis methods used by EPA at some of the sites were limited in their ability to detect and measure asbestos fibers. In fact, recent health-consultation reports for two sites in Portland, Oregon, issued by the Oregon Department of Human Services in consultation with ATSDR, pointed out that sampling and analysis methods have been improved since samples were taken at those sites in 2000 and that the new methods are better able to quantify levels of asbestos. As a result, the health-consultation reports for those sites recommended, among other things, that EPA conduct additional sampling at these sites to ensure people are not being exposed to residual fibers. After conducting additional sampling at one of these sites, EPA determined the site required further cleanup.

An EPA memorandum addressing the 1 percent threshold explained that the threshold’s original intent was “to ban the use of materials which contain significant quantities of asbestos, but to allow the use of materials which would: (1) contain trace amounts of asbestos which occur in numerous natural substances, and (2) include very small quantities of asbestos (less than 1 percent) added to enhance the material’s effectiveness.” This memo acknowledged that the widespread use of the 1 percent threshold may have caused EPA managers at cleanup sites to assume that levels below that threshold did not pose an unreasonable risk to human health. The memo stated that it is important to note the 1 percent threshold was related to (1) the limit of detection for the analytical methods available in the early 1970s and (2) EPA’s decision to focus its resources on materials containing higher percentages of asbestos. The memo further noted that the threshold may not be protective of human health in all instances. It stressed that regions should not assume soil or debris containing less than 1 percent asbestos does not pose an unreasonable risk to human health and should instead develop risk-based, site-specific action levels to determine whether response actions should be undertaken.
However, the memo clearly stated that this information did not constitute a regulation, nor did it impose legally binding requirements on EPA.

In November 2005, EPA issued its Asbestos Project Plan. The plan provided a framework for a coordinated agency-wide approach to identify, evaluate, and reduce the risk to human health from asbestos exposure. Among other things, the plan focused on improving the state of the science for asbestos through a number of steps, including activities to improve EPA’s (1) understanding of asbestos toxicology, (2) understanding of asbestos-related exposures, and (3) ability to perform meaningful environmental sample collection and analysis. When asked about the status of these activities and the funding provided to accomplish the Asbestos Project Plan, EPA responded that the plan was developed only to provide an overview of various ongoing and planned agency-wide activities to address risks from asbestos and that it was never intended as an ongoing strategy with timelines for deliverables and budget-tracking features. Nevertheless, according to EPA, by pursuing activities outlined in the plan the agency has made progress in improving the state of the science for asbestos. Among other things, it has undertaken work to (1) develop a methodology for estimating the risk of lung cancer and mesothelioma from inhalation exposure to different forms of asbestos; (2) update the asbestos health-effects information contained in EPA’s Integrated Risk Information System (IRIS); (3) develop methods for identifying the presence of asbestos in vermiculite attic insulation; and (4) test an alternative method for removing asbestos from buildings.

In December 2006, EPA’s Office of Inspector General reported that EPA had not completed a toxicity assessment of the type of asbestos found in the Libby ore and that this information was necessary to determine the safe level of exposure for humans. Furthermore, the Office of Inspector General reported that without such information EPA cannot be sure that the cleanup actions taking place in Libby sufficiently reduce the risk that people may become ill from asbestos exposure or, if already ill, get worse. When asked by the Office of Inspector General’s staff why a toxicity assessment had not been performed, officials from EPA’s Office of Solid Waste and Emergency Response (OSWER) replied that an assessment was proposed but was not performed because it was not funded and because OSWER believed the information could be obtained through completed and ongoing epidemiological studies. According to the report, however, OSWER program staff, as distinguished from OSWER senior officials, said the epidemiological studies that were ongoing and planned would not be sufficient to determine the toxicity of the asbestos in the Libby ore. As a result, the EPA Office of Inspector General recommended that EPA fund and execute a comprehensive asbestos toxicity assessment to determine (1) the effectiveness of the Libby removal actions and (2) whether more actions are necessary.

Shortly after the Office of Inspector General’s December 2006 report was issued, EPA agreed to conduct additional toxicological and epidemiological studies of the type of asbestos found in the Libby ore. In January 2007, EPA convened a group of more than 30 scientists from EPA, ATSDR, and the National Toxicology Program to identify data gaps and recommend additional studies.
According to EPA, a Libby Asbestos Action Plan initiated at this meeting includes recommendations for 12 additional studies. Detailed work plans for five of these studies have been completed in consultation with other agencies and external peer reviewers. Two other studies are continuations of ongoing efforts. Detailed work plans for the remaining five studies are being finalized. All studies are scheduled to be completed by the end of calendar year 2009, and the milestone date for completing the baseline risk assessment, including the comprehensive toxicity assessment, is the end of fiscal year 2010.

At most of the 13 sites for which EPA had public-notification responsibilities, EPA regions did not implement key notification provisions of NCP. At five sites, EPA regions did not perform notification activities beyond those listed in NCP, even though EPA guidance strongly recommends that the regions do so. State and local government officials had mixed views about how effective EPA was in notifying them about cleanups in their jurisdictions—some state and local officials reported a positive experience working hand-in-hand with EPA, while others said EPA had not notified them at all. Similarly, while community members participating in two of three focus groups were disappointed overall in EPA’s efforts to inform them about cleanups in their neighborhoods, the participants in the third group were very satisfied with EPA’s efforts.

As the lead agency responsible for notifying the public of cleanup activities taking place at 13 of the cleanup sites, EPA was required by NCP regulations to take certain steps, as appropriate, to inform the public about the cleanup activities. All 13 sites were classified as time-critical removal actions, which means EPA must begin cleanup at the sites within 6 months of determining that a removal action is appropriate. Figure 7 shows the locations of the 13 sites. For all 13 sites, EPA was required to take the following public-notification steps:

Designate an agency spokesperson. This representative must inform the community of actions taken, respond to inquiries, and provide information concerning the release of hazardous substances.

Notify affected citizens. The spokesperson must, at a minimum, notify citizens immediately affected by the release of hazardous materials, as well as state and local officials and, when appropriate, civil defense or emergency management agencies.

Create an administrative record. EPA must establish an administrative record containing the documents that form the basis for the cleanup action selected and make this record available for public review.

Notify the public about the administrative record. Within 60 days of initiating cleanup activities, EPA must publish an announcement in a major local newspaper indicating that the administrative record is available for review.

Hold a public-comment period, as appropriate, and respond to comments. From the time the administrative record is made available for review, EPA must provide the public no less than 30 days to comment on the cleanup. EPA must prepare a written response to significant comments.

When time-critical cleanup activities are expected to last more than 120 days, because there is more time for community involvement and outreach, NCP requires that the following additional notification activities be performed, as appropriate:

Establish an information repository.
To provide the public easier access to site-related documents, EPA must establish at least one information repository at or near the location of the cleanup site. At least one repository must have the administrative record file available for public inspection.

Notify the public about the repository. EPA shall inform the public that it has established an information repository and provide notice that the administrative record is available for review. If EPA knows that cleanup activities will extend beyond 120 days, it can publish a single public notice announcing the availability of the repository and the administrative record.

Conduct community interviews. EPA must conduct interviews with local officials, community residents, public-interest groups, or other interested parties, as appropriate, to solicit their concerns, their information needs, and their views on how and when they would like to be involved in the cleanup.

Prepare a Community Relations Plan. Using information gathered from the community interviews and other sources, EPA must prepare a formal Community Relations Plan specifying the community-involvement activities the agency expects to conduct during the cleanup.

According to EPA regional officials, key public-notification provisions of NCP were not implemented at 8 of the 13 cleanup sites. Specifically, regional officials told us the following:

At the Great Falls, Montana, site (Region 8), regional officials did not establish an administrative record, did not place a notice announcing that the record was available for review, and did not hold a public-comment period. According to Region 8 officials, they did not create a formal administrative record because they made a mistake in processing the site’s file and did not discover the mistake until after the cleanup was completed. Before the cleanup, Region 8 did provide an information packet equivalent to an administrative record to the owner of the site where the cleanup occurred and to the state of Montana. Region 8 officials said they have since established a formal standard operating procedure for completing such tasks, which includes assigning tasks to specific personnel and program offices within the region.

At the Denver, Colorado, site (Region 8), although officials established an administrative record, they did not notify the public that the record was available for review and did not hold a public-comment period. The omissions occurred because the employee responsible for placing the notices had retired. During the time the position was vacant, the region did not place public notices for some other removal actions. Region 8 has since filled the position and, in December 2003, established formal procedures for setting up repositories and publishing notices; the procedures include assigning these responsibilities to specific EPA program offices and staff.

For both the Minneapolis, Minnesota, and Dearborn, Michigan, sites (both in Region 5), the region established administrative records and placed notices about their availability, but it did not hold public-comment periods. EPA Region 5 officials explained that they do not believe NCP requires EPA to hold a comment period for removal actions; rather, they said NCP allows EPA latitude to determine whether a comment period is appropriate for removal actions.
Their general view is that a comment period is not appropriate for time-critical and emergency removal actions because such actions need to proceed quickly and because there is typically not a range of options to be considered. In such cases, regional officials said it is more important to focus on other community-outreach and community-relations activities.

At the Wilder, Kentucky (Region 4), Minot, North Dakota (Region 8), and Phoenix, Arizona (Region 9) sites, regional officials posted notices of availability in local newspapers, but they did not place the notices within 60 days of the start of the cleanup as provided in NCP. At two sites, regional officials did not know why the notices were delayed: at the Minot site, the notice was placed 22 days after the deadline and 2 days after the cleanup was completed, and at the Wilder site, the notice was placed 6 days after the deadline. At the Phoenix site, regional officials said the staff person responsible for placing the notice had resigned, and the position was still vacant at the time the notice should have been placed. The notice was placed 42 days after the deadline and 90 days after the cleanup was completed.

At one of the sites located in Salt Lake City (Region 8), regional officials did not prepare a formal community-relations plan, even though regional officials thought the cleanup could take more than 120 days to complete. Region 8 officials explained that, at the time the memorandum justifying the need for the cleanup was issued, it would have been reasonable to expect that the initial scope of the cleanup would be completed within 120 days; additional contamination discovered during a portion of the cleanup then required the completion date to be extended. However, the memo justifying the cleanup indicated the cleanup might exceed 120 days. Specifically, the memo stated that “total costs of the removal action are anticipated to exceed $2 million due to the size of the properties and the extensive amount of soil contamination; and the large amount of excavation and monitoring of landscape restoration may cause the removal to extend past 12 months.” Region 8 officials said that even though a plan was not prepared for this site, the region conducted all substantive community-relations activities that would have been documented in a formal community-relations plan.

Since the 1980s, EPA has issued policy and guidance documents providing more direction to regional offices on how to ensure meaningful public involvement in the agency’s decision-making processes, including decisions related to the cleanup of hazardous waste. The key guidance issued by EPA includes the following:

January 1981. EPA issued its Public Participation Policy, which provided public officials who manage EPA programs with overall guidance and direction about reasonable and effective means of involving the public in program decisions. This policy defined public participation as that part of EPA’s decision-making process that provides opportunity and encouragement for the public to express their views to the agency and assures that the agency will give due consideration to public concerns, values, and preferences when decisions are made.

July 1992. EPA published public participation guidance for on-scene coordinators, who are responsible for directing cleanups.
This guidance stressed the need to (1) inform the public of the degrees and types of risks associated with a site, planned or ongoing actions, and other issues; (2) provide the public with an opportunity to comment on decisions about the site; and (3) identify and respond to community concerns.

April 2000. The Director of EPA’s Office of Emergency and Remedial Response instructed all EPA regional offices to contact related state or tribal and agency officials to notify them of the potential evaluations of sites that received ore from Libby, Montana, and to gather relevant information from these officials and solicit their participation in site activities.

April 2001. The EPA Administrator issued a policy memorandum that endorsed “vigorous public outreach and involvement.”

October 2001. In an effort to encourage more substantive involvement of communities from the very outset of a cleanup, the Acting Director of EPA’s Office of Emergency and Remedial Response issued a policy memorandum supporting “early and meaningful community involvement.” This memo stressed that even if the cleanup is an emergency removal, community involvement should not be neglected or postponed. The memo stated that while initial calls should be to state and local authorities, soon thereafter efforts should be made to reach out to the entire community, which may have a high level of anxiety and concern about health and safety.

April 2002. EPA issued the Superfund Community Involvement Handbook, which contained detailed guidance on how to conduct public-notification activities. This guidance states that while it is up to EPA officials in charge of a site cleanup to decide what public-notification activities are needed based on a site’s circumstances, EPA’s experience has shown that, at most sites, just complying with NCP provisions is not sufficient to adequately meet community needs. This guidance recommends that regions use many other notification activities, such as distributing fact sheets to let residents know about EPA’s activities; hosting public meetings to deliver information to large groups of people; and, if community demographics indicate a need, translating documents into appropriate languages.

September 2002. EPA issued the Superfund Community Involvement Toolkit, which provided EPA community involvement staff with practical, comprehensive, easy-to-use guidance for designing and enhancing community involvement activities. The Toolkit includes guidance on how to conduct both required and recommended notification activities, such as how to place public notices and how to conduct public meetings. The Toolkit indicated an expectation that EPA staff should not just distribute information to the public; they should promote meaningful citizen participation in the decisions affecting sites.

As indicated in table 2, EPA regions varied greatly in the extent to which they followed the agency’s guidance for conducting public-notification activities—with 9 of the 13 sites employing at least some notification activities that went beyond NCP provisions. For the cleanup sites located in Dearborn and Minneapolis (Region 5), EPA engaged in many of the notification activities recommended in EPA guidance that go beyond NCP provisions. For example, at the Dearborn site, EPA coordinated with the Arab Community Center for Economic and Social Services to determine the best approach for providing information about the cleanup to the Arab-American residents living near the site.
EPA also distributed fact sheets, printed newspaper notices in both English and Arabic, went door-to-door to notify residents about the cleanup, hosted two public meetings, and conducted two direct mailings. At the Minneapolis site, EPA went door-to-door to discuss the cleanup with residents, held several public meetings, and distributed fact sheets. However, for the sites located in Glendale, Newark, Phoenix, and Honolulu (Region 9), and for the first phase of the cleanup of the site in Hamilton Township (Region 2), EPA did not engage in notification activities beyond those required by NCP provisions. According to both Region 2 and Region 9 officials, even though residential areas were located near each of these sites, additional community-outreach activities were not performed because the site settings, the limited scope of the removals, and the nature of the removal activities led them to conclude that such activities would not be necessary.

State officials we spoke with were mostly satisfied with EPA’s efforts to inform them about site cleanups in their jurisdictions. That is, state officials for 7 of 12 sites were generally satisfied with EPA’s public-notification efforts (North Dakota officials did not respond to our request for their views about the Minot site). At five of the seven sites (Glendale, Denver, Dearborn, and the two sites located in Salt Lake City), state officials explained that when EPA is the lead agency for a site, they typically expect EPA to inform them about cleanups but do not expect to be involved in the final decision-making process. For these sites, the state officials were pleased with EPA’s efforts to keep them informed about the site evaluations, sampling results, and cleanup activities.

At the other two sites (Minneapolis and Wilder), state officials reported they worked hand-in-hand with EPA officials and were extremely pleased with EPA’s efforts to keep them informed about site activities. For example, officials from the Minnesota Pollution Control Agency (MPCA) collected samples with EPA Region 5 at the Minneapolis site, and officials from both agencies agreed the site needed to be cleaned up. EPA and MPCA held joint public meetings to inform residents about the contamination and went door-to-door in a wide area to determine if residents had taken contaminated waste materials from the site to their homes. Also, Minnesota Department of Health officials reported working closely with EPA and MPCA to review site cleanup plans, ensure that contractors were properly licensed, and obtain access to residential properties so they could be tested for the presence of asbestos. Similarly, for the Wilder site, officials from the Kentucky Department of Environmental Protection (KYDEP) reported that EPA Region 4 officials communicated continually through e-mails, telephone calls, written correspondence, and meetings. KYDEP officials worked closely with EPA at the site, providing general oversight of the cleanup, including the removal and disposal of the asbestos-contaminated materials. They coordinated with EPA on all aspects of the planned removal and reported that EPA staff were very professional, knowledgeable, helpful, courteous, and visible.

For three sites (Honolulu, Great Falls, and Hamilton Township), state officials said they were not satisfied with EPA’s efforts to inform them about cleanup activities.

Honolulu.
Honolulu. Officials from the Hawaii Department of Health (HDOH) said that an EPA Region 9 official stopped by their offices and mentioned that the Honolulu site had received vermiculite ore from Libby, Montana. About a year later, HDOH officials said they were copied on a letter from Region 9 stating that there had been a release of asbestos at the site. Subsequent to receiving this letter, an EPA Region 9 official stopped by the HDOH offices "as a courtesy" to let them know EPA would be conducting a removal action at the site. However, HDOH officials said they did not receive any more information from EPA about the site and that they would have preferred having more advance notice about the cleanup and information about the status of the cleanup as it was being conducted.

Great Falls. An official from the Montana Department of Environmental Quality (MDEQ) was very dissatisfied with EPA Region 8's lack of notification about the cleanup. The site was a residence that was being cleaned up because a former owner of the property, who had worked at a vermiculite processing facility in Great Falls, had taken contaminated waste product home to use on his driveway. The MDEQ official first became aware of the site through an asbestos-abatement contractor who had heard about the cleanup. The MDEQ official said he went to investigate the site because EPA typically coordinates such matters with him. The MDEQ official said he was not sure why EPA did not inform him about the cleanup, but he considered this "slipshod" behavior.

Hamilton Township. Officials from the New Jersey Department of Environmental Protection (NJDEP) said they first learned the site was contaminated with asbestos when they were copied on an EPA Region 2 memorandum stating that the site needed to be cleaned up. They said they received copies of two more EPA reports about the site before being invited to a stakeholder meeting in March 2005 (approximately 1 year after the completion of the first phase of the site cleanup) to discuss the site cleanup. The NJDEP officials said that EPA had improved its public-notification efforts during the second phase of the site cleanup. For example, since the beginning of the second phase, EPA has held several public meetings and issued numerous community updates. The NJDEP officials felt that EPA should have notified them and local government officials about the first phase of the cleanup in the same manner as was done for the second phase. In general, NJDEP officials said EPA could improve public-notification efforts by, among other things, providing additional public notices to state and local officials, keeping the site's Web site up-to-date, and asking for and obtaining feedback from community members about their notification needs and then providing this information to state and local agencies.

For the remaining sites, in Phoenix and Newark, state officials said they were neither entirely satisfied nor entirely dissatisfied with Region 9's efforts to inform them about the site cleanups. Specifically, officials from the Arizona Department of Environmental Quality said they received a report from Region 9 indicating that EPA was assessing sites that had received Libby ore and that the Phoenix site was being assessed. A letter accompanying the report indicated the Phoenix site would be cleaned up, but did not indicate when the cleanup would occur.
While Arizona officials found it helpful that EPA kept them informed about the assessments of sites that had received Libby ore, they said it would have been better if EPA had informed them ahead of time about when the Phoenix site would be cleaned up so they could have been better prepared to answer the public's questions about the cleanup. For the Newark site, an official from the California Department of Health Services said EPA did not provide any information to the department directly about the site. Instead, the department received most of its information from ATSDR, which the official understood was working closely with EPA. Because California Department of Health Services officials view their role in such situations as providing support to ATSDR, the official said the department would not necessarily expect EPA to notify it about site cleanups. However, as a part of its efforts to help ATSDR disseminate information to communities, in September 2003, the California Department of Health Services found that officials in the City of Newark and in the county government were not aware of the cleanup or the site's history (the site cleanup began in April 2002).

Of the seven local governments that provided their views on EPA's efforts to inform them about cleanups within their jurisdictions, three (Dearborn, Minneapolis, and Salt Lake City) said they were satisfied.

Dearborn. City officials said EPA Region 5 did everything that could have been done to inform the public about the cleanup. According to these officials, EPA informed the mayor's office very early in the process and asked the city to appoint a liaison to work with EPA on the site cleanup. City officials also said EPA met with local government officials and the emergency-management coordinator to determine any concerns they might have. Overall, city officials thought EPA was professional, in control of the situation, and cognizant that it needed to maintain frequent contact with the residents.

Minneapolis. City officials said they already had a good working relationship with EPA Region 5 and were impressed with EPA's efforts to be open and available to the community through, among other things, public meetings and door-to-door contacts. They said that EPA was very upfront with city officials, established good credibility with members of the community, and was respected by local activist groups.

Salt Lake City (two sites). Officials from the Salt Lake City government said EPA's interaction with the local government was excellent and EPA staff were always accessible to discuss their concerns. EPA Region 8 staff first called them to explain that the sites had processed asbestos-contaminated ore from Libby and were likely contaminated. When the city public utility offices raised concerns about whether contamination under the streets near one of the sites was a threat to their employees, EPA met with them to address their concerns. Once EPA began the removal action, EPA kept the local government informed via weekly e-mails, three meetings, and a site visit.

There were four sites (Newark, Wilder, Great Falls, and Hamilton Township) where local government officials said they were somewhat to largely dissatisfied with EPA's notification efforts.

Newark. A city official said a Newark Fire Department official first found out about the site cleanup from county health department officials and the California Department of Health Services. After hearing about the contamination and activities at the site, the fire department official informed the city manager and the city's executive team.
City officials said that EPA Region 9 had very little contact with the local government as the cleanup proceeded.

Wilder. A city official said he first learned about the site from a local newspaper reporter and that EPA Region 4 notified the city only after it decided to clean up the site. According to this official, if the city had known earlier, it could have cordoned off the area to prevent children from riding their bikes through the site. The city official was also concerned that EPA did not do enough to contact former workers and identify people who took asbestos-contaminated waste rock from the site to use in their yards.

Great Falls. A city official at the Great Falls site said EPA Region 8 did not notify the city about the cleanup. After finding out about the cleanup from an asbestos-abatement contractor, the city official decided to investigate the site. The city official discovered that the EPA contractor performing the removal was not licensed to do work in the city. In the opinion of the city official, EPA should have notified the state government about cleanup activities and should have asked the local government to appoint a liaison to work with EPA on matters concerning the cleanup.

Hamilton Township. During the first phase of the cleanup, township officials said, EPA Region 2 invited an official from the Hamilton Township Department of Health to visit the site. During this visit, township officials said, the township health department official was told that EPA was going to clean up the site. Township officials said that other than EPA's request for a permit to place a construction trailer on the site, they did not receive any further communication until after the first phase of the cleanup was completed. At that time, township officials said, the New Jersey Department of Health asked the Hamilton Township Department of Health to help organize a public meeting about the second phase of the cleanup; the Hamilton Township Department of Health then informed the mayor's office about the cleanup. According to township officials, while EPA did place an administrative record for the site in the local library, the agency did not notify local officials that it was available for review. Township officials said that since the second phase of the cleanup began, EPA has been doing a "great job" keeping local officials informed. According to township officials, the catalyst for change was getting the mayor's office involved in the cleanup. In their opinion, because staff in mayors' offices can help ensure that communities are informed and that all parties are working together, it is important for EPA to keep mayors' offices informed about cleanup activities.

Ultimately, it is the affected community members who most need information about the health risks posed by the presence of asbestos contamination in their neighborhoods. Accordingly, to obtain detailed insights into the effectiveness of EPA's efforts to reach these individuals, we conducted focus groups at three sites—Hamilton Township, New Jersey; Minot, North Dakota; and Dearborn, Michigan.
We discussed five key issues at these locations: (1) how the community members first became aware of the cleanup; (2) the content, visibility, and usefulness of the public notices EPA placed to inform the community about the cleanups; (3) overall views of EPA's efforts to notify the community about the cleanup; (4) information about site cleanups that community members need; and (5) the best methods to reach out and inform affected members of the community. Overall, participants in Dearborn were supportive of EPA's efforts, but their counterparts at the other two sites generally characterized EPA's notification efforts as ineffective.

According to the NCP provisions, EPA must at a minimum notify immediately affected citizens and others of cleanup activities. EPA notification guidance recommends that EPA perform outreach and other community-involvement activities as early as possible. For example, the guidance suggests EPA could meet with local officials, media, and residents during the initial site assessment to explain EPA's removal program. At two of the three sites, however, most focus-group participants said EPA did not notify them about the cleanups before they began. At Minot, nearby residents said they did not know anything about the cleanup until they saw contractors in "space suits" working at the site. At Hamilton Township, most focus-group participants said they found out about the site cleanup through articles in local newspapers. In contrast, participants in the Dearborn focus group said they first heard about the cleanup when EPA officials canvassed the neighborhood delivering letters explaining what was happening at the site and through public meetings in the neighborhood.

The NCP public-notification provisions state that within 60 days of initiation of cleanup activities, EPA must publish an announcement in a major newspaper indicating that the administrative record, which discusses EPA's planned cleanup action, is available for public review. Furthermore, the provisions state that EPA must provide a public-comment period, as appropriate, of not less than 30 days from the time the administrative-record file is made available for public inspection. EPA guidance describes critical information that should appear in public notices and states that they should contain (1) background information about the site, which may include the location of the site and the contaminant involved; (2) the location of the information repository and the hours during which the repository is open; (3) the dates of the public-comment period, if applicable; (4) the time, date, and location of the public meeting, if applicable; and (5) the name of the agency contact to whom written comments on the administrative-record file should be addressed. The guidance also states that public notices should be placed in well-read sections of newspapers and specifically indicates that if a well-written notice is hidden in the classified section of a newspaper, it will not reach many people. The guidance also recommends using a simply stated message in easily understood language. It even includes WordPerfect® templates of public notices with graphics to help regional staff easily modify the text to fit site-specific needs. Based on this guidance, the notices EPA placed for all three focus-group sites were deficient in some respects.
In particular, the notice for the Hamilton Township site did not give the address of the site, did not mention the contaminant of concern, and did not provide the dates of the public-comment period. This notice also appeared in the classified section of a local newspaper among many other classified advertisements. Figure 8 shows the content and placement of the Hamilton Township notice. Although the notice for the Minot site appeared in a well-read section of a local paper, it was printed in very small type and did not identify the contaminant of concern or the dates of the public-comment period. In contrast, the notice for the Dearborn site appeared in well-read sections of multiple newspapers and contained all the critical information except the hours during which the repository would be open (see fig. 9).

We asked participants from the three focus groups to evaluate the usefulness of the public notices that EPA had placed for the sites in their neighborhoods. Focus-group participants at two of the sites (Hamilton Township and Minot) said they did not see the notices when they were published. After examining the notices during the focus-group meetings, all the participants said the notices did not indicate a threat to their health, did not suggest that they should seek out additional information, and did not convey that there was a site in their neighborhood contaminated with a hazardous material. For the Hamilton Township site, one participant said the notice gave the impression that all the studies had been completed and nothing more was to be done. For the Minot site, the participants said the notice was in such small print that it would be hard to find in a newspaper, especially if the notice ran for only one day. Another participant from Minot said they would probably have ignored the notice because it did not convey useful information and was bureaucratic and vague. After examining the Minot notice, one participant who owns a business in the city commented, "I run ads for a living, and if I ran ads like that, our company would've been broke a long time ago." All but one of the participants in the Dearborn focus group said they had seen the notice for the site when it was published, and all the participants commented that it was placed in a well-read section of a newspaper and conveyed useful information up front. This information included the address of the site, the contaminant involved, essential information about a public meeting, and contacts for further information. When the Dearborn group compared the notice for that site with the Hamilton Township notice, they commented that the Dearborn notice was much clearer and that the Hamilton Township notice lacked key information, such as the location of the site and the contaminant of concern.

For two of the three focus groups (Hamilton Township and Minot), participants reported that EPA's efforts to inform them about the cleanups were largely ineffective. For the Hamilton Township site, most of the participants said they did not receive any fliers or any other information from EPA about the cleanup. None of the participants in the Minot focus group said they had heard anything about the cleanup before it began, even though they all lived close to the site. None said they had received any fliers or seen EPA officials walking around the neighborhood. One participant, whose backyard borders the site, said he noticed workers in hazmat suits working at the site and asked them what they were doing.
The participant said the engineer in charge of the cleanup provided him with information and agreed to set up air monitors to ensure that he and his neighbors were not exposed to elevated levels of asbestos during the cleanup. None said they had heard about the administrative record for the Minot site or about any opportunities for providing comments to EPA. In contrast, participants in the Dearborn focus group said EPA effectively informed the community about the cleanup. They reported that EPA held several public meetings and even had a wrap-up meeting after the cleanup was completed. The participants said all the notices, fliers, and letters had contact information on them in case the residents had questions, and EPA had an information trailer at the cleanup site where residents were welcome to stop in with their questions. In addition, according to the participants, EPA officials were always readily available to respond to concerns. For example, when EPA became concerned that some residents might have taken the contaminated waste product home to use in their yards, the participants said EPA walked around the neighborhood and hand delivered letters asking permission to access people's properties for inspection. Also, according to one participant, when some residents expressed concern about the spread of contamination during windy conditions, EPA set up monitors and stopped work at the site when the wind speed went above a certain level. Finally, because of the number of Arab-American residents in the community, participants said EPA provided notices and letters in both English and Arabic.

For those focus-group participants who did not have an opportunity to ask EPA questions about the site cleanups, we asked what information they would have wanted EPA to provide. While Dearborn participants said they had ample opportunities to ask EPA questions and received the information they needed, participants in the other two focus groups (Hamilton Township and Minot) said they would have asked questions about the following:

Sampling, including what areas EPA sampled; whether there would be any off-site sampling; the results of the sampling; and how they could be sure their property was not contaminated.

Conduct of the cleanup, including what areas are being cleaned up; how the soil will be removed and what precautions will be taken to keep asbestos fibers from becoming airborne; how EPA will dispose of the contaminated dirt; whether there will be a follow-up information session after the cleanup is completed; and whether there will be continued monitoring for a designated period of time after the cleanup.

Health risks, including what health risks are associated with the site cleanup; what health risks are present before the site is cleaned up; and who the contact is for questions about the risks and health effects associated with asbestos exposure.

In the three focus groups, community members suggested several methods of notification that would have helped them understand the situation. In general, participants from all focus groups thought using multiple methods of communication would help ensure that more people are informed about cleanups. One participant pointed out, for example, that if someone does not read a notice about a cleanup in a newspaper, they might instead find out about it by reading a flier placed on their door. Participants from all three groups agreed that fliers, letters, public meetings, and door-to-door contacts were effective.
Some in the Hamilton Township focus group commented that since they received automated phone calls to remind them to vote, perhaps EPA could provide information about cleanups in a similar fashion. Some participants in the Dearborn focus group stressed the importance of including contact numbers on all handouts, fliers, and letters. In addition, some of the Dearborn participants said that it was useful to have the trailer at the cleanup site.

To its credit, EPA has agreed to undertake a risk and toxicity assessment for the type of asbestos found in Libby ore. It expects to complete the assessment in 2010. Until then, EPA cannot be assured that of the 271 sites that it assessed, only 19—those generally exceeding thresholds for asbestos contamination—need to be cleaned up; nor can it be assured that the sites still having detectable levels of asbestos do not pose a risk to public health and the environment. As we noted, the thresholds EPA used are not health-based. Furthermore, the methods EPA used to determine levels of asbestos contamination early in its assessment process are not as accurate as currently available methods. Resampling the sites that EPA initially sampled with these newly available and more reliable sampling and analytical techniques would be a major commitment for EPA; this step may nonetheless need to be taken for at least some of these sites to provide a more accurate assessment of the threats they pose. Hence, in addition to identifying a defensible health-based threshold, EPA will also need to assess the implications of the new sampling and analytical techniques in deciding which sites may still need to be cleaned up.

Community members who live and work near sites where hazardous materials are being removed need to understand how cleanups are being conducted and have opportunities to voice any concerns they have. While EPA has recognized the need to obtain early and meaningful community involvement in cleanup decisions, and has taken actions in recent years to strengthen its efforts to inform the public, we found that at the 13 sites where asbestos contamination from Libby ore was being cleaned up, several of the EPA regions did not fully implement NCP notification provisions and some did not adhere to the notification guidance. We believe this provides sufficient indication that similar problems may be occurring at other cleanup sites nationwide where EPA is responsible for conducting public-notification activities. Also, the feedback that we received during focus groups from community members living and working near cleanup sites indicates, among other things, that the notices EPA relies on to inform community members about cleanup activities were deficient in some respects.

We recommend that the EPA Administrator direct the Assistant Administrator for Solid Waste and Emergency Response to determine (1) the manner and extent to which newly available sampling and analysis techniques should be used to re-evaluate the threat that the sites receiving Libby ore may pose to human health, and (2) whether any additional sites that received the Libby ore need to be cleaned up when the results of the risk and toxicity assessment—now scheduled to be completed in 2010—are available.
We also recommend that the Administrator direct the Office of Solid Waste and Emergency Response to review regional offices' implementation of the National Contingency Plan public-notification provisions and associated guidance and ensure that, in the future, (1) regional offices appropriately determine the extent of community outreach needed and (2) newspaper notifications are prominent and written in clear language that contains all critical information, such as the name of the contaminant, the location of the site, and the associated health risks.

We provided a draft of this report to EPA and ATSDR for comment. EPA responded in a letter dated September 21, 2007, which indicated that it generally agreed with our recommendations and said that the agency is taking steps to address many of the issues identified in the report. Both EPA and ATSDR also provided technical comments, which we incorporated as appropriate. Appendix II includes EPA's September 21, 2007, letter, along with our point-by-point responses to its individual comments.

We are sending copies of this report to the congressional requesters and other interested parties. In addition, we will send copies to the EPA Administrator, the Secretary of Health and Human Services, and the Secretary of Labor. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

We were asked to (1) describe how the U.S. Environmental Protection Agency (EPA) and other federal agencies assessed and addressed potential risks at the facilities that received asbestos-contaminated vermiculite ore from a mine in Libby, Montana, and the results of these efforts; and (2) determine the extent and effectiveness of the EPA regions' efforts to notify the public about the cleanup of facilities that received the contaminated ore. Because of concerns of the Department of Justice and EPA that our work could affect an ongoing federal criminal case against W.R. Grace—the company that owned the vermiculite mine in Libby, Montana, and some of the processing facilities that received ore from Libby—and the need to avoid undue influence in the case, we designed our methodology to minimize direct contact with EPA staff. Accordingly, we obtained most of the information we needed about EPA's assessments of the sites that received Libby ore and the agency's public-notification activities at the sites that were cleaned up by submitting questions to EPA in writing; the agency provided written responses. We did not further pursue access to this information because we had sufficient data to respond to our objectives.

To address the first objective, we obtained from the U.S. Department of Health and Human Services' Agency for Toxic Substances and Disease Registry (ATSDR) a table of sites that had potentially received contaminated ore from Libby, Montana. This table was largely based on data that ATSDR received from EPA about each of the sites identified as receiving ore from the Libby mine. The table included, for each site, the location, type of facility, and limited information on the status of EPA's assessments of the sites as of April 2003. The table also included information on the amount of ore received by each site as of April 2001.
After revising the table to include only the information needed to address our objectives, we sent the revised table to EPA and requested that EPA verify, update, and complete the information in the table. We also submitted in writing a set of questions to clarify the data in the table and a set of questions to assess the reliability of the information in the table for the purposes of our report, focusing mainly on the data about the amount of ore received by each site. From March 2006 to May 2007, through a series of written exchanges, we obtained EPA's responses to our written questions and information about the site data, which are reflected in this report. Based on EPA's responses regarding the accuracy and completeness of the information in the table of sites, we determined the data are adequate to provide conservative estimates of the amount of ore received by each site. We also collected and analyzed relevant documentation about sites from EPA's Superfund record centers, which are public repositories. In addition, we collected and analyzed ATSDR's health consultations prepared for selected sites that received ore from Libby, Montana.

We also obtained and analyzed several documents that relate to EPA's actions to clean up sites in Libby, Montana, and the sites that received Libby ore. These documents included the following: the National Contingency Plan (NCP) regulations that implement the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA); February and April 2000 memoranda from EPA's Director of the Office of Emergency and Remedial Response to all EPA regions regarding assessment of sites that received Libby ore; a 2001 EPA Office of Inspector General report entitled "EPA's Actions Concerning Asbestos-Contaminated Vermiculite in Libby, Montana"; GAO's 2003 report entitled "Hazardous Materials: EPA's Cleanup of Asbestos in Libby, Montana and Related Actions to Address Asbestos-Contaminated Materials"; and an August 2004 memorandum from the Director of EPA's Office of Superfund Remediation and Technology Innovation to EPA regions regarding clarification of asbestos cleanup goals.

To address the second objective, we limited our review to the 13 sites that were being cleaned up and for which EPA had public-notification responsibility. These sites were located in Phoenix, Arizona; Glendale and Newark, California; Denver, Colorado; Honolulu, Hawaii; Wilder, Kentucky; Dearborn, Michigan; Minneapolis, Minnesota; Great Falls, Montana; Minot, North Dakota; Hamilton Township, New Jersey; and two sites located in Salt Lake City, Utah. We interviewed officials from EPA's Office of Solid Waste and Emergency Response to obtain general information about the public-notification provisions to which EPA is subject and any guidance that EPA has issued instructing regional offices about their responsibilities for complying with these provisions. In April 2006, we submitted structured questions in writing to EPA's headquarters and 10 regional offices to determine compliance with public-notification provisions and any additional community-notification efforts that took place at the 13 sites. From April 2006 to May 2007, through a series of written exchanges, EPA provided responses to these questions and various follow-up questions in writing. We developed sets of structured questions to assist in obtaining state and local government officials' perspectives on the public notification that took place in communities where cleanups occurred.
To identify the state and local government agencies involved in the cleanups and the officials in those agencies most knowledgeable about the notification that took place at each site, we obtained some names from the administrative records for the sites being cleaned up. In some cases, we asked EPA to provide the names of state and local agencies or officials they worked with during the cleanups. For sites where we only had the name of an agency, we called the agency and asked for the person who would be most knowledgeable about the site. We conducted these interviews in person and by telephone. We interviewed officials in the following state offices: Arizona Department of Environmental Quality, California Department of Toxic Substances Control and California Department of Health Services, Colorado Department of Public Health and Environment, Hawaii Department of Health, Kentucky Department for Environmental Protection, Michigan Department of Environmental Quality, Minnesota Department of Health and Minnesota Pollution Control Agency, Montana Department of Environmental Quality, New Jersey Department of Environmental Protection and New Jersey Department of Health and Senior Services, and the Utah Department of Environmental Quality. We also interviewed officials from the following local governments: Newark, California; Alameda County, California; Wilder, Kentucky; Dearborn, Michigan; Minneapolis, Minnesota; Great Falls, Montana; Hamilton Township, New Jersey; Minot, North Dakota; and Salt Lake City, Utah.

To obtain community members' perspectives on the extent and effectiveness of EPA's public-notification efforts, we conducted focus groups to gather qualitative information about their attitudes, beliefs, and perceptions. To ensure geographic diversity, four focus groups were conducted in Wilder, Kentucky; Dearborn, Michigan; Minot, North Dakota; and Hamilton Township, New Jersey. To help compare notification practices across EPA regional offices, we selected sites that were located in different EPA regions. Other criteria for selection included the amount of ore received and whether the cleanup action had been completed or was ongoing. We contracted with a marketing research firm, Marketing Systems Group, to obtain randomly selected names, addresses, and telephone numbers of 100 community members who lived or worked within a half-mile radius of each of the sites. We mailed a letter and brief questionnaire to each randomly selected community member to provide some background information about our study, obtain information about the number of years they had lived in the communities, and determine whether they would be willing to participate in a focus group. We contacted the community members who returned questionnaires indicating they would be willing to participate. To increase the number of focus-group participants, we called the community members who did not return questionnaires to determine if they could participate. We also contacted former workers and their family members who lived in each community to determine if they would be willing to participate in focus groups. The focus groups had between 4 and 14 participants. In conducting the focus groups, the focus-group moderator encouraged the participants to speak freely.
Following a GAO-developed discussion guide, the moderator asked the participants to give their perspectives on (1) how they first became aware of the cleanups, (2) the content and usefulness of public notices about the cleanups, (3) EPA's overall efforts to notify their communities about the cleanups, (4) information that community members need about site cleanups, and (5) the best methods for informing them about the cleanups. While generating mailing lists for the focus-group sites, the contracting firm inadvertently provided contact information for residents who lived more than one-half mile from the Wilder site. After the error was discovered, the contractor provided corrected contact information for residents within a half-mile of the site. However, because the people who attended the Wilder focus group were either former workers or residents who lived more than one-half mile from the site, we decided not to include the results of that focus group in this report.

We also obtained and analyzed several documents that related to EPA's responsibilities for notifying the public about cleanups at sites that received Libby ore. These documents included the public-notification provisions of the NCP regulations that implement CERCLA, as amended; EPA's 1981 Public Participation Policy; EPA's 1992 Public Participation Guidance for On-Scene Coordinators; EPA's 1997 guidance on Publishing Effective Public Notices; EPA's 2002 Superfund Community Involvement Toolkit; EPA's 2002 Superfund Community Involvement Handbook; and EPA's FY 2006/2007 Superfund Program Implementation Manual.

We performed our work from August 2005 to October 2007 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Environmental Protection Agency's letter dated September 21, 2007.

1. GAO does note in the report that the Great Falls site in Region 8 involved a single residence where a former worker at a facility that processed Libby ore had taken contaminated waste product from the plant to his residence to resurface his driveway. While GAO acknowledges that the privacy of the homeowner should be considered, providing the public with information about such contamination could alert others who also used the waste ore for similar purposes on their properties. Our review of information on EPA's evaluations of sites that received Libby ore revealed that one of the primary concerns was whether former employees or the general public took asbestos-contaminated waste ore from the sites to use in their gardens or to landscape their properties. Indeed, a review of EPA documentation related to the plant in Great Falls where the homeowner worked indicates that another former employee interviewed by EPA stated that some people requested and were allowed to take dust left over from the processing of the ore to use in their gardens. While GAO acknowledges that it is important to consider community concerns in deciding the extent of public notification needed during site cleanup, for the sites that received Libby ore, widely disseminating information about these sites to the general public could help identify former workers and others who could have been exposed in the past to the asbestos in the ore. These people, in turn, could provide valuable information that could help EPA in identifying contaminated areas that need to be cleaned up, such as where waste rock was dumped.
2. We did not make the suggested change. The statement is factually accurate, and we already note the discretionary nature of the relevant NCP provisions in the report. Region 5's reasoning for not holding public comment periods for these sites is also reflected in the report.

3. We changed the text to read, "Thus, even though the Libby mine closed around 1990, many residents, former workers, and others who were exposed to the asbestos-contaminated ore recently have been diagnosed with asbestos-related diseases and many more may become ill in the future."

4. We changed the text to read, "Between 1980 and 1982, EPA issued a series of reports related to asbestos-contaminated vermiculite. Most of these reports indicated that there was a lack of data on both exposure to asbestos-contaminated vermiculite and its adverse health effects. Further, the reports identified problems in sampling, analysis, and reproducibility of data regarding low levels of asbestos in vermiculite, which made it difficult to acquire data on exposure and health effects."

5. We added a footnote stating that EPA cited and fined W.R. Grace in the early 1990s for failure to submit relevant information under the Toxic Substances Control Act.

6. We changed the text throughout the report as appropriate to clarify that EPA is cleaning up properties in the Libby area.

7. We revised the report to include the following statements: "As part of an ongoing criminal case against W.R. Grace, the government has alleged that Grace engaged in a conspiracy to defraud EPA and the National Institute for Occupational Safety and Health by concealing and misrepresenting the nature of the asbestos-containing vermiculite produced at the mine. Grace has denied the allegations."

8. Under the NCP, a removal site evaluation includes a removal preliminary assessment and, if warranted, a removal site inspection. 40 C.F.R. § 300.410(a). A preliminary assessment includes, among other things, an "evaluation of factors necessary to make the determination of whether a removal is necessary." 40 C.F.R. § 300.410(c)(1)(iv). We now use this language in the report.

9. We changed the language to read, "These 195 sites are believed to have received a combined total of at least 6 million tons of ore from the Libby, Montana mine and ore processing operations."

10. We changed the text throughout the report as appropriate to clarify that the 1 percent asbestos standard is based on the percentage of the area of a microscopic field.

11. We did not make any changes as a result of this comment because the report already includes a discussion of this memorandum.

12. EPA had previously indicated to us that the Removal Evaluation Report was pending for this site. We interpreted this as meaning that the final decision had not been made. The report has been updated to indicate that a final assessment decision has been made for the site in Brutus, New York, and the Region is drafting the report to document this decision.

13. We made the change suggested by EPA.

14. We clarified the language to read, "Since the plants no longer process Libby ore, current residents living around the sites are no longer being exposed through air emissions from processing activities at the plants."

15. We did not make any changes based on this comment because it was for additional information and clarification and was not intended to suggest a specific change to the report.

16. The sentence is now complete. It reads, "After conducting additional sampling at one of these sites, EPA determined the site required further cleanup."
17. We did not make any changes based on this comment because the text is part of a footnote.

18. We changed the text to read, "Detailed work plans for five of these studies have been completed with consultation from other agencies and external peer reviewers. Two other studies are continuations of ongoing efforts. Detailed work plans for the remaining five studies are currently being finalized. All studies are scheduled to be completed by the end of calendar year 2009. The milestone date for completing the baseline risk assessment, including the comprehensive toxicity assessment, is the end of fiscal year 2010."

19. No change was made; the text in the bullet already contains the phrase "as appropriate."

20. We did not make the suggested change. The statement is factually accurate, and we already note the discretionary nature of the relevant NCP provisions in the report.

21. We made the change suggested by EPA.

22. We did not make the suggested change. We already noted the discretionary nature of the relevant NCP provisions in the report.

23. We revised the text to read, "and at the Wilder site, the notice was placed 6 days after the deadline."

24. We made the changes suggested by EPA.

25. We did not make the suggested change. In our correspondence with EPA about the Honolulu site, EPA indicated that the Hawaii Department of Health was involved in the cleanup. We contacted the Hawaii Department of Health and were directed to officials identified as being knowledgeable about the cleanup. The views expressed in the report are those of the officials we were directed to. During our interview with these officials, they stated that the state OSC did drive by the site before the cleanup began, but said the state was not involved around the time of the removal.

26. We did not make the suggested change. In the case of the Great Falls site, we called the Montana Department of Environmental Quality and asked to speak to the state staff EPA said were involved with this site. We were directed to another person identified as being knowledgeable about the cleanup. The views expressed in the report are those of the official we were directed to. In response to EPA's comment, we tried to contact the two staff named by EPA again. One person was no longer working for the Montana Department of Environmental Quality, and the other person said the official that we spoke to originally was the main contact for that site and that he had nothing to add to the information we already had about the site.

27. We sent a copy of EPA's comments to the New Jersey Department of Environmental Protection (NJDEP) for their review. These officials responded that they agree with GAO's summary of NJDEP's comments as presented in the report. They further stated that, concerning the Phase I removal action at the Hamilton Township site, NJDEP continues to maintain that EPA's notice to NJDEP of the Phase I removal action at the site could have been better. The officials said the March 24, 2000, meeting referred to in EPA's comments was a regularly scheduled, biannual meeting between NJDEP's Emergency Response Bureau and EPA's response unit to discuss general removal activities and to coordinate the activities of the Region 2 states (New York and New Jersey) with those of the EPA.
NJDEP officials said the attendees at this meeting remember a short discussion about the probability that the vermiculite ore from Libby, Montana, contained asbestos and that this ore was shipped throughout the United States, but none of the attendees construed this as official notification to NJDEP of asbestos contamination at the Hamilton Township site. NJDEP added that the "inventory of sites" and the "Agency Statement on Vermiculite Facility List" sent by EPA to NJDEP following the March 2000 meeting specifically stated that the "list is evolving and is subject to change as more information becomes available; therefore, EPA cannot verify the accuracy of this list." NJDEP did not view these documents as any kind of official notification of a cleanup action to be undertaken at the Hamilton Township site. NJDEP reiterated that it first learned of the proposed removal action at the Hamilton Township site not in 2000, but rather only when it was copied on a November 6, 2002, Action Memorandum. It subsequently was copied on two Pollution Reports, dated January 30, 2004, and February 27, 2004, but did not learn that the removal action was completed until March 2005, when NJDEP attended a stakeholder meeting. During the time of the Phase I removal action, NJDEP said that it does not dispute that EPA communicated with Janet Smolenski of NJDEP by copying her on the two Pollution Reports referenced above and by general telephone conversation(s) with Jim Daloia of EPA. The officials said there are no other records in NJDEP's files to indicate that EPA sent any additional Pollution Reports to NJDEP, nor are there records of the specific telephone conversations held.

28. We did not make the change suggested by EPA. In the case of the Wilder site, we called the city of Wilder and asked to speak to the staff member with the most knowledge about the cleanup. This person was also listed as a city contact in EPA's community-relations plan for the Wilder site. The views expressed in the report are those of that official.

29. We did not make the first change suggested by EPA. In the case of the Great Falls site, we called the city of Great Falls and were directed to a person identified as being knowledgeable about the cleanup. The views expressed in the report are those of the official to whom we were directed. Regarding the contractor licensing issue raised by a city official, we noted the information that EPA provided in a footnote.

30. Table 2 of the draft report already indicates that, for the Minot site, EPA distributed fact sheets, held a public meeting, and went door-to-door to discuss the removal action. The views presented in the report are those of residents who lived within a half-mile of the Minot site. In fact, as pointed out in the report, one focus-group participant's backyard bordered the cleanup site. GAO cannot explain why EPA's public-notification efforts apparently failed to reach the participants in the focus group.

31. Focus-group participants were asked if they had heard that EPA was cleaning up the sites before the cleanup started, including receiving any fliers from EPA, hearing about any public meetings sponsored by EPA, or seeing any EPA officials walking around their neighborhoods. For the Minot site, the views presented in this report are those of residents who lived within a half-mile of the site. In fact, as pointed out in the report, one focus-group participant's backyard bordered the cleanup site.
GAO cannot explain why EPA's public-notification efforts apparently failed to reach the participants in the focus group.

32. We clarified the language to avoid any inference that sites that were cleaned up to non-detectable levels still pose a risk.

In addition to the individual named above, Steve Elstein, Erin Lansburgh, David Stikkers, and Lisa Turner made key contributions to this report. Also contributing to the report were Richard Johnson, Jeremy Manion, Stuart Ryba, Stephanie Sand, Carol Shulman, and Monica Wolford.
Between 1923 and the early 1990s, a mine near Libby, Montana, shipped millions of tons of asbestos-contaminated vermiculite ore to sites throughout the United States. In 2000, the Environmental Protection Agency (EPA) began to clean up asbestos contamination at the Libby mine and evaluate those sites that received the ore to determine if they were contaminated. Under Superfund program regulations and guidance, EPA regional offices took steps to inform affected communities of contamination problems and agency efforts to address them. GAO was asked to (1) describe the status of EPA's and other federal agencies' efforts to assess and address potential risks at the facilities that received contaminated Libby ore and (2) determine the extent and effectiveness of EPA's public-notification efforts about cleanups at sites that received Libby ore. GAO, among other steps, convened focus groups in three of the affected communities to address these issues.

Since 2000, EPA has evaluated 271 sites thought to have received asbestos-contaminated ore from Libby, Montana, but did so without key information on safe exposure levels for asbestos. Based on these evaluations, 19 sites were found to be contaminated with asbestos from the Libby ore and needed to be cleaned up. EPA or the state of jurisdiction generally led or oversaw the cleanups. In general, a cleanup would be performed if sampling results indicated asbestos was present in amounts greater than 1 percent (based on the percentage of the area of a microscopic field) in soils or debris or greater than 0.1 asbestos fibers per cubic centimeter of air. However, these standards are not health-based, and the Agency for Toxic Substances and Disease Registry found that the sampling and analysis methods EPA used at most of the sites it examined were limited and have since been improved. The EPA Office of Inspector General reported in December 2006 that EPA had not completed an assessment of the toxicity of the asbestos in the Libby ore. Until it completes this assessment, EPA cannot be assured that the Libby site itself is cleaned to safe levels, nor will it know the extent to which the sites that received Libby ore may need to be reevaluated. EPA has agreed to complete a risk and toxicity assessment by the end of fiscal year 2010.

EPA regional offices did not implement key provisions of the agency's public-notification regulations at 8 of the 13 sites for which EPA had lead responsibility. At four sites, for example, EPA either did not provide and maintain documentation about the cleanups for public review and comment or did not provide for a public comment period. Also, although EPA guidance emphasizes that simply complying with the public-notification rules is often insufficient to meet communities' needs, at five sites EPA did not go beyond these provisions. Reaction among community members to EPA's public-notification measures was mixed. At two of the three sites at which GAO held focus groups with affected community members, participants were critical of EPA's efforts to inform them about the cleanup of the asbestos-contaminated sites in their neighborhoods. These included participants in Hamilton Township, New Jersey, and Minot, North Dakota, who noted that newspaper notices did not identify asbestos as the contaminant in question and contained unclear and bureaucratic language. On the other hand, participants in Dearborn, Michigan, praised EPA's efforts to, among other things, hold public meetings and hand-deliver written notices.
WIA authorizes the National Emergency Grant program and funds the program through its dislocated worker funding stream. This funding stream is one of three specified by WIA to fund services for its client groups—dislocated workers, youth, and adults. Dislocated workers include individuals who have been terminated or laid off, or who have received a notice of termination or layoff; individuals who were self-employed but are unemployed as a result of general economic conditions in the community or natural disasters; and unemployed or underemployed homemakers who are no longer supported by family members. Under WIA, the Secretary of Labor retains 20 percent of dislocated worker funds in a national reserve account to be used for national emergency grants, demonstrations, and technical assistance and allots the remaining funds to each of the states, local workforce boards, and other entities that demonstrate to the Secretary the capacity to respond to the circumstances relating to particular dislocations. Of the amount the Secretary reserves in any program year, at least 85 percent must be used for national emergency grants (see fig. 1). This amount was approximately $232 million during program year 2004 and $110 million during the first 2 quarters of program year 2005, for a total of $342 million.

National emergency grants expand WIA services that are available to dislocated workers when dislocated worker formula funds are insufficient to meet the needs of affected workers. Under WIA, dislocated workers can receive three levels of services—core, intensive, and training. Core services include job search and placement assistance, preliminary skill assessments, and the provision of labor market information, and are routinely available to anyone seeking assistance through a WIA service center. Dislocated workers who need additional services to find work can receive intensive services, such as case management and comprehensive assessments. In addition, dislocated workers may also qualify for training services, including occupational skills training, on-the-job training, skill upgrading, and entrepreneurial training.

Typically, state workforce agencies apply for national emergency grants and distribute funds to local workforce boards in areas affected by the dislocations. These boards, in turn, usually contract with organizations that provide services or administrative support. Grantees can apply for grants that fund three major types of projects:

regular grants to retrain workers who have lost their jobs because of plant closings, layoffs, or military base realignments or closures;

disaster grants to provide temporary employment, humanitarian services, and retraining for workers affected by natural disasters and other catastrophic events; and

dual enrollment grants to provide supportive assistance such as case management services and vocational assessments to workers certified by Labor to receive training under the Trade Adjustment Assistance Reform Act of 2002. These grants are usually for workers who have lost their jobs because of increased imports from, or shifts in production to, foreign countries.

Like other programs authorized under WIA, national emergency grant projects must be designed to achieve performance outcomes that support Labor's performance goals.
Also, Labor requires grantees to collect data from local projects, certify the accuracy of the data, and use the data to complete various reports, such as the quarterly progress reports for national emergency grants and the state's Workforce Investment Act Standardized Record Data (WIASRD) submissions. Quarterly progress reports include project-level information on actual performance to date—for example, the number of individuals participating in a project; the services provided, such as intensive services or training; and the number who entered employment. WIASRD is a national database of individual records containing characteristics, activities, and outcome information for all enrolled participants who receive services or benefits under WIA, including national emergency grants. The database includes the services and training that each participant received and information on their subsequent employment status and wages. In coordination with federal agencies, the Office of Management and Budget developed uniform evaluation measures, called "common measures," for job training and employment programs and other cross-cutting programs. The common measures were designed to institute uniform definitions for performance—such as the percentage of participants who become employed—across federal workforce programs. Beginning in July 2005, national emergency grant projects became subject to the common measures, and Labor expected grantees to include them in their WIASRD data collection and reporting activities for program year 2004.

In program year 2004, Labor funded a special type of grant, called a base realignment and closure (BRAC) planning grant. These grants provided resources to states and communities to plan for anticipated base closures, unlike regular grants, which provide more general employment-related services for dislocated workers. Accordingly, states that could be affected by BRAC actions were eligible to apply for national emergency grant funds. Labor issued guidance in May 2005 that explained the procedures for obtaining these grants. This guidance also specified that applicants must submit their applications by June 10, 2005.

Labor's Office of National Response, in the Employment and Training Administration (ETA), administers the National Emergency Grant program. Headquarters and regional staff share responsibility for program administration and oversight. At headquarters, officials make grant award decisions and determine whether grants will be awarded in a single payment or in increments. For grants disbursed in incremental payments, grantees are required to submit supplemental information along with their requests for future funding increments. Labor has established an internal goal of making grant award decisions within 30 working days. After grants are awarded, regional officials assume the lead role in conducting monitoring and oversight activities. For example, after the grant is approved, regional officials review and approve the project operating plan and budget, conduct at least one site visit that examines project activities, and review quarterly progress reports and financial reports.

In program year 2004, Labor distributed about $232 million from the dislocated worker fund for national emergency grants to 43 states, the District of Columbia, and three territories (see fig. 2). The funding levels of these grants varied greatly.
Labor awarded the largest proportion of funds to Florida—$76 million in grant funds, or 33 percent of the program's total funds awarded during that year—mostly in the form of disaster grants to help the state respond to the needs of workers displaced as a result of hurricane damage. Ohio and California each received over $20 million in grants, primarily to help them meet the needs of workers displaced as a result of floods and storm disasters. Other states, such as Maine and Massachusetts, each received over $6 million, mainly to help them meet the needs of workers dislocated because of plant closings and downsizing, and Oregon received over $2 million, in part to help workers dislocated because of competition from foreign countries. Over the past 5 program years, Labor has awarded proportionally more of its national emergency grant funds for disaster grants and a smaller proportion for regular and dual enrollment grants. In program year 2000, Labor awarded only 4 percent of its funds for disaster grants. In contrast, in program year 2004, Labor awarded about 57 percent of grant funds for disaster grants and 29 percent for regular grants (see fig. 3). For the first 2 quarters of program year 2005, Labor awarded 92 percent of the funds it awarded during those quarters for disaster projects in 11 states—Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, Oklahoma, Tennessee, and Texas—largely in response to damage and dislocations resulting from Hurricane Katrina. During these 2 quarters, Labor awarded about 8 percent for regular grants. Labor's new electronic application system and the streamlined information requirements for national emergency grant applications have, on average, shortened the time it takes to award grants to 25 working days and helped Labor award 70 percent of the grants in program year 2004 within 30 working days, measured from the submission of the application to the issuance of the award letter. However, regular national emergency grants (regular grants) took longer to award—45 working days on average—and most were not awarded within Labor's 30-working-day goal. Moreover, Labor's new system and its stated goal for awarding grants do not take into account important steps in the award process, such as obtaining approval from key Labor officials and issuing the award letter to the grantee. These steps added 11 days on average to the award process; excluding them hampered Labor's ability to accurately evaluate its performance. Further, the excluded steps involve actions that are of great importance from the grantees' perspective—the Secretary's final approval and the award letter notifying them of the amount of money awarded. In addition, some users reported technical problems with the system that have affected its convenience and efficiency. During program years 2000-2002, Labor took 50 working days, on average, to award national emergency grants—as measured from the date an application is submitted until the date an award letter is issued. In program year 2004, Labor reduced its average award processing time to 25 days for all types of grants—decreasing the average processing time for regular grants from 63 to 45 days and for disaster grants from 34 to 16 days. Although Labor averaged 29 working days to award dual enrollment grants in 2004—longer than the 14 days it averaged during program years 2000-2002—most were awarded within its 30-day goal, and these grants comprised less than 10 percent of the grants awarded that year (see fig. 4).
Overall, Labor awarded 70 percent of all grants within 30 working days compared with 38 percent in program years 2000-2002. Also in program year 2004, Labor met this goal for 100 percent of the BRAC grants and for 91 percent of all disaster grants. In contrast, awards for regular grants took longer. Processing time for these grants averaged 45 working days, and Labor awarded only 16 percent of these grants within its 30-working-day goal (see fig. 5). The new electronic system has facilitated improvements in award processing time in three ways. First, because applicants cannot submit an application on this system without completing all required data fields, Labor no longer has to return incomplete applications. Second, because applications are electronic, submissions are nearly instantaneous, and the format allows Labor and applicants to exchange information more efficiently than under the former paper-based system. Third, under the new system, applicants are required to provide only basic information—including project type, planned number of participants, planned support services, and the project operator. Grantees receiving regular grants have 90 days from the grant approval date to submit project operating plans, staffing plans, and budgets. In the case of disaster grants, grantees have 60 days from the grant approval date to submit the required information. Although average processing times decreased in program year 2004, the time to award grants varied widely, ranging from 1 to 90 working days, with some types of grants taking longer than others. Several factors likely contributed to this variance. For example, several disaster grants were processed very quickly—within 1 to 2 days—because of the urgent need for funds in areas impacted by storms and flooding. Also, the 39 BRAC grants were awarded, on average, in only 14 days, reflecting the short period of time that was available to submit and process them. In order to be eligible for BRAC grants, states had to be included on the Department of Defense's preliminary base realignment and closure list, issued on May 13, 2005, and also had to follow Labor's special guidance for these grants, which specified that applications were due by June 10. Because the funds for these grants were reserved from program year 2004 money, Labor had to award them by June 30, the end of that program year. In contrast, questions about the appropriateness of project applications delayed the approval of other grants. For one project we visited, officials reported that approval for an application to address a plant closure took 46 working days (about 2 months), largely because Labor questioned the amount of funds they requested and required them to prepare additional information to justify the costs. In addition, some grantees reported that delays in obtaining funds adversely impacted their ability to provide services, because individuals who needed employment left the affected area to search for work in other places or found other jobs instead of waiting for grant funds to become available. For example, one project we visited was serving only 20 of 50 eligible participants, according to project officers, because workers could not afford to wait for services, left the area, or found other jobs. Labor's award processing times were more consistent across quarters in program year 2004 than in program years 2000-2002.
In program year 2004, the average number of working days that Labor took to award grants ranged from 34 to 41 days for the first 3 quarters and was only 16 days during the fourth quarter. In program years 2000-2002, the number of days to award grants during the first 3 quarters varied more widely—from 61 to 74 days (see table 1). In addition, in program year 2004, the quarter in which an application was awarded corresponded more closely to the quarter in which it was submitted. This is in contrast to program years 2000-2002, when most awards took place in the fourth quarter even though applications were received at a fairly steady rate during the last 3 quarters of the program year. During the first half of program year 2005, Labor awarded grants in 21 working days, on average, and 67 percent were awarded within 30 working days. The overwhelming majority of these grants were in response to Hurricane Katrina, and many were awarded within a few working days. For example, Louisiana submitted its application on September 1, 2005, and Labor awarded the grant 1 day later. The quick approval time for most of the Katrina-related grants reflected the hurricane's severity, the commitment of Labor officials to provide assistance as quickly as possible, and the ability of most grantees to submit streamlined, emergency applications. Under Labor's regulations, grantees may file an abbreviated application to receive emergency funding within 15 days of an event that was declared a disaster by the Federal Emergency Management Agency. Grantees in the states that we visited had limited experience using the new electronic application system to request incremental payments. Moreover, Labor awarded only a relatively small number of increments in program year 2004 and the first half of program year 2005. Approximately 60 percent of grants were awarded in one payment during program year 2004 and about 75 percent during the first 2 quarters of program year 2005. Also, the period we examined was less than a year after most grantees had received their initial award, and, therefore, most had not yet submitted applications for their next increments. Despite improvements in average award timeliness, Labor's goal for awarding new grant applications and its electronic application system exclude important steps in the award process. More specifically, the time needed to obtain the Secretary's approval and issue award notification letters to grantees is not captured by the system and is not counted as part of the 30-working-day goal. Labor's electronic application system captures the time from the application submission date through the date that its Office of National Response (ONR) approves the grant application. However, from the grantees' point of view, the actual process continues until the grant is reviewed and approved by the Secretary and an award letter is issued (see fig. 6). Our prior work also identified this problem with Labor's measurement process. In program year 2004 and the first half of program year 2005, these steps added 11 working days, on average, to award processing times. For example, officials in one state we visited reported that, although they received verbal confirmation that an application was approved, the service provider would not begin services without formal assurance that it would be paid. Consequently, services were delayed for more than 2 weeks until the official award letter was received.
Officials in all of the states we visited told us that, while their experience with the new electronic application system has generally been positive, they have encountered some technical problems. These included minor problems that made the application process more difficult, as well as more serious issues that forced grantees to submit applications outside of the system. Some states had problems resolving technical questions because, for example, the system would not allow users and Labor's technical assistance staff to view the same screen and data simultaneously. Officials in one state described a series of delays they experienced in submitting an application because they could not view a discrepancy in how a zip code had been entered on two different screens. Officials in all four of the states we visited also reported more general problems, including a lack of flexibility when modifying an existing application. For example, officials in three states told us that they had to adjust data to fit the system—in one case, by adding up data on participants and services provided at different service centers and entering it as information from one service center—and that the system did not allow them to report changes in plans accurately. Officials in one of these states told us they could not use the system for one of their applications because several of its required fields, such as the number of participants, did not apply to their application. In addition, officials we interviewed in all four of our site visits reported that Labor has not systematically queried users for feedback on the problems they faced while using the new electronic application system. Several officials felt that minor changes to the system, such as providing more room to explain unusual features of some projects and better directions on how to proceed from one screen to another, would make the system more efficient and easier to use. Labor has taken actions to improve its two sources of information on national emergency grants—the quarterly progress reports and WIASRD. In program year 2004, Labor implemented a new electronic quarterly progress report system and required all grantees to use this system beginning in January 2005. Since Labor took these actions, our analysis suggests that grantees have generally submitted required quarterly progress reports to Labor electronically and have certified and reported the required data elements. In addition, in August 2004, Labor issued guidance that clarified that states were required to include information on national emergency grants as part of their submissions to WIASRD. In that year, 85 percent of all states that received national emergency grants submitted national emergency grant data to WIASRD. Labor's new electronic quarterly progress report system has enhanced its ability to collect, review, and manage quarterly report information. More specifically, the new system requires grantees to submit data electronically, using a standard format in which all data fields are defined. As a result, the system has improved the uniformity and consistency of the progress report data Labor collects compared with our findings from program years 2000-2002. Labor also issued guidance specifying that, beginning January 1, 2005, grantees would be required to submit the progress reports using the new system for all grants awarded after July 1, 2004 (the beginning of program year 2004).
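Much of this improvement comes from the system refusing submissions that leave required data fields empty. The sketch below shows a minimal version of that kind of required-field gate; the field names are hypothetical, since the report describes only the categories of data collected (participant counts, service counts, and expenditures), not the system's actual schema.

```python
# Minimal sketch of a required-field gate like the one the electronic
# quarterly progress report system enforces. Field names are hypothetical;
# the report names only categories of data, not the actual schema.
REQUIRED_FIELDS = [
    "participants_enrolled",
    "services_provided",
    "grantee_expenditures",
    "project_expenditures",
]

def missing_fields(report):
    """Return the required fields that are absent or empty in a report."""
    return [f for f in REQUIRED_FIELDS if report.get(f) in (None, "")]

draft_report = {"participants_enrolled": 120, "services_provided": 85}
missing = missing_fields(draft_report)
if missing:
    # The system would block electronic submission until these are completed.
    print("Cannot submit; missing fields:", ", ".join(missing))
```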
We found that during program year 2004, grantees generally submitted electronic quarterly progress reports as required. By contrast, for program years 2000-2002, we could not provide information on the extent to which grantees provided quarterly progress reports because the reports were not collected on an integrated system and were not available electronically (see table 2). The new progress report system has also improved the completeness of the data that Labor collects. During program year 2004, we found that all grantees that were expected to provide data did in fact complete each of the required data fields. Under the new system, grantees must enter basic information—such as counts of participants, counts of services that are provided, and expenditures at the grantee and project level—before they can submit their reports electronically. By contrast, when we examined reports submitted for program years 2000-2002, we found that the quarterly report data were generally incomplete. For example, of 13 states for which we sampled progress report data, only about half reported the number enrolled in core and intensive services, and just one reported expenditures by type of service (see table 3). After Labor issued its August 2004 guidance on data submission to WIASRD, the level of compliance with this requirement substantially increased. We found that 44 of the 48 states that likely fell under this requirement (92 percent) submitted data as required during program year 2004. Officials in all four states we visited reported that, overall, they did not encounter problems submitting data to WIASRD as required by Labor. In contrast, in program year 2001, only one of the six states that received the largest proportions of national emergency grant funds submitted data to WIASRD (see table 4). Although grantees are complying with the WIA data submission requirements, some questions about the reliability of these data remain. As we reported in November 2005, Labor requires states to validate the data they submit to WIASRD, but it does not have methods in place to review state validation efforts, nor does it hold states accountable for complying with its data requirements. Labor's regional offices oversee each project to track its performance and compliance with basic program rules and requirements, but several state and local officials we interviewed told us that more specific guidance is needed. Regional officials conduct a variety of monitoring activities, including approving program operating plans, reviewing quarterly progress reports, and conducting site visits. However, Labor has not issued complete, program-specific guidance that would standardize monitoring practices across regions, states, and local areas and help ensure consistency. As a result, some states have developed their own monitoring tools. In addition, officials in most of the states and local areas we visited said that Labor does not regularly help disseminate information about how states and local areas are managing and monitoring their national emergency grant projects. To ensure that projects effectively serve dislocated workers, Labor, states, and local areas carry out a variety of monitoring activities throughout the lifecycle of a project to track its progress toward meeting its stated purpose and goals. Labor officials in the four regions we visited told us that they follow the same general monitoring procedures for all grants but tailor them as necessary for high-risk or complex grants.
At the beginning of a grant, and each quarter during its lifecycle, regional officials assess its potential risk level. They also review the project operating plan and analyze quarterly financial and progress reports to assess the reports' timeliness and accuracy and the project's effectiveness in providing services to dislocated workers. Regional officials we interviewed told us they generally conduct their most comprehensive review at the project's midpoint by visiting grantees and project operators. According to Labor's guidance on administering the National Emergency Grant program, a major purpose of the on-site review for incremental grants is to review the need for funds to complete the project. The guidance also states that Labor officials will assess how well a grantee and its project operators are meeting the major requirements of the program. These include participant eligibility, financial management controls, project management, effectiveness of support services, and job placement services. For disaster grants, they also include temporary jobs for dislocated workers. The regional officials reported that they usually meet with state and local workforce officials, including project operators and dislocated workers enrolled in the project, and conduct an exit interview with cognizant officials to discuss their findings. According to officials in the four regions that we interviewed, Labor modifies monitoring and reporting procedures as necessary to ensure that these reviews are appropriate to the special characteristics of some grants. For example, one regional official said they visit projects designated as "high risk" within 90 days after grant award, rather than waiting until the project's midpoint, and work closely with the grantee throughout the project. Another Labor official said they monitor a grantee more closely if they identify potential problems. For example, if a grantee is late in submitting its quarterly progress reports or falls behind in enrolling dislocated workers in a project, the regional office will conduct more extensive monitoring, such as telephoning the grantee or conducting additional site visits, to determine the cause. Labor can also require grantees to submit reports in addition to their regular quarterly progress reports for unusually large or highly visible grants, such as those awarded to serve Hurricane Katrina victims. According to workforce officials in two states that received Katrina grants and a cognizant regional official, they initially had to submit numerous reports with different information on a daily basis, then every 3 days, and then on a weekly, biweekly, and monthly basis. Workforce officials in most of the states and local areas we visited told us that Labor's oversight activities were generally beneficial and that the monitoring activities often provided them with helpful feedback for managing their grants. For example, an official in one local area said that Labor's monitoring led them to strengthen their requirements for maintaining critical documents in participant files. To prepare for federal monitoring, local officials in this area routinely required caseworkers to check a sample of their coworkers' files to ensure that the files were complete and contained sufficient documentation to justify the services provided. Despite the general satisfaction with Labor's monitoring efforts, we found that the guidance states and local areas received varied widely.
In late 2005, Labor issued a draft monitoring guide specifically for national emergency grants, based on its generic Core Monitoring Guide. Officials in three of the four states we visited said that they had received a copy of the draft monitoring guide, but none of the local areas we visited had. In fact, one regional office official we interviewed had not yet received a copy of the guide. Further, state officials told us that they had received different types of information from Labor to help them prepare for Labor's on-site monitoring visits. For example, officials in all four states said that regional officials sent, before their visit, a list of the documents and participant files that they needed to review. An official in one state said that they had not always received written guidance on how to conduct their own monitoring or prepare for Labor's monitoring visits. To compensate for the lack of consistent, complete guidance, all four states we visited had developed their own tools for monitoring local areas, and many of the local areas used their state's tool or a modified version of it to monitor their service providers. For example, one local area official told us they modified the state's tool by adding procedures for reviewing the documents that support a dislocated worker's eligibility to receive services. An official in another state told us that their agency expanded the tool it uses for its 90-day on-site monitoring visits to cover its midpoint reviews by adding a review of the documents in participant files and project costs. Officials in most states and local areas we visited said they do not currently have opportunities to share information about promising practices for managing and monitoring national emergency grant projects, but many expressed an interest in having such opportunities. Workforce officials in one state and six local areas said that having Labor facilitate opportunities for disseminating such information would help project operators manage their projects more efficiently. For example, according to officials in one local area, they had experience operating grant projects that served dislocated workers in the agricultural sector, but not in the manufacturing sector. When faced with a layoff at a computer chip manufacturing plant, they had to take time to research potential job openings and the skills required for jobs in this sector. Having information on how other areas served workers laid off by manufacturing companies would have helped shorten the time they spent developing the project and allowed them to serve workers more quickly. Officials in one state also suggested that Labor could help by creating a central repository of documents used in managing projects, such as examples of agreements used to establish temporary worksites for disaster victims. Although Labor has a Web site for sharing promising practices with the WIA community, Labor has not used this tool to facilitate improved information sharing about national emergency grants. National emergency grants are an important tool for helping states and localities respond to mass layoffs and disasters that result in large numbers of dislocated workers. When major layoffs and disasters such as hurricanes or floods occur, states and local areas must respond quickly to ensure that dislocated workers receive the services they need to re-enter the workforce.
While the National Emergency Grant program is relatively small, the reemployment activities it funds are important for workers who have been dislocated due to mass layoffs or natural disasters. In this regard, it is critical for grant funds to reach program participants in a timely manner. By implementing an automated application system, Labor has, on average, substantially decreased the time required to award national emergency grants. However, because this system does not capture the entire grant process—including the time taken for the Secretary to issue final award letters—there is room for further improvement. Moreover, while the system has improved the timeliness of grant awards, some state and local officials have encountered problems using the system. Effective management and oversight require a mechanism for states and localities to provide feedback to Labor, to ensure that potential system weaknesses are identified and addressed. Effective monitoring is also a critical component of grant management. While Labor's monitoring activities appear to provide reasonable assurance that grant funds are being used for their intended purpose, some state and local officials said that standardized guidance would be beneficial. In particular, once Labor finalizes its monitoring guide for grants, state and local officials responsible for grant administration and oversight could benefit from more consistent, specific federal guidance. Moreover, state and local officials could benefit from innovative project management practices that have promoted efficiency and effectiveness in other states where grant funds have been awarded. However, without disseminating such information through a centralized mechanism, it is difficult for state and local officials to learn of promising practices in other jurisdictions and use this information early in the planning process. In order for Labor to better manage the grant award process and system, accurately assess the time it takes to award grant funds, and improve its guidance to states and local areas, we recommend that the Secretary of Labor take additional actions. In particular, Labor should extend its electronic application system and its own timeliness measurement process to capture the entirety of the award process from the perspective of grant applicants, specifically through final approval and issuance of award letters by the Secretary; solicit information from users of the application system to guide future refinements to this system; distribute more complete guidance and tools for monitoring grant projects; and explore cost-effective ways to disseminate information to states and local areas to help them learn about promising practices for managing national emergency grant projects. The Department of Labor commented on a draft of this report, indicating that it agrees with our findings and the intent of all four recommendations (see app. III). Labor's comments also highlighted some actions that it has already taken or plans to take. Labor reported that it has recently implemented a new version of its electronic application system that has expanded its capacity to manage all elements of the application process. However, Labor did not directly address our recommendation that the system be expanded to capture the entirety of the award process, including final approval and issuance of the award letters by the Secretary.
In addition, Labor agreed that information from users is needed to guide future refinements to the system but noted that a survey of all users might require a formal paperwork clearance process and, therefore, would provide less timely information than its present system involving user tests with selected grantees. While we agree that information from user tests is useful, we believe feedback from all grantees would better inform future enhancements. Regarding our recommendation that it distribute more complete guidance and monitoring tools, Labor explained that it is currently field-testing a monitoring guide for national emergency grant projects and plans to release this guide by September 2006. We believe such a guide could be an important step toward establishing consistent monitoring practices. Also, Labor concurred with our recommendation that it explore cost-effective ways to disseminate information to states and local areas to help them learn about promising practices for managing national emergency grant projects. In particular, Labor noted that it has relied upon venues such as national conferences and forums to facilitate the sharing of information among grantees. Labor did not provide technical comments on the draft. We are sending copies of this report to the Secretary of Labor and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's home page at http://www.gao.gov. Please contact me at 202-512-7215 or at [email protected] if you or members of your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) determine whether Labor has shortened grant award times since our 2004 report and has been able to meet its own goal of 30 working days for awarding grants, (2) examine the uniformity of the program data that Labor is currently collecting, and (3) assess Labor's monitoring and oversight of national emergency grant projects. To examine how long it takes Labor to award national emergency grants and determine whether Labor is meeting its 30-working-day timeliness goal, we obtained a listing from Labor of all grants awarded during program year 2004 and the first 2 quarters of program year 2005. We selected this time period because Labor implemented its new electronic application system and streamlined application data requirements at the beginning of program year 2004. We computed (1) the number of working days between the date of the original grant application and the date of the award letter to determine overall grant award times and award times by type of grant and (2) the percentage of grants that were awarded within Labor's timeliness goal of 30 working days. We supplemented data from Labor's electronic database with data from its hard copy grant files, including information contained in the award letters for all grants awarded during program year 2004, because the application system did not contain data for all steps in the award process. We excluded two grants because they were not submitted electronically. In order to compare the award processing times for program year 2004 with program years 2000-2002, we converted calendar days to working days because Labor's current goal is expressed in working days.
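The calendar-to-working-day conversion described above amounts to counting weekdays between the application date and the award-letter date. The sketch below shows one way to do that count; it ignores federal holidays, since the report does not say how holidays were treated in the conversion.

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count weekdays (Monday-Friday) from start, exclusive, to end,
    inclusive. Federal holidays are not excluded; the report does not
    specify how holidays were treated."""
    count = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            count += 1
    return count

# Louisiana's Katrina grant: application submitted September 1, 2005,
# and awarded 1 day later.
print(working_days(date(2005, 9, 1), date(2005, 9, 2)))  # 1
```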
To determine the degree to which grantees submitted quarterly progress reports with the required data elements, we analyzed the extent to which grantees submitted quarterly progress reports by quarter and the extent to which they completed required data fields during program year 2004. We eliminated the BRAC planning grants from these analyses because quarterly report data were designed to capture information on participants and services, not planning activities. We compared the completeness of data submitted during program year 2004 with the completeness of data submitted during program years 2000-2002. To assess the extent to which grantees complied with requirements to submit data to the WIASRD database, we identified states that received national emergency grants in program years 2002, 2003, and 2004 and, therefore, were likely to have participants who left projects in program year 2004. We examined the WIASRD database to see if it contained program year 2004 data for these states. We compared the percentage of grantees that provided national emergency grant data to the database in 2004 with the percentage that provided data in 2000, based on the sample of grantees that were selected for our previous analysis in 2004. To assess the reliability of data about award processing times, we interviewed officials responsible for compiling these data. We verified the accuracy of the application dates that Labor gave us by comparing them with dates on the actual applications and dates on the electronic application system. Also, we drew a 10-percent random sample of all grants awarded in program year 2004 and the first 2 quarters of 2005 and verified information in the electronic system with information in the official hard copy grant files. To assess the reliability of information in the electronic quarterly progress report system, we examined materials related to data entry and examined the completeness of data submissions. Also, we interviewed state and local officials regarding their data collection procedures and verification processes. We determined the data were sufficiently reliable for the purposes of our report. We interviewed officials in the Office of National Response and the Office of Grant and Contract Management in Labor's Employment and Training Administration to obtain information on application processing, program policies, and grants management. We also interviewed key staff in the Office of Field Operations, which is in charge of monitoring and oversight, and officials in the four regional offices where we conducted site visits, to obtain information on data reporting, oversight requirements, and monitoring procedures. In addition, we interviewed officials representing Labor's contractor to obtain technical information on the electronic application system. To learn more about the application system, data requirements, and oversight from the grantees', service providers', and dislocated workers' points of view, we conducted site visits to four states—Florida, Maine, Oregon, and Texas. We selected these states because they each received a substantial amount of national emergency grant funding and, together, represented different geographical regions and had received a diversified mix of regular, BRAC, disaster, and dual enrollment grants. (See table 5.) On these site visits, we conducted in-depth interviews with state workforce officials, representatives of local workforce investment boards, and service providers.
In addition, we visited four work sites that provided temporary employment to individuals who had lost their jobs as a result of Hurricanes Katrina and Wilma. Our work was conducted between September 2005 and July 2006 in accordance with generally accepted government auditing standards. Jeremy D. Cox, Assistant Director; Kathleen D. White, Analyst-in-Charge; Carolyn S. Blocker; and Daniel C. Cain served as team members and made major contributions to all aspects of this report. In addition, Catherine Hurley and Jean McSween advised on methodological and analytic aspects of this report; Susan Bernstein advised on report preparation; Jessica Botsford advised on legal issues; Yunsian Tai helped conduct data analyses; Robert Alarapon provided graphic design assistance; and Katharine Leavitt verified our findings.
Trade Adjustment Assistance: Labor Should Take Action to Ensure Performance Data Are Complete, Accurate, and Accessible. GAO-06-496. Washington, D.C.: April 25, 2006.
Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006.
Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Unemployment Insurance: Better Data Needed to Assess Reemployment Services to Claimants. GAO-05-413. Washington, D.C.: June 24, 2005.
Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004.
National Emergency Grants: Services to Workers Hampered by Delays in Grant Awards, but Labor Is Initiating Actions to Improve Grant Award Process. GAO-04-222. Washington, D.C.: November 14, 2003.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Trade Adjustment Assistance: Experiences of Six Trade-Impacted Communities. GAO-01-838. Washington, D.C.: August 24, 2001.
Trade Adjustment Assistance: Trends, Outcomes, and Management Issues in Dislocated Worker Programs. GAO-01-59. Washington, D.C.: October 13, 2000.
Between January 2004 and December 2005, more than 30,000 mass layoffs involving 50 or more workers occurred in the United States, causing more than 3.4 million workers to lose their jobs. National emergency grants expand services to laid-off workers when other state and federal programs are insufficient to meet their needs. GAO assessed (1) whether Labor has shortened grant award times since GAO's 2004 report and was meeting its own timeliness goal, (2) the uniformity of the program data that Labor now collects, and (3) Labor's oversight of national emergency grant projects. To address these objectives, GAO analyzed information for program year 2004 and the first 2 quarters of 2005 and compared it with data collected for program years 2000-2002. We found that Labor's new electronic application system has, on average, shortened award processing time, and most national emergency grants were awarded within Labor's goal of 30 working days as measured by GAO—from the time the application is submitted to the issuance of the award letter. In program year 2004, Labor averaged 25 working days to award grants, in contrast to program years 2000-2002, when it averaged 50 working days. Moreover, in program year 2004, Labor awarded 70 percent of all grants within 30 working days, in contrast to 38 percent for program years 2000-2002. Although Labor has improved the overall timeliness for awards, award times ranged from 1 to 90 working days and varied by type of grant. For example, disaster grants were awarded, on average, in 16 days, but regular grants were awarded, on average, in 45 days. Delays in obtaining funds adversely impacted some grantees' ability to provide services. Also, we found that Labor's electronic application system and its timeliness goal did not capture every phase of the award process. In addition, users of this system reported some technical problems. Labor has taken steps to improve its two main sources of data for assessing how grant funds are used—the quarterly progress reports and the Workforce Investment Act Standardized Record Data (WIASRD) database. Labor introduced a new electronic quarterly report system in program year 2004. Since then, grantees have generally been submitting uniform and consistent information. Also, our review of available WIASRD data for program year 2004 shows that at least 92 percent of states that received national emergency grants included information on these grants in their WIASRD submissions. Labor's regional offices oversee each project to track performance and compliance with program requirements by conducting various monitoring activities, including approving program operating plans, reviewing quarterly progress reports, and conducting site visits. However, Labor has not issued complete, program-specific guidance that would standardize monitoring practices across regions, states, and local areas and help ensure consistent practices. In addition, officials in most of the states and local areas we visited said that Labor does not regularly help disseminate information about how states and local areas are managing their national emergency grant projects.
NOAA’s Office of Coast Survey provides navigational services intended to ensure the safe and efficient passage of maritime commerce through oceans and coastal waters within U.S. jurisdiction and in the Great Lakes. In this capacity, the Office of Coast Survey develops, updates, and maintains more than 1,000 nautical charts—maps used for navigating waterways—containing information about water depth, the shape of the water body floor and coastline, the location of possible obstructions, and other physical features within these water bodies. According to NOAA documentation, nautical charts provide information critical to safe navigation, such as symbols that inform ship captains or recreational boaters if an area is shallow or has dangerous conditions that could imperil navigation. Hydrography is the science that informs the surveying methods for collecting the data used to create and update nautical charts. In addition, information collected through hydrographic surveying supports a variety of maritime functions such as port and harbor maintenance, beach erosion and replenishment studies, management of coastal areas, and offshore resource development. NOAA operates four ships that predominantly support hydrographic surveys: the Fairweather, Ferdinand R. Hassler, Rainier, and Thomas Jefferson (see fig. 1). The Hassler, commissioned in 2012, is the newest of the four vessels. NOAA also procures and oversees hydrographic surveying and related services from the private sector. NOAA officials said the congressional committee reports accompanying NOAA’s appropriations acts for fiscal years 2007 through 2016 provided about $342 million of the agency’s appropriation for the Hydrographic Survey Priorities/Contracts budget line item. The most recent contracts were awarded in June 2014 to eight hydrographic survey companies for a 5-year period and, according to NOAA documents, are valued at up to $250 million over this contract period. In addition, according to NOAA officials, NOAA works with other federal agencies to collect hydrographic survey data. For example, the U.S. Army Corps of Engineers provides such data for the federal harbor waterways that support the U.S. port system. NOAA primarily uses two kinds of sonar for hydrographic surveying—multibeam and side scan. Multibeam sonar measures the depth of the water by analyzing the time it takes sound waves to travel from a vessel to the bottom of the water body and back, and it provides detailed information about the water body floor. Multibeam sonar is generally used in areas such as the northeast United States and Alaska, where the water body floor is complex and often strewn with rocks. See figure 2 for an illustration of a NOAA ship using multibeam sonar. In contrast, in relatively shallow flat areas like those along the mid-Atlantic coast, NOAA uses side scan sonar. Side scan sonar creates an image of the water body floor but does not determine depths. If NOAA finds a shipwreck or obstruction using side scan sonar, it will determine its depth using multibeam sonar. See figure 3 for an illustration of a NOAA ship using side scan sonar. NOAA’s National Ocean Service is responsible for providing data, tools, and services that support mapping, charting, and maritime transportation activities, among other things. Within the National Ocean Service, the Office of Coast Survey directs the agency’s hydrographic surveying operations.
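The multibeam measurement described above reduces to two-way travel time arithmetic: depth is half the product of sound speed and the round-trip time of the ping. The sketch below shows that calculation; the 1,500 meters per second figure is a common nominal sound speed for seawater and is an assumption here, and operational surveys apply sound speed profiles, beam angles, and tide corrections that this sketch omits.

```python
def multibeam_depth(two_way_travel_time_s, sound_speed_m_per_s=1500.0):
    """Depth implied by a multibeam sonar ping: the pulse travels to the
    seafloor and back, so depth is half of (sound speed x travel time).
    1,500 m/s is a nominal seawater value assumed for illustration; real
    surveys correct for the water column's sound speed profile, beam
    geometry, and tides."""
    return sound_speed_m_per_s * two_way_travel_time_s / 2.0

# A ping that returns after 0.04 seconds implies roughly 30 meters of water.
print(multibeam_depth(0.04))  # 30.0
```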
In particular, the Office of Coast Survey develops survey specifications, evaluates new technologies, and implements procedures for acquiring hydrographic survey data, processing the data, and producing nautical charts. Within the Office of Coast Survey, the Hydrographic Surveys Division is responsible for planning, managing, and supporting hydrographic surveying operations. This includes compiling, verifying, and certifying hydrographic data, as well as determining hydrographic survey priorities and issuing an annual hydrographic survey prioritization report. The Hydrographic Surveys Division coordinates with NOAA’s Office of Marine and Aviation Operations to plan and schedule NOAA vessels for hydrographic surveying. The Office of Marine and Aviation Operations manages, operates, and maintains NOAA’s fleet of 16 ships, including the 4 ships that predominantly support hydrographic surveying. According to NOAA officials, during fiscal years 2007 through 2016, NOAA expended about $303 million for its in-house hydrographic survey program. The Hydrographic Surveys Division also works with the Hydrographic Services Review Panel, an external committee that advises NOAA on matters related to hydrographic services, including surveying. The review panel, which was required by the Hydrographic Services Improvement Act Amendments of 2002, is composed of 15 voting members appointed by the NOAA Administrator as well as several NOAA employees who are nonvoting members. Voting members must be especially qualified in one or more disciplines relating to hydrographic data and services, vessel pilotage, port administration, coastal management, fisheries management, marine transportation, and other disciplines as determined appropriate by the NOAA Administrator. The NOAA Administrator is required to solicit nominations for panel membership at least once a year; voting members serve a 4-year term and may be appointed to one additional term. The Director of the Office of Coast Survey serves as the designated federal officer. NOAA’s standards for hydrographic surveying are contained in a technical specifications document known as the Hydrographic Surveys Specifications and Deliverables. The document is updated annually by NOAA hydrographers and, according to NOAA officials, is also the standard on which many other hydrographic survey entities base their hydrographic surveying requirements. In addition, NOAA maintains a quality assurance program for all hydrographic survey data submitted by the private sector and NOAA hydrographers. The quality assurance program includes three main review procedures intended to ensure that hydrographic data submitted to NOAA meet quality standards: the Rapid Survey Assessment, Survey Acceptance Review, and Final Survey Review. See appendix I for additional information about NOAA’s data quality standards and review process. NOAA uses a three-step process to determine its hydrographic survey priorities. In addition, in an effort to improve its priority setting, NOAA is developing a model to better assess hydrographic risks to ships. According to NOAA’s standard operating procedure and NOAA officials, NOAA uses a three-step process to determine its hydrographic survey priorities. Under this process, NOAA (1) identifies the areas in greatest need of surveying, (2) evaluates resources, including funding and vessel availability, and (3) develops an annual hydrographic surveying plan, which identifies the resulting hydrographic survey priorities.
The plan specifies the locations, vessels, and schedules for NOAA hydrographic survey projects and the locations and time frames for private sector hydrographic survey projects. (See fig. 4.) NOAA first identifies the areas the agency considers to be in the greatest need of a hydrographic survey, using an approach it developed in 1994 called NOAA Hydrographic Survey Priorities, according to NOAA’s standard operating procedure and NOAA officials. NOAA identifies areas of “navigational significance” based on depth, draft of ships, and potential for dangers to marine navigation. NOAA then determines which of these navigationally significant areas are in greatest need of surveying by considering (1) shipping tonnage and trends, (2) age and quality of surveys in the area, (3) seafloor depth, (4) potential for unknown dangers to navigation due to environmental or human influences, and (5) requests for surveys from stakeholders such as pilot associations and the U.S. Coast Guard, and requests received through NOAA’s regional navigation managers. Through this process, NOAA designates high-priority areas in any of four categories:
Critical areas. Areas that NOAA identified in 1994 as experiencing such circumstances as high shipping traffic or hazardous material transport or having a significant number of survey requests from users.
Emerging critical areas. Areas in the Gulf of Mexico and Alaska that NOAA identified after 1994 that met the critical area definition but that NOAA chose to designate in a separate category from the 1994 critical areas for tracking purposes.
Resurvey areas. Areas that NOAA identified as requiring recurring surveys because of changes to seafloors, use by vessel traffic, or other reasons.
Priority 1-5 areas. Areas that do not fall into any of the three categories above, subdivided into five priority areas based on the date of the most recent survey and the level of usage by vessels.
Until 2012, according to NOAA’s standard operating procedure, NOAA used the results of its approach for identifying areas most in need of surveying to publish annual hydrographic survey prioritization reports—a component of the overall hydrographic surveying plan. However, NOAA officials said they found this approach increasingly outdated because it did not reflect changing ocean and shipping conditions or take advantage of available technology. These officials said they are in the process of developing a new methodology (described later in this report) to help identify areas that need surveys. According to NOAA officials, they have continued to update computerized mapping files and reports related to hydrographic survey priorities since 2012 but have not published new hydrographic survey prioritization reports. However, these officials said they will provide information to the public upon request. According to NOAA’s standard operating procedure and NOAA officials, once NOAA identifies its highest-priority areas, the agency compares its priorities to those identified by external stakeholders through NOAA’s navigation managers and its Integrated Ocean and Coastal Mapping program. NOAA officials said this input helps them understand potential economic and safety issues, among other things, that may affect hydrographic survey priorities. NOAA officials said they look to find areas of intersection between areas identified through the NOAA Hydrographic Survey Priorities process and those compiled by NOAA’s navigation managers and external stakeholders.
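The report does not publish how NOAA weighs these considerations against one another, so the sketch below is purely illustrative: it scores each area as a weighted sum of the five factors listed above, with every weight and score an assumption chosen only for the example.

```python
# Illustrative only: the five factors track the considerations listed above,
# but NOAA's actual weights and scoring method are not described in this
# report, so all weights and scores below are assumptions.
WEIGHTS = {
    "shipping_tonnage_and_trends": 0.30,
    "age_and_quality_of_surveys": 0.25,
    "seafloor_depth": 0.15,
    "potential_unknown_dangers": 0.20,
    "stakeholder_requests": 0.10,
}

def priority_score(factors):
    """Weighted sum of factor scores (each on a 0-1 scale) for one area."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

areas = {
    "Area A": {"shipping_tonnage_and_trends": 0.9, "age_and_quality_of_surveys": 0.8,
               "seafloor_depth": 0.7, "potential_unknown_dangers": 0.4,
               "stakeholder_requests": 1.0},
    "Area B": {"shipping_tonnage_and_trends": 0.3, "age_and_quality_of_surveys": 0.9,
               "seafloor_depth": 0.2, "potential_unknown_dangers": 0.6,
               "stakeholder_requests": 0.0},
}
# Print areas highest-scoring first, as a ranking of survey need.
for name in sorted(areas, key=lambda a: priority_score(areas[a]), reverse=True):
    print(f"{name}: {priority_score(areas[name]):.2f}")
```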
NOAA’s standard operating procedure states that when determining which areas to survey, NOAA generally gives precedence to survey areas identified through the NOAA Hydrographic Survey Priorities process, but stakeholder input may shape survey priorities in unusual cases, such as when hurricane-related requests indicate the need for an immediate resurvey. According to NOAA’s standard operating procedure and NOAA officials, NOAA estimates the amount of funds it expects to be available to conduct surveys and develops a preliminary survey plan that seeks to maximize in-house and contractor resources. Once funds are appropriated, NOAA modifies its preliminary plan to reflect the amounts available for NOAA fleet operations and survey contracting. NOAA also evaluates survey requirements and in-house and contractor ship availability and capability. As NOAA obligates funds for in-house surveys and for contracts, it refines and finalizes the actual amount of surveying to be conducted by both in-house and contractor hydrographers. According to NOAA’s standard operating procedure and NOAA officials, based on an evaluation of the identified hydrographic survey needs, available funding, and vessel availability and capability, NOAA develops a hydrographic surveying plan for the coming year. NOAA evaluates the mix of available NOAA and private sector vessels to meet the highest-ranked survey needs with available funding. NOAA also engages offices within NOAA to coordinate hydrographic survey ship schedules to accommodate other agency projects and plans. For example, NOAA officials said they may use hydrographic survey ships to accommodate the testing of new types of equipment, such as unmanned surface vehicles. Once the surveying plan is developed, it is submitted to the Chief of the Hydrographic Surveys Division for approval, according to NOAA’s standard operating procedure. When we began our review, NOAA officials told us they did not have written procedures documenting how the Hydrographic Surveys Division is to develop its annual hydrographic surveying plan. In response to our review, NOAA issued a standard operating procedure in September 2016 documenting how the division is to develop the plan. NOAA is developing a model intended to better assess hydrographic risks as part of its effort to identify areas most in need of hydrographic surveys—the first step in NOAA’s process for creating the hydrographic surveying plan. According to NOAA officials, the model is aimed at addressing several limitations they found with the agency’s existing approach for identifying areas most in need of surveys. For example, they said the existing approach does not account for such changes as:
the emergence of new ports and subsequent changes in waterway traffic patterns;
seafloor changes from weather and oceanic processes, and the resulting need for some areas to be surveyed more often than others; and
sizes and capabilities of ships, with many of them having deeper drafts since NOAA developed its plan in 1994.
In addition, NOAA officials noted that the existing approach has focused on large container ships and oil tankers and not the many smaller vessels (e.g., fishing vessels and recreational boats) that also rely on NOAA hydrographic survey data to navigate safely. According to NOAA documents, the new model—which NOAA refers to as a “hydrographic health” model—will help NOAA identify survey needs by taking advantage of new technologies and more precise information about weather and oceanic processes.
For example, agency officials said that with the advent of a Global Positioning System-based technology known as the Automatic Identification System, NOAA has data on the actual paths of vessels equipped with this technology, including when and where vessels have traveled as well as their length, width, and draft. The new model also analyzes information that is similar to what NOAA currently uses, such as (1) areas of shallow seafloor depth, (2) unsurveyed areas, (3) known or reported discrepancies on the nautical chart for an area, (4) reported accidents, (5) stakeholder requests, and (6) established national priorities. NOAA officials said they completed a test of the new hydrographic health model in 2016 for coastal waters in the southeastern United States—including coastal Alabama, Florida, and Georgia—and solicited feedback on the model from internal stakeholders. NOAA also presented the model at an international hydrographic conference in May 2016 and began using the model in the second quarter of fiscal year 2017. NOAA officials said the agency is preparing to submit a paper describing this model to an international hydrographic journal for peer review in the second quarter of fiscal year 2018. NOAA officials said they will incorporate the peer review feedback into the model in the third quarter of fiscal year 2018. NOAA also plans to release periodic reports describing the state of the hydrographic health of the nation’s waters after the model is fully implemented, according to the standard operating procedure. NOAA prepares an annual report that compares the cost of collecting its own hydrographic survey data to the cost of procuring such data from the private sector. According to NOAA’s standard operating procedure for conducting this cost analysis, the purpose of the analysis is to track and report the full cost of the hydrographic survey program, detailing costs for all activities that directly or indirectly contribute to the program. Specifically, NOAA’s standard operating procedure for preparing the annual cost comparison report states that the report should include, by fiscal year, all costs that directly or indirectly contribute to conducting hydrographic surveys, regardless of funding sources. According to NOAA’s standard operating procedure, to create the report, NOAA annually obtains data on survey costs for the previous fiscal year from the various NOAA offices involved in collecting hydrographic survey data. These offices collect cost data from staffing and financial data systems and enter the information into a spreadsheet, according to NOAA officials and NOAA’s standard operating procedure. NOAA documentation indicates these data include direct costs NOAA incurs to collect hydrographic data using its own ships; these direct costs include equipment and maintenance, labor, and fuel. In addition, according to NOAA officials and NOAA’s standard operating procedure, NOAA obtains data on indirect costs, such as administrative costs apportioned to the hydrographic survey program and amounts paid to the private sector for conducting surveys. In 2005, NOAA began reporting hydrographic survey costs in an annual cost comparison report in response to a 2003 recommendation from the Department of Commerce Office of Inspector General that NOAA track and report the full costs of its survey program.
In addition, in 2005, the Hydrographic Services Review Panel recommended that NOAA use actual costs rather than estimates and “reasonably follow” Office of Management and Budget Circular A-76 guidelines to calculate the cost comparison; these guidelines state, among other things, that capital assets should be depreciated in cost estimates. Based on our review of NOAA’s cost comparison reports for fiscal years 2006 through 2016, NOAA did not in all instances report complete or accurate cost data for its hydrographic survey program. Specifically, NOAA did not include the complete cost of the hydrographic survey program for the following activities:

Vessel acquisition. NOAA did not include the 2012 acquisition cost of a NOAA survey vessel (the Hassler) in its cost comparison reports from fiscal years 2012 through 2016. According to NOAA documentation, this vessel cost $24.3 million, and NOAA officials agreed that they should include the acquisition cost of NOAA vessels in cost comparison reports and that such costs should be depreciated. NOAA officials said they have not included such costs in annual cost comparison reports because depreciation costs are tracked in NOAA’s property management system but not in NOAA’s budget tracking system. These officials said they are uncertain whether these two systems can be linked because they are separate databases managed by different NOAA offices.

Major vessel maintenance. NOAA did not include the cost of major maintenance performed in 2010 on the hydrographic survey vessel Rainier in its cost comparison reports from fiscal years 2010 through 2016. According to NOAA officials, the agency spent $13.7 million in support of maintenance for the Rainier. NOAA officials acknowledged that such costs should be reflected in NOAA’s cost comparison reports and that such costs should be depreciated. NOAA officials explained that they allocate annual maintenance and repair costs associated with the hydrographic survey program according to the number of days a ship is at sea conducting surveys. In this case, they said because the Rainier was in port the entire year undergoing repairs, they did not include these capital improvement costs in the cost comparison report.

Contract administration for private sector hydrographers. NOAA did not include in its cost comparison reports for fiscal years 2006 through 2016 contract administration costs for managing private sector hydrographers working under contract to the agency. NOAA’s standard operating procedure for conducting the annual cost analysis specifies that the agency should include the costs associated with contract management and monitoring. NOAA officials said these costs were not included in the reports in part because they did not have the software to track contract administration costs. NOAA officials acknowledged that they should include such costs in the cost comparison report.

In addition to incomplete costs for some activities, we noted that NOAA did not accurately report certain costs of the hydrographic survey program in the year to which those costs should be assigned.

Equipment, repair, and maintenance costs. NOAA includes equipment, repair, and maintenance costs in the hydrographic survey cost comparison report for the year in which such costs are reported in NOAA’s financial system. However, as with major vessel maintenance costs previously discussed, NOAA officials acknowledged that these costs should be depreciated.
As a result of this practice, NOAA’s hydrographic survey costs may appear artificially high during years in which NOAA incurs large equipment, repair, and maintenance costs. NOAA officials said they recognize that reporting equipment, repair, and maintenance costs in the year they are incurred does not accurately represent agency costs.

Cost and performance data for survey work conducted by the private sector. NOAA does not track cost data in a way that allows the agency to link the cost for private sector surveys to the amount of survey work conducted. For example, in the cost comparison report for fiscal year 2014, NOAA included funds that were obligated for two contractors to conduct survey work, but the report showed that these contractors did not survey any nautical miles during that year. NOAA officials explained that they obligated funds in fiscal year 2014 to pay for the contract survey work, but the contractors did not begin the work until fiscal year 2015. These officials stated that they record contractor costs in the year in which the obligation occurs, and they record the miles surveyed in the year in which the surveying occurs. However, the 2014 cost per square nautical mile may appear artificially high because costs were recorded without including corresponding mileage surveyed. In contrast, the 2015 cost per square nautical mile may appear artificially low because survey miles were recorded, but the costs for conducting those surveys were not included in the 2015 report. NOAA officials acknowledged that their current method for tracking contractor costs and work performed needs improvement. They explained that the data inaccuracies arise in part from NOAA’s current process for tracking contractor cost and performance through manual entry of data into multiple spreadsheets.

Furthermore, we found that NOAA uses a single measure—cost per square nautical mile surveyed—to compare its own survey costs to those of its contractors. However, in 2005, the Hydrographic Services Review Panel concluded that a single cost measure, such as the cost per square nautical mile, should not be used as the primary factor to determine the relative cost-effectiveness of NOAA and private sector efforts to collect hydrographic data. The panel recommended that NOAA consider a wider variety of measures to help provide additional insight. NOAA officials acknowledged that the cost per square nautical mile was not a comprehensive measure of cost-effectiveness and that having additional measures would improve the accuracy of cost comparisons to account for factors such as region and water depth.

As a result of the concerns we identified, during our review, NOAA officials began identifying actions they would take to improve NOAA’s cost data. In some instances, officials identified specific steps and associated time frames to carry out these actions. For example, NOAA officials said they started using new project management software in fiscal year 2017 to help track contract administration costs for inclusion in future cost comparison reports. In addition, to allow NOAA to better link the costs for private sector surveys to the amount of survey work conducted, NOAA officials said they plan to develop a new database by March 2018; this database would help eliminate the need for manual data entry and allow NOAA to track survey cost and performance data for various time frames and regions.
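The reporting issues described above lend themselves to a brief numeric illustration. The sketch below is hypothetical: the $24.3 million purchase price mirrors the Hassler figure, but the other annual costs, mileage totals, contract amounts, and the 10-year straight-line service life are assumptions made for the example, since neither NOAA’s standard operating procedure nor the A-76 guidelines, as described here, specify a particular depreciation schedule.

```python
# Illustration of two distortions discussed above. All figures except the
# $24.3 million vessel price are hypothetical, and the 10-year straight-line
# depreciation schedule is an assumed service life.

VESSEL_COST = 24_300_000      # one-time acquisition cost (dollars)
SERVICE_LIFE_YEARS = 10       # assumed straight-line service life

# In-house program by year: (other annual costs, square nautical miles surveyed)
in_house = {2012: (30_000_000, 2_000), 2013: (30_000_000, 2_000)}

for year, (costs, miles) in in_house.items():
    expensed = costs + (VESSEL_COST if year == 2012 else 0)
    depreciated = costs + VESSEL_COST / SERVICE_LIFE_YEARS
    print(f"{year}: expensed ${expensed / miles:,.0f}/sq nmi, "
          f"depreciated ${depreciated / miles:,.0f}/sq nmi")

# Contract surveys: funds obligated in fiscal year 2014, miles surveyed in 2015.
contract_costs = {2014: 6_000_000, 2015: 0}   # dollars, by year of obligation
contract_miles = {2014: 0, 2015: 1_500}       # sq nmi, by year work performed

# Dividing within each year is misleading (2014 has costs but no miles; 2015
# has miles but no costs); matching dollars to the work they paid for is not.
matched = sum(contract_costs.values()) / sum(contract_miles.values())
print(f"matched contract cost: ${matched:,.0f}/sq nmi")
```

Under the depreciated and matched calculations, the reported cost per square nautical mile tracks surveying activity; under the as-reported approach, it swings with accounting timing, overstating costs in the purchase and obligation years and understating them afterward.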
To improve NOAA’s ability to compare its own survey costs to those of contractors, NOAA officials said they were in the process of developing additional survey measures beyond cost per square nautical mile that could include a new “survey complexity rating” designed to account for factors such as region and water depth. Officials said they expect to have these additional measures in place by October 2018. However, NOAA officials could not yet identify the steps or associated time frames for carrying out other actions to improve the completeness and accuracy of cost data. For example, to help improve NOAA’s process for tracking depreciation costs of capital assets—such as vessel acquisition or equipment, repair, and maintenance—NOAA officials said they planned to implement an improved process in fiscal year 2019 but did not identify the specific steps to implement this process. In addition, to account for ships that are in port undergoing major maintenance, NOAA officials said they plan to develop a tracking system to help ensure such maintenance costs are included in NOAA’s cost comparison reports, but they did not provide additional specific details or identify when they intend to implement such a system. For these recently identified actions, NOAA officials explained that it was uncertain how NOAA would proceed because identifying and implementing certain steps requires the coordination of multiple offices within NOAA such as the Office of Coast Survey, Office of Marine and Aviation Operations, and Office of the Chief Administrative Officer. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its hydrographic survey program, as called for in the agency’s standard operating procedure. NOAA has taken steps aimed at increasing private sector involvement in its hydrographic data collection program, such as streamlining its contracting process and increasing communication with contractors. However, NOAA has not developed a strategy for expanding its use of the private sector as required by a 2009 law. According to NOAA officials, NOAA has taken several steps to increase private sector involvement in its hydrographic data collection program. For example, NOAA developed a centralized process for competing and awarding contracts in 2003, which NOAA officials said reduced administrative costs and contract award time. Before this change, NOAA awarded contracts to individual contractors at the regional level, which required expending resources to process each individual contract. As a result of implementing a centralized process for competing and awarding contracts, NOAA officials said they increased the number of private sector firms under contract, from five during the 2003-2008 contract period to eight during the current 2014-2019 contract period. However, NOAA officials said they have not awarded task orders for surveys to all eight private sector firms in the same fiscal year because of NOAA’s appropriation, which has remained mostly flat during the current contract period. NOAA also took steps to increase communication with contractors, according to NOAA officials. 
For example, starting in 2005, NOAA has invited hydrographic survey contractors to its annual field procedures workshop, which brings together officials from NOAA’s headquarters, field offices, and quality assurance processing branches, among others. The purpose of the workshop is to discuss updates to hydrographic survey requirements and new hydrographic survey technologies. Also, since 2005, according to NOAA officials, contracting officer representatives have improved their communication with contractors through the various stages of the contract and survey activities by answering contractors’ questions regarding project requirements, expected deliverables, data processing, and unanticipated challenges that may occur when conducting surveys. In addition, NOAA officials said that in 2010, the agency implemented procedures for obtaining contractor input on changes to its hydrographic survey technical specifications document, the Hydrographic Surveys Specifications and Deliverables. The document is updated annually, and contractors are asked to provide input through their respective contracting officer representatives. NOAA staff review the input to determine whether to include the recommended action in the annual technical specifications update. According to NOAA officials, participants discuss recommended changes at meetings held during the annual field procedures workshop.

NOAA has not developed a strategy for expanding its use of the private sector in its hydrographic survey data collection program, as required by law. Specifically, the Ocean and Coastal Mapping Integration Act required the NOAA Administrator to transmit a report to relevant congressional committees by July 28, 2009, that described the agency’s strategy for expanding contracting with the private sector to minimize duplication and take maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. NOAA officials could not provide us any documentation indicating what information the agency provided to Congress in response to this statutory requirement. In 2010, NOAA issued its Ocean and Coastal Mapping and Contracting Policy, which states that the policy was developed in response to the act. However, rather than describing a strategy for expanding contracting with the private sector, as required by the 2009 law, the policy states that it is NOAA’s intent to contract with the private sector for ocean and coastal mapping services when the agency determines it is cost-effective to do so and funds are available. NOAA officials acknowledged that the contracting policy does not meet the statutory requirement that the agency develop a strategy for expanding contracting with the private sector.

NOAA officials said the agency is limited in its ability to expand private sector contracting because of congressional direction on the use of the agency’s appropriations. Specifically, NOAA’s hydrographic survey program is supported by two separate funding elements, known as “Programs, Projects, and Activities” (PPA), within NOAA’s Operations, Research, and Facilities appropriation account. One PPA is for private sector hydrographic data collection, and the other is for general operations, maintenance, and repair of NOAA’s entire fleet of ships, including the hydrographic survey vessels. According to NOAA officials, the agency has limited authority to reprogram funds between these two PPAs without congressional notification and agreement that such reprogramming is warranted.
To propose a reprogramming of funds, NOAA officials said they would need to evaluate the prioritization of all fleet missions. In addition, NOAA officials said they would have to continue to fund fixed operational costs and agency expenses for NOAA’s entire fleet even if operations funds were reprogrammed to hydrographic data acquisition contracts. NOAA officials said the agency intends to develop a strategy describing how it plans to expand private sector involvement in the hydrographic data collection program—which the Ocean and Coastal Mapping Integration Act required the agency to submit in a report to relevant congressional committees in 2009—and it will use the 2010 Ocean and Coastal Mapping and Contracting Policy to guide this effort. These officials said the agency must first implement its planned improvements in collecting both NOAA and private sector hydrographic survey costs; once NOAA has a more accurate basis on which to compare costs, the agency will assess the extent to which it can expand its use of the private sector and develop a strategy accordingly. These officials said that if their analysis indicates the agency should expand its use of the private sector beyond what is currently possible given agency appropriations, the agency will request changes to its appropriations to allow it more flexibility in expanding its use of the private sector. However, NOAA officials did not provide specific information about how they intend to develop the strategy, what elements it will contain, or when it will be completed. Without developing such a strategy, NOAA may have difficulty minimizing duplication and taking maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. Recognizing the importance of nautical charts to help ensure safe passage of people and goods through the nation’s waterways, NOAA has taken steps to improve its ability to set priorities for collecting hydrographic data. NOAA also prepares annual reports that compare the costs of NOAA conducting its own hydrographic surveys to the costs of contracting for such surveys. NOAA’s standard operating procedure requires the agency to track and report all costs for the hydrographic survey program. However, NOAA has not determined how it will track depreciation costs of capital assets or established time frames to improve its tracking of major maintenance costs for vessels. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its hydrographic survey program, as called for by the agency’s standard operating procedure. In addition, NOAA was required by law to develop a strategy for expanding its use of the private sector in its hydrographic survey program, but it has not done so and has not provided specific information on how and when it will. Without such a strategy, NOAA may have difficulty minimizing duplication and taking maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. 
We recommend that the Secretary of Commerce direct the NOAA Administrator to take the following two actions: (1) ensure that NOAA’s efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance, in accordance with its standard operating procedure, and (2) develop a strategy for expanding NOAA’s use of the private sector in its hydrographic survey program, as required by law.

We provided a draft of this report to the Department of Commerce for review and comment. NOAA, responding on behalf of Commerce, stated in its written comments (reproduced in app. II) that it agreed with our two recommendations. Regarding our recommendation related to improving NOAA’s cost comparison reports, NOAA agreed that its cost estimates should include the depreciation costs of new vessels once they are operational and stated that it will work to obtain an accurate depreciation schedule. NOAA also stated that it will take steps to improve its tracking and reporting of depreciation costs for equipment and repair and maintenance, including its accounting for ships in port undergoing major maintenance. Regarding our recommendation that NOAA develop a strategy for expanding its use of the private sector in hydrographic surveying, NOAA stated that the agency will develop such a strategy once it improves its approach for comparing its hydrographic survey costs to those of the private sector. NOAA also provided one technical comment, which we incorporated.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix III.

The National Oceanic and Atmospheric Administration (NOAA) has issued standards—known as the Hydrographic Surveys Specifications and Deliverables (HSSD)—for all hydrographic survey data collected by both private sector contractors and NOAA staff. NOAA maintains a quality assurance program for these data that includes three main review procedures (described below). The HSSD standards for conducting hydrographic surveys are based in part on the International Hydrographic Organization’s Standards for Hydrographic Surveys. These standards pertain to hydrographic surveys that are intended for harbors, harbor approach channels, inland navigation channels, and coastal areas of high commercial traffic density, and they generally apply to shallower areas less than 100 meters in depth. According to NOAA officials, the HSSD has been reviewed annually since its initial publication in 2000, and NOAA has procedures in place to obtain suggestions from private sector contractors regarding changes to the HSSD. For example, at its annual field procedures workshop, NOAA conducts a session on data quality review standards and practices, and it solicits recommendations for changes to the HSSD from both NOAA staff and private sector hydrographers. According to NOAA officials, contractors submitted fewer than 10 recommendations in 2016 but submitted more than 30 recommendations in 2017.
All recommended changes to the HSSD are reviewed by the Office of Coast Survey’s Hydrographic Surveys Division, Operations Branch. Recommendations are then forwarded to the Office of Coast Survey Board of Hydrographers for review, and the survey board submits its recommendations to the Chief of the Hydrographic Surveys Division for final approval. NOAA’s hydrographers test the feasibility of many significant changes to the HSSD before they are put into practice by private sector hydrographers. In June 2016, NOAA approved a new position specifically to oversee and coordinate efforts related to hydrographic specifications, recommended procedures, and training. According to NOAA officials, they intend to fill the position in August 2017.

NOAA officials said the HSSD is also the standard on which many other international hydrographic entities base their hydrographic surveying requirements and is widely used by the hydrographic mapping community. According to NOAA officials, examples of uses of the HSSD include the following: the hydrographic specifications section of the National Society of Professional Surveyors/Hydrographic Society of America certified hydrographer exam is based in part on the HSSD; the University Oceanographic Laboratory System Multibeam Advisory Committee references the HSSD in its specifications for multibeam sonar calibrations; and the only two U.S. universities with graduate programs in hydrography—the University of New Hampshire and the University of Southern Mississippi—rely on the HSSD as part of their programs.

In addition, NOAA officials said the Office of Coast Survey has worked with different entities to help ensure that data collected by these entities meet HSSD specifications so that the data can be used on NOAA’s nautical charts. For example, officials said the office has worked with the New Jersey Department of Transportation since 2014 on survey data the department is collecting for all New Jersey coastal waters; with Coastal Carolina University since 2015 on survey data the university is collecting for the Bureau of Ocean Energy Management, an agency within the Department of the Interior; and with the University of South Florida since 2016 on survey data the university is collecting for a significant portion of western Florida’s coastal waters.

NOAA’s quality assurance program includes three main review procedures intended to ensure that hydrographic data submitted to NOAA meet quality standards: the Rapid Survey Assessment, Survey Acceptance Review, and Final Survey Review.

Rapid Survey Assessment. NOAA’s hydrographic survey data processing branches located in Seattle, Washington, and Norfolk, Virginia, are responsible for initiating a hydrographic survey data “rapid survey assessment” within 5 working days of survey data being delivered to NOAA by private sector contractors and NOAA staff. According to NOAA documentation, the assessment, which should be completed within 2 working days, is intended to improve data quality by quickly identifying significant deficiencies in hydrographic survey data products. The assessment helps ensure the survey data meet HSSD technical requirements and project-specific instructions that are issued at the start of each survey project. If the assessment finds significant deficiencies, NOAA’s assessment team may make corrections itself or may return the survey to the hydrographer for rework and resubmission.
The hydrographic data processing branches take several factors into consideration when deciding whether to return a survey for rework, such as whether the hydrographers are capable of fixing the error, whether there is value in returning a survey for the purpose of educating the hydrographers to prevent future similar errors, and whether it is faster and more efficient for the processing branch to make corrections. According to NOAA documentation, even if no deficiencies are found, passing the data through this initial assessment does not preclude the processing branch from returning the survey to the field hydrographers for rework and resubmission later in the quality assurance process if significant deficiencies are subsequently found.

Survey Acceptance Review. The survey acceptance review is a detailed evaluation and acceptance of hydrographic survey data conducted by the data processing branches in Seattle, Washington, and Norfolk, Virginia. According to NOAA documentation, the survey acceptance review process includes (1) accepting the survey data from the field hydrographers, (2) evaluating the data and products delivered by hydrographers for deficiencies and deviations from the guidance documents, (3) conducting an internal review of the survey acceptance review process to validate that process, and (4) outlining the findings from the survey acceptance review process and transferring responsibility for the integrity and maintenance of the survey data from the field hydrographer to the processing branch. The survey acceptance review involves several compliance checks and is intended to confirm that the survey data are accurate and to highlight the strengths and weaknesses of the data. A key element of the survey acceptance review is performing quality assurance checks on the survey data to ensure the survey was performed to the standards required in guidance documents, including the HSSD, NOAA’s hydrographic field procedures manual, and any hydrographic survey project-specific instructions. Upon completion of the survey acceptance review, an internal review is conducted to verify that the survey acceptance review was completed in accordance with relevant standard operating procedures, and that any issues outlined in the review documentation are consistently delineated. After the internal review is completed and approved, the completed documentation is forwarded to the Processing Branch Chief for review. The final output of the review process includes an acceptance letter to the Hydrographic Surveys Division Chief through the Processing Branch Chief outlining any findings from the review and releasing the field hydrographers from further responsibility for the data. Figure 5 illustrates the survey acceptance review process.

Final Survey Review. The NOAA contracting officer’s representative is responsible for the final quality assurance review for each hydrographic survey project. According to NOAA officials, this is a critical stage, as the contracting officer’s representative has been involved at every stage of the survey, from planning and technical evaluation to survey monitoring, including at least one inspection visit with the contractor during the survey time frame. The contracting officer’s representative is the primary point of contact when the contractor seeks guidance to resolve technical issues.
During the final review, the contracting officer’s representative reviews the survey to ensure it is complete—this is the last stage of quality assurance review before the data are archived and made available to the public.

In addition to the individual named above, Steve Gaty (Assistant Director), Leo Acosta (Analyst-in-Charge), Martin (Greg) Campbell, Patricia Farrell Donahue, Timothy Guinane, Benjamin Licht, J. Lawrence Malenich, Ty Mitchell, Guisseli Reyes-Turnell, Danny Royer, Jeanette Soares, and Arvin Wu made key contributions to this report.
NOAA is responsible for collecting hydrographic data—that is, data on the depth and bottom configuration of water bodies—to help create nautical charts. NOAA collects data using its fleet and also procures data from the private sector. The Hydrographic Services Improvement Act of 1998 requires NOAA to acquire such data from the private sector “to the greatest extent practicable and cost-effective.” GAO was asked to review NOAA efforts to collect hydrographic data. This report examines (1) how NOAA determines its hydrographic survey priorities, (2) NOAA's efforts to compare the costs of collecting its own survey data to the costs of procuring such data from the private sector, and (3) the extent to which NOAA has developed a strategy for private sector involvement in hydrographic data collection. GAO analyzed relevant laws and agency procedures, NOAA cost comparison reports from fiscal years 2006 through 2016, and other NOAA information, such as hydrographic survey program priorities. GAO also interviewed NOAA officials and the eight survey companies that currently have contracts with NOAA. The Department of Commerce's National Oceanic and Atmospheric Administration (NOAA) uses a three-step process to determine its hydrographic survey priorities, according to agency documents and officials. NOAA first identifies areas in greatest need of surveying by analyzing data such as seafloor depth, shipping tonnage, and the time elapsed since the most recent survey. Second, the agency evaluates the availability of funding resources as well as the availability and capability of NOAA and private sector hydrographic survey vessels. Third, NOAA develops an annual hydrographic surveying plan that identifies survey priorities. To help inform the first step in this process, NOAA is developing a model to take advantage of new mapping technologies. NOAA prepares an annual report comparing the cost of collecting its own hydrographic survey data to the cost of procuring data from the private sector but does not include all costs in its cost comparisons. Under its standard operating procedure, NOAA is to report the full cost of the hydrographic survey program, including equipment, maintenance, and administrative costs. GAO's review of NOAA's cost comparison reports from fiscal years 2006 through 2016, however, found that NOAA did not in all instances report complete or accurate cost data. For example, NOAA did not include the acquisition of a $24 million vessel in 2012, and in some cases it did not report certain costs in the year to which those costs should be assigned. NOAA officials said they recognized the need to improve the agency's tracking of costs, and they identified actions they intend to take but did not always provide information about specific steps to carry out these actions or associated time frames. For example, NOAA officials said they planned to implement an improved process in fiscal year 2019 for tracking the costs of capital assets such as vessels but did not identify specific steps to do so. They also said they plan to develop a system to better track maintenance costs but did not provide specific details or a time frame to do this. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track asset and maintenance costs, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its survey program, as specified in the agency's standard operating procedure. 
NOAA has taken steps to increase private sector involvement in its hydrographic data collection program but has not developed a strategy for expanding such involvement as required by law. For example, NOAA moved to a centralized process for competing and awarding contracts, which NOAA officials said reduced administrative costs and contract award time and allowed NOAA to increase the number of private sector firms under contract from five to eight. However, NOAA did not develop a strategy for expanding its use of the private sector to minimize duplication and take maximum advantage of private sector capabilities, as required by law. NOAA officials said the agency intends to develop such a strategy but must first make improvements in its approach to comparing its own hydrographic survey costs to those of the private sector. However, NOAA officials did not provide specific information about how they intend to develop the strategy, what elements it will contain, or when it will be completed. Without developing such a strategy, NOAA may have difficulty minimizing duplication and taking advantage of private sector capabilities. GAO recommends that NOAA (1) ensure that its efforts to improve its cost comparison reports include actions to fully track asset and maintenance costs and (2) develop a strategy for expanding private sector involvement in the hydrographic survey program. NOAA agreed with GAO's recommendations.
In 1978, Congress deregulated the airline industry, phasing out the federal government’s control over domestic fares and routes served and allowing market forces to determine the price, quantity, and quality of service. Free to determine which communities they would serve, as well as what fares they would charge, most major carriers became “network” carriers, developing “hub-and-spoke” networks and providing service from their hubs to many “spoke” cities they served. Anticipating that airlines would be free to focus their resources on generally more profitable high-density markets, Congress became concerned that major carriers would eliminate their less profitable routes serving smaller communities, causing these communities to lose air service. In response, Congress established the Essential Air Service (EAS) program as part of the Airline Deregulation Act of 1978. The EAS program subsidizes commercial air service for communities that would otherwise have lost service as a result of deregulation. The law specifies that if an air carrier cannot continue service to a community without incurring a loss, DOT shall then use EAS program funds to award a subsidy to that carrier or another carrier willing to provide service. Congress initially enacted the program for 10 years, and later extended it for another 10 years. In 1996, Congress removed the 10-year time limit.

Under the Airline Deregulation Act, communities that were eligible for air service on October 24, 1978, are eligible for EAS-subsidized service. There are EAS-eligible communities in 49 states, Puerto Rico, and American Samoa. As of November 2008, DOT had agreements with carriers to provide subsidized service to almost 150 communities—102 in the continental United States, 43 in Alaska, and 2 in Puerto Rico. Not all communities that are eligible for EAS service currently receive it; many currently have unsubsidized air service. Figure 1 shows the communities that had access to EAS service as of January 1, 2009, or are projected to have service starting later in the year.

Communities near airports with EAS service vary in their population. For example, 58 percent of the communities within 40 miles of an airport with EAS-subsidized service as of January 1, 2009, had a population of less than 10,000, while 2 percent had a population of over 100,000.

A multistep process is required for subsidized EAS service to begin at a community. For a community that is not currently receiving EAS subsidies, the process starts when the last air carrier providing unsubsidized service to an EAS-eligible community files a Notice of Termination, which is a 90-day notice of its intent to suspend, terminate, or reduce service below the minimum level of service required by law. If no other air carrier is willing to provide unsubsidized air service to the community, DOT solicits proposals from carriers that would be willing to provide service with a subsidy. Carriers requesting a subsidy must document that they cannot profitably serve the community without a subsidy by submitting various financial data, such as profit-or-loss statements, to DOT. DOT then reviews these data along with information about the aviation industry’s pricing structure, the size of aircraft required, the amount of service required, and the number of projected passengers who would use this service. DOT also considers the community’s preferences for the proposed service.
Finally, DOT selects a carrier based on statutory selection criteria and sets an annual subsidy amount intended to compensate the carrier for the amount by which its projected operating costs exceed its expected passenger revenues as well as a profit element of at least 5 percent of total operating expenses, according to statute. Once air service is under way, DOT makes monthly subsidy payments to the carrier based on the number of scheduled flights completed. DOT’s agreement with the carrier is subject to renewal generally every 2 years, at which time other air carriers are permitted to submit proposals to serve that community with or without a subsidy.

In general, the law currently requires that an EAS carrier provide the following: service to a hub airport, defined as a Federal Aviation Administration (FAA)-designated medium- or large-hub airport; two daily round trips, 6 days a week, with not more than one intermediate stop to the hub; flights at reasonable times, taking into account the needs of passengers with connecting flights; service in an aircraft with an effective capacity of at least 15 passengers, under certain circumstances, unless the affected community agrees in writing to the use of smaller aircraft; service in aircraft with at least two engines and using two pilots; and service with pressurized aircraft under certain circumstances.

Congress and DOT revised the program’s eligibility requirements during the late 1980s and early 1990s, in response to insufficient program funding. For example, in June 1989, Congress prohibited DOT, beginning in fiscal year 1990, from subsidizing service to or from any essential air service point in the contiguous 48 states where the subsidy exceeded $300 per passenger. In December 1989, DOT implemented a regulation that, among other requirements, would eliminate EAS funding for communities that had EAS service with a per-passenger subsidy exceeding $200, or that were located less than 70 highway miles from the nearest medium- or large-hub airport, if appropriations for the EAS program were less than the amount needed to maintain EAS service at the communities being served. The Aviation Safety and Capacity Expansion Act of 1990 superseded this regulation by prohibiting DOT from declaring any community ineligible for any reason not specifically set forth in statute. Finally, in fiscal year 1994, Congress prohibited DOT from subsidizing service to communities that (1) are less than 70 highway miles from the nearest medium- or large-hub airport, or (2) require a per-passenger EAS subsidy in excess of $200. Communities located more than 210 miles from the nearest medium- or large-hub airport are exempt from this $200-per-passenger subsidy limit.

Over the years, several communities have lost eligibility for EAS service for various reasons. In some instances—after the requirements went into effect—it was because the per-passenger subsidy for their service exceeded the allowable limit, or because the community was less than 70 miles from a medium- or large-hub airport. Other communities lost EAS service in the early 1990s as Congress took actions to address program funding constraints.

DOT monitors participating air carriers’ operations to help ensure their service complies with program requirements. For example, DOT periodically reviews carriers’ enplanement data for the EAS routes carriers serve, to determine whether the carriers’ per-passenger subsidy exceeds the statutory cap of $200.
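The interaction between this subsidy formula and the $200 per-passenger cap can be sketched with hypothetical figures. In the sketch below, the route’s projected costs, expected revenue, and passenger counts are invented for illustration; the formula itself simply restates the statutory description above.

```python
# Hypothetical illustration of the EAS subsidy arithmetic described above.
# The route figures are invented; the formula restates the statutory terms:
# subsidy = (projected operating costs - expected passenger revenue)
#           + a profit element of at least 5 percent of operating expenses.

PROFIT_RATE = 0.05    # statutory minimum profit element
CAP = 200             # per-passenger cap (communities more than 210 miles
                      # from a medium- or large-hub airport are exempt)

def annual_subsidy(costs, revenue):
    return (costs - revenue) + PROFIT_RATE * costs

subsidy = annual_subsidy(costs=1_500_000, revenue=700_000)   # $875,000

# Because DOT pays per completed flight, the per-passenger figure depends
# entirely on how many passengers actually board during the year.
for passengers in (6_000, 4_500, 3_000):
    per_pax = subsidy / passengers
    status = "over the cap" if per_pax > CAP else "under the cap"
    print(f"{passengers:>5} passengers: ${per_pax:,.0f} per passenger ({status})")
```

The same annual subsidy can thus sit comfortably under the cap in a year with healthy ridership and exceed it in a year with weak ridership, which is the dynamic behind several of the eligibility losses described below.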
Because DOT’s subsidy payments to carriers are based on the number of flights completed, regardless of the number of passengers on board, an EAS route with few passengers has a higher per-passenger subsidy than it would have with more passengers. When DOT does find that a carrier’s subsidy per passenger exceeds $200 for an EAS route, the agency warns the community of its tentative decision to terminate the route subsidy and allows the community 20 days to object if the community finds that DOT has made a mistake in its calculations.

Since 1989, 61 communities have lost EAS service because they became ineligible to receive subsidized service. Twenty-six communities lost service in fiscal year 1990 as a result of reduced program funding. Six of these communities lost service as of October 1989 because their carrier’s subsidy per passenger exceeded the $300 limit then in effect, and 20 more lost service as of January 1990 because their carrier’s per-passenger subsidy was over $200. Twelve communities lost service in fiscal year 1994, a year when funding for the EAS program was reduced, because their carrier’s per-passenger subsidy exceeded $200 or because they were within 70 miles of a medium- or large-hub airport. Twenty-two more communities became ineligible at various times since fiscal year 1995 because their carrier’s per-passenger subsidy exceeded $200. One community became ineligible to receive subsidized service in 1995 because a nearby small hub was reclassified as a medium hub. Also, 11 communities that were not receiving EAS-subsidized service lost their eligibility for EAS service when the last unsubsidized carrier filed to suspend service at their airport and DOT determined that the community was ineligible because it was within 70 miles of a medium- or large-hub airport.

The number of communities served by the EAS program in the continental United States has risen in recent years—from 87 communities as of June 1, 2003, to 102 communities as of November 1, 2008. The subsidies that carriers require to serve those routes have also increased since 2003, adding to the long-term cost of the EAS program. For example, the average annual subsidy DOT has awarded for EAS service per community in the continental United States increased from about $883,000 as of June 2003 to about $1,371,000 as of November 2008. After adjusting this growth for the effects of inflation, the average EAS subsidy in 2008 was about 35 percent higher than in 2003. In addition, significant increases in carrier subsidies per community have come within the past 2 years. Between November 2007 and November 2008, DOT renewed or awarded agreements to 57 communities in the EAS program in the continental United States, with the total annual subsidy for those communities increasing from $52.4 million to $86.3 million (in nominal dollars)—an increase of 65 percent. For many of these routes, the carrier’s annual subsidy amount more than doubled.

While the number of EAS communities and the amount of subsidies have increased, annual obligations ranged between $103 million and $114 million (in nominal dollars) from fiscal year 2003 through fiscal year 2007. In fiscal year 2008, obligations for EAS subsidies increased to about $116 million. An additional $31 million in balances from completed EAS agreements that could not be retained for the EAS program was returned to FAA, bringing total obligations to $147 million, as shown in figure 2.
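The inflation adjustment can be reproduced from the nominal figures above. In the sketch below, the assumed cumulative price deflator of about 15 percent for 2003 through 2008 is illustrative, chosen to be consistent with the reported result rather than taken from the underlying analysis.

```python
# Back out real (inflation-adjusted) growth in the average EAS subsidy per
# community from the nominal figures above. The cumulative deflator is an
# assumption consistent with the reported result, not the actual index used.

subsidy_2003 = 883_000       # average annual subsidy per community, June 2003
subsidy_2008 = 1_371_000     # average annual subsidy per community, Nov. 2008
deflator = 1.15              # assumed cumulative price growth, 2003-2008

nominal_growth = subsidy_2008 / subsidy_2003 - 1
real_growth = subsidy_2008 / deflator / subsidy_2003 - 1

print(f"nominal growth: {nominal_growth:.0%}")   # about 55 percent
print(f"real growth:    {real_growth:.0%}")      # about 35 percent
```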
While our review focuses on EAS service in the 48 contiguous states, obligations are reported for the EAS program as a whole, including obligations for Alaska, Hawaii, and Puerto Rico. However, EAS service outside of the 48 contiguous states does not represent a large portion of EAS program funding—DOT estimates service to these locations accounted for about 8 percent of total program subsidies as of 2008.

In the first 6 months of 2008, three carriers serving 37 communities ceased operations. Most of these communities were without service for several months because replacement carriers were not able to start up immediately.

Keeping pace with the program’s rising costs, total appropriations for the EAS program have generally increased in recent years. Total appropriations have increased from about $102 million in fiscal year 2003 to just over $124 million in fiscal years 2007 and 2008. For fiscal year 2009, appropriations available for the program include $123 million in regular appropriations and an additional $13.2 million provided in a supplemental appropriations act, bringing the program’s total fiscal year 2009 appropriations to $136.2 million. The administration has requested about $175 million for the program in 2010, which would represent a further increase in program funding.

EAS program funding comes from multiple sources. Each year, the EAS program receives $50 million in overflight fees. Recently, Congress also has annually appropriated additional funds from the Airport and Airway Trust Fund and has supplemented these EAS program funds in 2005, 2007, 2008, and 2009 with additional appropriations, as shown in figure 3. DOT had requested additional funding for 2005, 2007, and 2008 to account for the higher dollar amounts required to reimburse carriers for serving EAS communities. For example, in fiscal year 2005, DOT transferred $5 million from the Small Community Air Service Development Program, which provides grants to enhance small communities’ air service, to help fund the EAS program’s increased costs.

Recently, DOT officials have been concerned about whether the EAS program has sufficient funding to serve both current EAS communities and additional communities that may be eligible for subsidized service. The EAS program is appropriated a specific amount each fiscal year. However, since fiscal year 2005, language has been included in appropriations legislation stating that if the annual amount provided for EAS is insufficient to meet the costs of the EAS program in the current fiscal year, the Secretary of the Department of Transportation is required to transfer funds to EAS from any other amounts appropriated to or directly administered by the Office of the Secretary. This would require DOT to draw upon other funding sources within the Office of the Secretary to be able to make payments to carriers and enter into new service agreements. DOT had to do this once, using some Small Community Air Service Development Program funding for the EAS program in 2005. In addition, a DOT official noted that the EAS program faces a significant potential financial liability, in that there are about 40 other EAS-eligible communities in the country with airports currently served by a single unsubsidized commercial carrier.
DOT officials believe that the agency would encounter a significant financial liability—about $60 million annually—if the airlines serving these single-carrier airports all filed a Notice of Termination requiring DOT to subsidize continued service. In fact, three communities that have not previously had EAS service have come into the program since June 2008, and a fourth is expected to obtain subsidized service later this year.

According to a DOT official, the EAS program has recently experienced an unusually high level of carrier turnover. In 2008 alone, three EAS carriers serving 37 communities ceased operations in the first 6 months of the year. According to a DOT official, various factors caused the three carriers to cease operations, and recent fuel price increases might have accelerated this situation. DOT was able to obtain a replacement carrier to continue service, without interruption, for one of the 37 communities. However, 30 of the other 36 communities were temporarily without EAS air service for up to 10 months, and 6 communities are still without service because the carrier that DOT selected in 2008 to serve those communities withdrew before it started service. An official of the carrier stated that it withdrew because it was unable to finance the refurbishing of aircraft needed to serve those routes. In late June 2009, DOT awarded agreements to two carriers to provide EAS service to these 6 communities; dates for the start of service had not been set.

A DOT official noted that while the number of communities that experienced carrier turnover in 2008 was unprecedented, the number of carriers providing air service to communities under the EAS program has actually been declining over many years. The number of carriers providing EAS service has declined from 34 as of February 1987 to 10 in 2009. In addition, as the number of carriers has declined, the percentage of EAS routes served by just a few carriers has increased. In February 1987, the largest number of routes served by any one carrier was 13, and the four carriers that served the most communities accounted for 33 percent of the EAS routes. At present, four carriers serve about 85 percent of the routes in the EAS program, with a single carrier serving nearly half of the EAS routes. As noted above, one carrier recently withdrew from 6 EAS routes that it was awarded last year before it even started service. Also, DOT faces a potential rise in the number of communities requiring subsidized air service should their single unsubsidized carrier end operations. Should additional EAS carriers withdraw from the program or be financially unable to serve additional communities seeking EAS service, the remaining carriers may not have enough capacity to provide EAS service to all communities that qualify.

Many of the expert panelists and other stakeholders we interviewed stated that some EAS program requirements significantly add to the cost of providing subsidized air service to communities. For example, members of our expert panel thought the EAS mandate requiring carriers to use aircraft with a 15-seat capacity for most communities presented the biggest challenge to providing and sustaining air service to communities under the EAS program. The mandate requires carriers to use larger aircraft than may be needed to adequately serve some communities. In addition, the 15-seat aircraft that this requirement was based upon are no longer available.
Currently, to satisfy the 15-seat minimum, most EAS routes are served by 19-seat twin-engine turboprop aircraft. (See fig. 4 for an example of a 19-seat twin-engine turboprop aircraft.)

According to industry representatives, these 19-seat turboprop aircraft used on many EAS routes are relatively costly to operate. First, the aircraft are no longer in production, are in limited supply, and are also relatively costly to acquire and refurbish to comply with current operating standards. Second, the “Commuter Safety Rule,” which FAA implemented in 1997, has increased EAS carriers’ costs for operating 19-seat turboprop aircraft. Through the rule, FAA intended to increase safety by requiring aircraft in the 10-to-30 passenger range to meet more stringent safety requirements. The increased safety standards made some aircraft, including 19-seat turboprop aircraft, more costly to operate, because they required carriers to improve ground deicing programs, carry additional safety equipment for passengers, and comply with additional operating constraints. For example, an industry group, in a petition to DOT for exemptions from this rule, provided information showing that one EAS carrier’s training costs increased by almost 600 percent because of the additional training required for its captains by the revised rule. An EAS carrier official stated that the carrier’s cost to operate 19-seat aircraft, calculated as cost per passenger seat mile, is now about twice what it was in 1994, primarily due to these additional regulatory requirements. According to industry representatives, the increased operating costs associated with the required safety upgrades have contributed to some carriers’ decisions to eliminate their inventory of 19-seat planes. As a result, there are fewer airlines with the type of equipment suitable to serve most EAS routes.

The EAS minimum service requirements may also require a carrier to provide more service than needed to meet the demands of a community and can therefore increase the carrier’s operating costs. For example, the EAS program statutes stipulate a minimum level of service for EAS-subsidized routes—two daily round-trip flights, 6 days per week, to a hub airport. Carriers flying 19-seat aircraft can be effectively locked into service that may not be right-sized—that is, with capacity exceeding passenger demand—for some smaller markets, and possibly more costly than necessary to fulfill communities’ service needs. If the need to meet EAS program requirements results in carriers providing more capacity than some communities might be able to support, EAS service to those communities may be too costly for the carrier, leading it to withdraw from the EAS program.

Further, the carriers’ 2-year agreements with DOT to provide EAS service can complicate the carriers’ efforts to lease aircraft to serve EAS routes. For example, some industry officials maintain that the 2-year agreements that DOT enters into with carriers can be too short because carriers often must lease aircraft for longer periods, such as 5 years. Therefore, a carrier entering into a 5-year lease to obtain aircraft to serve EAS routes risks having to maintain excess aircraft if it loses the routes after 2 years. However, DOT officials note that under the EAS program’s current funding structure, longer-term agreements would still be subject to availability of annual funding, so the agreement would not be guaranteed.
Finally, spikes in fuel prices may add to EAS carriers’ costs and make it difficult to continue service. Although fuel prices typically vary over time, in 2008 fuel began to comprise an increasing portion of airlines’ costs, in some cases contributing to carriers ceasing operations. For example, one EAS carrier reported that its fuel costs increased from 28 percent of its operating costs in 2007 to 35 percent of its operating costs in 2008, although fuel prices began to decline late that year. We also found that last year, selected EAS carriers experienced a rapid and dramatic spike in fuel prices, as the average per-gallon fuel price for these carriers more than doubled between January 2007 and July 2008, before declining through December 2008, as illustrated in figure 5. December 2008 was the latest month for which fuel price data were available for these carriers.

Legislation passed in 2003 explicitly provided DOT with the option of adjusting the subsidy paid to an EAS carrier if the carrier’s expenses substantially increased. However, according to an industry group that represents regional airlines and the majority of EAS carriers, DOT officials are generally not willing to renegotiate EAS agreements to reflect increased costs because DOT officials are concerned about retaining sufficient funds to renegotiate the agreements and provide service for all the communities that may qualify for service. DOT officials indicated they are also concerned that establishing a policy of renegotiating subsidies upward for fuel costs could lead carriers to underestimate fuel costs in order to be selected as the carrier for a route, only to turn around soon after selection and ask for fuel rate relief. However, industry officials explained that if a carrier is unwilling to continue providing service under an EAS agreement because of operating cost increases, the carrier’s only recourse is to file a formal Notice of Termination with DOT, stating its intent to terminate service. For example, in June 2008, Mesaba Airlines filed such a notice informing DOT of its intent to terminate service at two communities in Michigan because of fuel price increases. Mesaba indicated that it would withdraw the notice if DOT agreed to apply a fuel adjustment to bring the EAS subsidy rate for the communities in line with current fuel conditions. DOT denied the request and rebid the routes. DOT eventually reselected Mesaba Airlines to serve the routes and awarded the airline a 28 percent increase over its previous annual subsidy for the routes. Still, industry and small airport officials said that filing a termination notice is an undesirable option for airlines because service interruptions and carrier turnover can negatively affect communities’ confidence in EAS service, and result in a further reduction in ridership.

As the pool of carriers willing to provide EAS service declines, competition for EAS routes has also declined. For example, of the 37 routes that DOT awarded after three EAS carriers ceased operations in 2008, 20 were awarded without competition, including 7 that were awarded to the one viable bidder remaining after the only other bidder went out of business. However, DOT officials informed us that their sealed-bid process prevents carriers from knowing whether there are competing bids from other carriers. They also indicated that they can reject bids that they believe are too high, and they can negotiate with the carrier.
For instance, the officials cited a recent example of one carrier's subsidy request of approximately $2.3 million being negotiated down to about $1.6 million. Nevertheless, a declining number of carriers willing to provide EAS service can reduce the level of competition among carriers for EAS routes.

The viability of EAS routes also depends on the number of passengers that take EAS flights. According to DOT data, some EAS routes do not carry many passengers, creating a financial challenge for the carriers attempting to serve these communities. During fiscal year 2008, the average load factor—the percentage of available seats filled by paying passengers—was 37 percent across all EAS flights. By comparison, the average load factor for unsubsidized commercial flights nationwide has averaged about 80 percent in recent years. Two factors may contribute to the lack of passenger traffic on EAS flights. First, the EAS program has always served areas with limited population, but demographic shifts in the last 30 years may have reduced the population of some EAS communities, further limiting the potential passenger base for the local airport. Second, the EAS program loses potential passengers and fare revenue when low fares or more convenient air service schedules at nearby larger airports encourage passengers to bypass EAS service at their local airport in favor of driving or taking other transportation to the nearby airport.

A significant degree of urbanization occurred throughout the 20th century as people moved out of rural areas and into cities and suburbs. Although much of this migration happened early and in the middle of the century, the trend has continued. Some geographic areas, especially in the Midwest and Great Plains states, lost population between 1980 and 2007, as illustrated in figure 6. As a result, certain areas of the country are less densely populated than they were 30 years ago when Congress initiated the EAS program. Accordingly, some EAS communities' reduction in ridership may be attributable, in part, to a smaller population base.

Airports generally attract passengers from the surrounding population. However, people who live near smaller airports often choose to either drive to their destination or use larger airports that are farther away than their local airport. This phenomenon is typically referred to as "leakage." Surveys of passengers as well as travel agents in communities served by small airports suggest that leakage can be widespread. For example, a travel agent survey in Arizona estimated that the small airports in that state often suffer significant leakage, in some cases as much as 90 percent. Another study we conducted found that EAS airports often serve less than 10 percent of the local passenger traffic, and that leakage is a significant factor. Moreover, it appears that some people may be willing to drive considerable distances—more than 150 miles—to get to a larger airport.

The loss of passengers from an EAS route reduces the carrier's fare revenues, while increasing the average per-passenger subsidy for that EAS service. Therefore, significant passenger leakage can lead to (1) the carrier seeking a larger subsidy from DOT, (2) the community losing service if the per-passenger subsidy rises above the $200 cap, or (3) the route becoming so costly for the carrier that it chooses to file a notice of intent to terminate service. Certain key factors appear to underlie the propensity of travelers to bypass small airports in favor of driving to larger airports.
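Before turning to those factors, a minimal sketch, using entirely hypothetical figures, illustrates the arithmetic behind this dynamic: because the annual subsidy in an EAS agreement is fixed, each passenger lost to leakage raises the average per-passenger subsidy, pushing some routes toward the $200 cap noted above. All numbers in the sketch are illustrative assumptions, not program data.

```python
# Hypothetical EAS route: a fixed annual subsidy spread over a shrinking
# passenger base. Fewer passengers means a higher per-passenger subsidy,
# and possible loss of eligibility on routes subject to the $200 cap.

ANNUAL_SUBSIDY = 1_000_000   # hypothetical agreed annual subsidy, in dollars
SUBSIDY_CAP = 200            # per-passenger cap applying to some communities

def per_passenger_subsidy(annual_passengers: int) -> float:
    """Average subsidy per passenger for a fixed annual subsidy."""
    return ANNUAL_SUBSIDY / annual_passengers

for passengers in (8_000, 6_000, 4_000):   # progressively greater leakage
    subsidy = per_passenger_subsidy(passengers)
    status = "exceeds cap" if subsidy > SUBSIDY_CAP else "within cap"
    print(f"{passengers:>6} passengers: ${subsidy:6.2f} per passenger ({status})")
```

In this illustration, losing half the route's passengers doubles the per-passenger subsidy, from $125 to $250, moving the route from well under the cap to over it.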
Fares for EAS flights are generally high, relative to fares on comparable unsubsidized flights. We analyzed calendar year 2007 fares on routes involving EAS airports and compared these fares to the fares for routes of similar distances involving only non-EAS airports. We found that fares for EAS routes tend to be considerably higher—on average about 50 percent higher—than fares for similarly distanced non-EAS routes. Our analysis did not attempt to identify reasons for the difference in fares between EAS and unsubsidized flights, but likely factors include the number of airlines serving the route, the number of passengers, and the portion of passengers paying the generally higher business fares on that route. Whatever the cause, relatively high fares for EAS flights can make those flights less attractive, compared to the alternative of driving to another airport. Studies of the use of airports in small communities have generally found that passengers may drive to nearby larger airports to obtain lower fares rather than use EAS service.

The growth of low-cost carriers has created alternatives to EAS service. Fifteen of 18 experts on our panel cited the expansion of low-cost carriers as one of the biggest challenges facing EAS providers, and 9 of these panelists cited low-cost carrier expansion as the most important challenge to EAS providers. In the past decade, low-cost carriers have considerably expanded their networks; these carriers' share of domestic airline capacity increased from 20 percent in 2000 to 29 percent in 2007. By 2007, low-cost carriers were serving virtually every large- and medium-hub airport in the country as well as half of the small hubs. As low-cost carriers have extended service to more airports around the country, they provide more alternatives for community residents, who can drive or take other transportation to other airports to get the lower air fares these carriers offer. Many industry stakeholders have said, and a previous GAO study found, that residents who live near an EAS airport drive to other airports to obtain lower airfares, such as those that low-cost carriers offer.

Larger airports also tend to offer better service than that available at EAS airports. Larger airports are generally more attractive to travelers than small airports served by EAS flights because they offer more frequent flights and more nonstop destinations. EAS communities receive at least the required two daily round-trip flights, 6 days per week—although some communities receive more. Still, most EAS routes connect a community to a single airport. Such limited service may be too inconvenient to meet the needs of time-sensitive business travelers. Studies have found that a key reason passengers avoid small airports is the more frequent flight offerings at larger airports, which can be more convenient for travelers. So, if driving to a larger airport is feasible, a traveler may choose that option to get a nonstop flight to his or her destination instead of taking an EAS flight from the local community airport.

Difficulties in making useful connections at the hub airports EAS carriers serve also discourage potential EAS passengers. For most EAS passengers, the hub airport where their EAS flight lands is not the end of their trip. Typically, EAS passengers need to transfer to a connecting flight to take them to their final destination.
If the EAS flight takes passengers out of their way and increases their trip time, they may seek alternative travel options. Even if the EAS flight takes them in the direction of their final destination, limited EAS flight schedules may provide poor connection options. A representative of an airport in Iowa served by EAS-subsidized flights to Kansas City said it is hard to get business people to use the EAS flights because the flights often do not match up well with the timing of connecting flights at Kansas City, resulting in long waiting times there. These problems promote passenger leakage away from EAS flights, as potential EAS passengers decide that traveling directly to larger airports is more practical. The problem is exacerbated as major carriers cut back their flights at the hub airports that are EAS destinations. For example, according to an official of one EAS carrier, the number of connecting seats on flights out of two of its destination airports has decreased, reducing options for connecting flights and making the carrier's EAS service to these airports less practical for passengers. As a result, the official said, the carrier's revenue on the routes serving these airports has declined significantly because potential passengers have decided to use other transportation to travel to a larger airport.

Problems with EAS service reliability are another deterrent to using EAS service. Five of the seven representatives of EAS-served small airports who responded to our questions noted that the reliability of EAS service was a significant concern. According to one of these airport representatives, delays, cancellations, and route and schedule changes are commonplace in most EAS communities. Another airport representative noted that reliability of air service may be even more important at small airports than at larger airports, because a cancelled or delayed EAS flight leaves passengers with no other options. Some experts we spoke with indicated that this is a particular disincentive to business travelers, who may choose to drive to a larger airport.

As we noted in our recent report on the financial health of the airline industry, the current economic recession is contributing to decreased industry-wide air travel. Beginning in the second quarter of 2008, passenger traffic began to decline, when compared with the same quarter in the prior year. By the third and fourth quarters of 2008, traffic fell off more significantly, and airlines reduced capacity to maintain their load factors—an option not available to EAS carriers, because they cannot reduce service below the minimum level the program requires. The downward trend appears to be continuing, as industry demand for the first two quarters of 2009 was less than was expected at the beginning of the year. Indications are that the economy also affects carriers providing EAS service. Reported passenger enplanements for the first quarter of 2009 for one EAS carrier are down about 26 percent from the same period a year earlier, and the carrier's load factor declined from 46 percent to 32 percent over that same period. Congress and others have been aware of the increasing difficulty EAS carriers face in providing service to communities.
Congress, previous administrations, and GAO have proposed options to change the EAS program that might help address some of the program requirements that limit the flexibility of carriers providing EAS service or potentially increase the costs of providing service—leading to carriers requiring higher subsidies from DOT. For example, DOT has proposed a number of options, but they have not been included in authorization or appropriations legislation. In addition, the House of Representatives' proposal for reauthorizing FAA (H.R. 915) includes several options that could alter DOT's management of the EAS program and possibly make program participation more attractive to carriers. This proposal has not yet completed the legislative process. We also have described a number of similar options that could promote efficiencies in the EAS program. Again, none of these options has been adopted. Table 1 summarizes some of the key options that have been proposed.

Each of the proposed options has potential advantages and disadvantages. Some options would be beneficial in certain circumstances, but not for all communities or all parts of the country. Further, not all stakeholders will likely agree on which options should be implemented, especially when different options produce different beneficiaries. Finally, different options would have different impacts on federal program costs—some likely increasing total program costs, while others might decrease or limit program costs.

The EAS program's current statutory minimum service requirements—such as providing service with aircraft of at least 15 seats—may add to the cost of providing EAS service, as discussed previously. Fifteen of the 17 members of our expert panel who addressed the issue of aircraft size indicated that giving carriers more flexibility to use smaller aircraft would make the EAS program more effective. Currently, communities entitled to 15-seat or larger aircraft can have EAS service with smaller aircraft only when they waive their rights to the larger aircraft. According to industry stakeholders, some communities want service with larger aircraft of at least 15 seats because that is what the law provides for, as well as for reasons including prestige and perceived concerns about comfort. Without this requirement for minimum aircraft size, a carrier would be allowed to "right size," or better match, the services it provides with the communities' demand, potentially reducing carrier operating costs as well as the subsidy needed from DOT and total federal program costs. Also, we previously reported that allowing carriers to provide EAS service with smaller aircraft could, on certain routes, be cost effective and better suit community needs. For example, officials of one EAS carrier, which flies 9-seat Cessna 402 aircraft, told us that their lower operating costs allow them to provide more frequent flights and charge lower fares than the previous carriers, which flew 19-seat aircraft on those same EAS routes. (See fig. 7 for an example of a 9-seat twin-engine aircraft.) This change has yielded significantly increased passenger ridership. According to the officials, in the first 10 months of service on one of their EAS routes, passenger ridership went up 143 percent compared to the previous EAS carrier's ridership for a comparable period.
In addition, the EAS program manager stated that if he could make one recommendation to Congress, he would suggest that Congress eliminate the 15-seat requirement because a few EAS carriers are providing good service with smaller aircraft.

A disadvantage of this option is that smaller aircraft might not be suitable for all parts of the country. So, while this could be an option for certain routes, it would not fully replace the use of larger aircraft. For example, officials of the carrier that operates the 9-seat Cessna aircraft told us that the aircraft are not pressurized and may not be practical in mountainous areas in the west. Also, one airport representative believed that people would be more reluctant to fly on such smaller aircraft. In addition, these smaller aircraft operate under a different set of safety standards than the larger 19-seat turboprop aircraft most frequently used on EAS routes. According to industry representatives, this could negatively affect airlines that spent money to upgrade their aircraft to meet the safety standards now required for the 19-seat aircraft. An official of one EAS carrier that primarily flies 19-seat aircraft indicated that acquiring the infrastructure and personnel to support an additional type of aircraft would be a costly venture and not an option for the company.

The EAS program's current statutory minimum service requirements—such as providing at least twice-daily service, 6 days per week, at EAS communities—potentially add to the cost of providing EAS service. Six of the 17 members of our expert panel who addressed this issue of service frequency believed that allowing less frequent service would make the program more effective. If a community is unable to generate enough passenger traffic to make twice-daily, 6-day-per-week service viable for a carrier, even with an EAS subsidy, less frequent service might be more economically viable for the carrier. This change could also reduce the subsidy the carrier requires from DOT, assuming that passengers would adjust to the reduced schedule and that overall passenger volume would not significantly decline due to increased passenger leakage. Some industry experts we spoke to believed that the current minimum level of service frequency is already so low that it is inconvenient for time-sensitive business travelers and encourages them to drive to other airports. One airport representative commented that service to one destination, twice a day, does not really fit the definition of "service." Reducing service frequency might only further reduce a community's support for EAS service by making that service less available and less useful.

Some industry representatives have stated that the 2-year EAS agreements are too short, considering that carriers must lease aircraft for longer periods of time, such as 5 years. Five of our 17 panel members identified extending the length of agreements as a way to make the program more effective. In addition, representatives of two airports served by EAS flights noted that carriers are not penalized for poor service—carriers are still compensated when performance is poor or unreliable. Some industry representatives we contacted believed that authorizing DOT to award EAS agreements for longer than 2 years could better assure carriers that they will be able to stay in the program long enough to justify the commitments of financing and equipment that they need to effectively manage EAS service. This change may also attract more carriers willing to participate in the program.
Financial incentives could also encourage better service by EAS carriers. In the view of one airport representative, carriers have spread themselves thin as they try to serve many subsidized communities, leading to undependable service, including late arrivals and departures. Incentives, or other means of linking subsidies to performance, could strengthen carriers' commitment to providing reliable service.

DOT and some communities have expressed concerns about lengthening the agreements because DOT would then have less frequent opportunity to remove carriers that are providing poor service—such as a large number of canceled or delayed flights. Instituting longer agreements would also reduce how often a route would be opened to competition, potentially reducing DOT's ability to manage program costs. DOT officials also pointed out that they could award longer agreements under current legislation, but the program is still subject to annual appropriations.

Carrier and industry officials also said they would like EAS agreements to allow DOT to adjust subsidy amounts in response to certain cost increases that occur during these agreements. For example, fuel cost increases in early 2008 affected EAS carriers' operations. Program reauthorization legislation passed in 2003 allows DOT to adjust carrier compensation in response to increased costs, but DOT has chosen not to use this authority. Among our expert panel, 6 of the 17 individuals who addressed this issue believed allowing renegotiation of EAS agreements in response to rising costs would make the program more effective. Some industry representatives also believe the $200 per-passenger subsidy limit has been in effect for a long time and should be increased, even if only to reflect cost inflation. Allowing renegotiation of EAS agreements in response to rising costs would enable carriers to continue service when they are faced with rising costs, rather than file a Notice of Termination, which starts the process of reawarding the agreement to serve the community. Industry representatives have said that having to file a termination notice when cost increases make it uneconomic to continue service harms their relationship with the community and adds to the perception that service is unreliable. The proposal to allow an increase in the subsidy per passenger in response to fuel cost increases could, in times of rising fuel prices, allow some communities to retain EAS service that they might otherwise lose if carriers needed higher subsidies to continue that service. However, it could increase program costs faster than they would otherwise increase. Although authorized to do so, DOT generally has not adjusted carrier subsidies for current EAS agreements because, according to DOT officials, they have limited program funds and reopening agreements could jeopardize funding to continue EAS service for all eligible communities that might qualify for it. A DOT official we spoke with also stated his belief that the $200 per-passenger subsidy cap has been effective as a primary tool to control costs. In addition, almost none of the experts on our panel believed that increasing the $200 per-passenger subsidy cap would make the EAS program more effective.

We have also described an option of regionalization—essentially consolidating EAS service to and from a number of closely located EAS communities at a single airport.
For example, there are currently 12 pairs of EAS communities that are within 60 miles of each other, and in 5 of these pairs the communities are within 50 miles of each other. The previous administration's fiscal year 2009 budget request included language that would have supported regionalized air service. However, this language was not incorporated in DOT's appropriation and was not included in the administration's fiscal year 2010 budget request for DOT.

In more sparsely populated areas, or areas where population has declined, this approach would focus EAS program support on one airport and could increase the number of passengers using that airport, potentially making the service more viable. With more passengers using the airport, expanded service, such as more flights, larger aircraft, or additional destinations, could be another potential benefit. Consolidating service at multiple airports into a single airport may not initially be popular with the communities that would lose EAS service at their local airport; passengers who did use the service provided at those airports would be inconvenienced. Also, some airport representatives and other experts said this option would depend on local circumstances, such as the distance between the communities and driving conditions. However, if air service for several communities were consolidated at a single airport, in connection with support for ground transportation between those communities and the airport, it could increase the likelihood that communities would accept the consolidation. If this option is pursued, a nonpartisan commission may need to be established to make the difficult decisions—on an impartial basis—about where to provide EAS service.

The existence of leakage demonstrates passengers' willingness to bypass their local EAS service in favor of traveling to a larger airport that offers more flight options, more direct flights, and lower fares. Currently, to qualify for EAS service, a community must be at least 70 highway miles from the nearest medium- or large-hub airport. In previous reports we discussed the options of both increasing the 70-mile minimum qualifying distance and including small hubs in this criterion. For instance, DOT information shows three communities with EAS service are within 50 miles of a small-hub airport. As another approach to the same issue, DOT's fiscal year 2009 budget request proposed ranking EAS-subsidized communities in order of decreasing driving distance to their nearest large- or medium-hub airport and funding communities starting with the most isolated, continuing in that order until funding is exhausted; however, this language was not incorporated in the fiscal year 2009 appropriation and was not included in the fiscal year 2010 budget request. In addition, 13 of the 17 members of our expert panel who addressed this issue believed extending the qualifying distance from a hub airport above the current 70-mile minimum would make the EAS program more effective. Proposals to extend the minimum qualifying distance from an EAS community to the nearest hub airport, or to otherwise focus EAS program funding on the more remote communities, would allow the EAS program to serve communities with relatively poor transportation access, while accommodating increasing costs and subsidies in an environment of limited program funding.
Implementing one of these options would mean some communities that currently have EAS service would lose it, just as past changes in community eligibility requirements have led to some communities being dropped from the program. Also, some officials of community airports caution that basing eligibility on distance from a hub airport should consider local terrain and conditions—even the current 70 miles may not be a practical driving distance in mountainous terrain or where winter driving is hazardous.

The cost of the EAS program and the number of communities served have grown substantially in recent years, with the potential for more communities seeking service in the near future. Another proposed option is to cap participation in the program; essentially, the communities eligible for subsidy would be limited to those receiving subsidy as of a given date. Capping the program at the currently subsidized communities would help contain the program's total costs. The stable size of the program would make it easier for DOT to manage the program and make funding the program more predictable, while not expelling any community currently receiving benefits under the program. However, if a community that currently receives unsubsidized commercial air service were to lose that service, that community would not be able to get EAS-subsidized service if this change were implemented. Since communities historically have come into and gone out of the program, the decisions about who would be eligible for subsidies would be based on the effective date selected for this change.

Several of the proposed changes to the EAS program may help to address current concerns and enable the program to continue providing air service to communities. However, even with changes to the EAS program, some EAS communities would still have limited demand for the service, due to proximity to other airports or limited population. For such communities, other transportation modes might be more cost effective and practical than EAS service for connecting communities to the transportation network.

Our expert panel, in addition to considering changes to the EAS program that would make it more effective, also considered the potential offered by more fundamental changes to the federal government's approach to supporting intercity transportation for small communities. The 17 members of our panel who addressed this issue all believed that the EAS program needed substantive change to make it more effective in supporting small communities' access to the national transportation network, and that a multimodal approach to providing financial assistance for small community transportation could potentially be more responsive to communities' needs. GAO and others have also made proposals that would broaden the government's approach to small community transportation to include other transportation modes. Proposals include support for other types of transportation besides scheduled air service and other approaches to financial assistance besides subsidies to carriers. For example, as part of the Vision 100—Century of Aviation Reauthorization Act in 2003, Congress authorized a number of changes to the EAS program, including the Community and Regional Choice programs, which allowed DOT to provide financial assistance directly to communities to obtain air taxi service or pursue other transportation options. According to a DOT official, this program generated almost no interest from communities, perhaps because communities may believe that the air service they have under current law is better than the alternatives.
We have also proposed similar options that might enable the EAS program to provide less costly and more sustainable service, including better matching air service capacity with community needs by allowing the use of "on demand" service such as air taxis, and changing the carrier subsidies into local grants, thus allowing communities more flexibility to determine how to use the funds to best meet their needs. The previous administration's fiscal year 2009 budget request also proposed modifying EAS program service requirements to allow program funds to be used for air taxi or charter service, or ground transportation. Congress did not enact any changes in response to this proposal.

Most of the panel members thought that allowing the EAS program to fund other types of air service, such as air taxis, would make the program more effective. For communities with low passenger volume, this may be a more practical option than underutilized scheduled service. On-demand service could be more useful to some communities because flight departures would not be constrained by a limited schedule. Also, current EAS routes typically connect a community to just a single destination. On-demand service could still take community passengers to the hub, but it could also go to any airport within the range of the service's aircraft. These features could make air service more useful to the community, increase demand, and make the operation more commercially viable. However, current EAS statutes require scheduled service by carriers and would have to be revised by Congress to accommodate air-taxi-type services. Additionally, current commercial air taxi services are relatively expensive. It may be hard to predict what such a service would cost under EAS, or the level of subsidy it would require, until it is tried.

Alternatively, a community that cannot support EAS service within the subsidy limit might be better served through ground transportation. In many parts of the country, motorcoach companies and passenger rail already deliver passengers to large hub airports. For example, according to an American Bus Association official, motorcoach companies transport more than 2.5 million passengers annually from Maine, Vermont, and New Hampshire to Boston's Logan Airport. The official said that about half of the communities currently in the EAS program are also served by motorcoach companies, which in some cases even provide community-to-hub airport service that competes with EAS service. If a community cannot support air service even with an EAS subsidy, it may be able to support subsidized motorcoach or other ground transportation.

Experts on our panel, as well as others with whom we spoke, recognized that there will be difficulties if a multimodal approach to small community transportation is adopted. They noted that a multimodal approach to providing transportation assistance to small communities would likely face opposition from communities if they were to lose air service. In addition, it would create concerns about the potential source of funding, because current DOT funding is largely "stove-piped" through funds that support—and are financed by—specific transportation modes. For example, federal funding for airports and aviation primarily comes from the Airport and Airway Trust Fund, which is funded by several aviation-related excise taxes. Federal funding for highways is provided through the Highway Trust Fund, which is supported by motor fuel and other vehicle-related taxes.
Experts on our panel and others said a multimodal approach can also result in different transportation modes "competing" for funds, as advocates for the various transportation modes may oppose any change that is seen as diverting funds dedicated to one transportation mode to support another. Taking a multimodal approach to small community transportation will require creative approaches to address these concerns. Finally, some of the experts on our panel expected that a true multimodal approach to supporting small community transportation would require more federal funding than the EAS program alone provides.

Over the years, Congress has made incremental changes to the program, such as changing the eligibility criteria or funding; however, the program's approach remains little changed since it was implemented 30 years ago. Although Congress, the administration, GAO, and others have proposed potential changes to the EAS program, it is difficult for policymakers to determine which options to select, since different options for modifying the program might affect stakeholders such as airlines and community residents differently. For example, supporting increased use of smaller planes may increase the cost effectiveness of certain routes, but one industry association commented that this would penalize carriers that have made the investment in larger aircraft to satisfy current program requirements. In addition, as some of the panel experts and others recognize, these transportation decisions could become politicized. For example, a regional airport may make sense in certain geographic areas; however, no community would want to lose its local service, along with the assumed prestige and economic benefits, to another community.

Further, it is difficult to determine which option or suite of options to select, since stakeholders have different opinions on what the program is intended to achieve. When the program was established in 1978, it provided subsidized air service to communities that were receiving air service at the time and would have lost air service under deregulation, so in one sense, the program supports scheduled air service. However, the legislative history accompanying the Airline Deregulation Act also describes the program as supporting both connectivity to the national air transportation system and the growth and economic development of the communities served. These multiple program objectives make it difficult to assess which options to use. For example, if the objective is to continue providing air service to communities that were receiving air service at the time of deregulation, providing additional funding to cover expected cost increases and renegotiating contracts in response to cost increases like fuel prices could meet that objective. If the objective is to provide cost-effective air service, options such as allowing more flexibility for type of aircraft and service frequency or establishing regional airports might be appropriate. Or if the objective is to provide access to the national transportation system, perhaps a multimodal approach or focusing on the most remote communities might be better options. Changes in the aviation industry and the nation's financial situation over the past 30 years may make this an opportune time to revisit program objectives and evaluate design options for the program.
In 2005, we reported that federal deficits portended an economically unsustainable situation in the long term, making it incumbent upon the federal government to periodically re-examine programs to ensure they are able to meet current and future challenges. Certainly, the deficit picture has only grown more critical since then, as has the need for reviewing and updating federal programs to ensure their continued effectiveness. In our report 4 years ago, we developed several criteria designed to address whether existing programs are relevant to the challenges of the 21st century and to support making tough choices in setting priorities. These criteria relate to (1) having well-defined goals with direct links to an identified federal interest and role, (2) defining and measuring program success, (3) targeting benefits, and (4) affordability and cost effectiveness. These criteria, which could be used to re-examine the EAS program, are summarized below and discussed in more detail in appendix III.

The EAS program has multiple objectives, which are in some ways conflicting, contributing to a lack of clarity in the federal role. Revisiting the goals and objectives of the EAS program would help define the federal government's role in the program—that is, what the federal government should be doing and how it should be doing it. For example, defining the EAS program's objective as subsidizing scheduled commercial air service at communities that would not otherwise have air service, as the program has operated since it began, could lead to one program design and related performance measures addressing such factors as the number of communities with subsidized air service, the cost effectiveness of that service, and various measures of the quality of that service. However, identifying the objective of the program as providing rural and small communities with connectivity, including air service, to the national transportation network—which was also identified as an objective of the EAS program at the time it was enacted—could lead to defining a different set of options not limited to providing subsidized air service, but also considering multiple transportation modes. Supporting the broader objective of connectivity would also be consistent with DOT's Strategic Plan, which identifies global connectivity as one of the agency's strategic goals.

The performance measures that DOT has established for EAS relate to maintaining uninterrupted service at EAS-subsidized communities and the timeliness of processing agreements and making payments to carriers. Setting additional measurable targets for what the program is intended to accomplish would allow DOT to (1) assess the relative success of the program and (2) more effectively manage program resources toward achieving program goals or determine what level of resources is needed when the program is not achieving its objective.

Congress has modified eligibility criteria for the EAS program in the past. In 1978, the list of communities potentially eligible for EAS-subsidized service was established. In 1994, Congress added the requirement that a community must be at least 70 miles from the nearest medium- or large-hub airport to qualify for EAS service. Examining the criteria again, given changes in population and the air service industry, may help target the benefits of the program to those communities that have the least access to the national transportation system.
Analysis of the cost and affordability of the EAS program can support decisions that may need to be made about how and where to use existing program resources, or about whether options to revise the program are warranted. Given the trend of increasing carrier subsidies and the potential for more communities seeking EAS subsidies if they lose their unsubsidized service, it is important for policymakers to assess whether the EAS program is affordable and financially sustainable over the long term, given known trends and risks. Consolidating service from two or more closely located EAS communities at a single airport is one option that could make service more cost effective. Another option that has the potential to improve the cost effectiveness of EAS service for some communities would be to allow more latitude in determining the type of aircraft and flight schedules that would provide the level of service the community needs and can support. Finally, establishing a multimodal approach could provide cost-effective options for connecting people to the national transportation network.

Since the EAS program's basic design is 30 years old, policymakers may want to reconsider the characteristics of communities that are provided with federal transportation assistance. Reconsidering the design of federal programs—such as the EAS program—requires a variety of information, and methods exist that can help develop such critical data. For example, Geographic Information Systems (GIS) analysis can be used to evaluate community access to transportation—both to air service and to other modes. In general, GIS applications are tools in which varied geographic information is compiled to enable analyses based on the relationship of one element, such as communities, to another element, in this case, modes of transportation. These tools have become critical in the field of transportation planning and management over the past 30 years. Such analyses can be used to evaluate transportation options and help develop cost-of-service estimates.

We analyzed the access that different groups of communities have to the various transportation modes by mapping those communities along with the availability of the transportation modes. The goal was to take a fresh look at community access to transportation networks in the geographic context that exists today—a less rural society and potentially different transportation options than existed 30 years ago when the EAS program was conceived. Specifically, our analysis uses information on community demographics, access to transportation modes, and other relevant factors to illustrate how these key factors could be considered in developing an approach to ensuring access to air service or other modes of transportation. We examined the proximity of the selected communities—community selection depended on community size and distance from medium- or large-hub airports—to transportation modes. We selected communities that had a population of between 10,000 and 500,000 people and that were at least 90 miles from the nearest medium- or large-hub airport. It would have been possible to select different-sized communities or those that were either closer to or farther from a medium or large hub. For the selected communities, proximity to various types of airports, passenger rail stations, and entry ramps onto major highways was considered. This enabled comparisons across the communities as to their relative access to varied transportation modes.
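To illustrate the screening step this analysis describes, the sketch below applies the population and distance criteria to hypothetical community data. It uses straight-line (great-circle) distance as a stand-in for the highway mileage a full GIS analysis would compute; all coordinates, populations, and names are illustrative assumptions.

```python
# Minimal sketch of the community-screening step: keep communities with
# 10,000-500,000 residents located at least 90 miles from the nearest
# medium- or large-hub airport. Great-circle distance approximates the
# highway-mileage computation a full GIS analysis would perform.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Hypothetical inputs: (name, population, latitude, longitude).
communities = [("Community A", 25_000, 41.6, -93.6),
               ("Community B", 8_500, 44.0, -103.2)]
hub_airports = [(39.9, -104.7), (41.9, -87.9)]   # hypothetical medium/large hubs

def qualifies(pop, lat, lon, min_pop=10_000, max_pop=500_000, min_miles=90.0):
    nearest = min(great_circle_miles(lat, lon, h_lat, h_lon)
                  for h_lat, h_lon in hub_airports)
    return min_pop <= pop <= max_pop and nearest >= min_miles

selected = [name for name, pop, lat, lon in communities if qualifies(pop, lat, lon)]
print(selected)
```

A full analysis would then layer in, for each selected community, the proximity of passenger rail stations and highway entry ramps in the same way, as described above.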
In appendix IV we provide outcomes of the analyses we performed to illustrate how GIS analysis can be used to re-evaluate small community transportation options. This type of analysis might help determine the impact of the option to focus EAS assistance on communities that are most distant from alternative hub airports. Also, DOT's Bureau of Transportation Statistics has taken steps to identify the intermodal connectivity of the population of the United States. In 2005, it published work showing that in 2003 about 93 percent of rural residents lived within what DOT determined to be a reasonable coverage area of at least one of the four intercity public transportation modes (air, bus, rail, and ferry). The bureau acknowledged that this access may have diminished because of a recent reduction in Greyhound bus terminals and the loss of a portion of an Amtrak line. To get an even better idea of how connected the country is, DOT is continuing to work on an intermodal passenger connectivity project, which involves cataloging and geographically plotting all transportation facilities in the United States, indicating which modes serve these facilities, and developing a database of this information. While this is an ongoing project, the data DOT has available could provide an additional source of information with which to evaluate the extent to which certain communities are connected to the national transportation network.

In addition to GIS analysis, the tools and methods of benefit-cost analysis can be used to provide information on economic factors that may be useful in evaluating options. The cost of providing subsidized service to communities may vary considerably depending on the communities' location or the type of service provided. Developing data to better understand these tradeoffs would help policymakers design the most appropriate program for the current circumstances. For example, estimates of program costs across various alternative modes, and of the value these services provide to communities, could help to ensure that programs are designed to use funds in the most beneficial way. Specifically, generating information on the expected demand for transportation services from communities could help stakeholders better understand the value citizens gain from having access to service across various modes.

The Government Performance and Results Act of 1993 requires executive agencies to develop a long-term strategic plan, prepare annual performance plans, and measure progress toward the achievement of the goals described in the plans. The annual performance plans should establish the connections between the long-term goals outlined in the agency's strategic plan and the day-to-day activities of managers and staff. In addition, the goals and measures in the plans should address program results and how programs help the agency progress toward its strategic goals. EAS program performance is difficult to assess beyond providing air service to eligible communities, because DOT does not have performance measures that demonstrate the extent to which the program is contributing toward DOT's strategic goals of connectivity or congestion reduction—the strategic goal under which the EAS program is located. Further, the Office of Management and Budget most recently evaluated the EAS program under its Program Assessment Rating Tool in 2006 and found the program does not have enough long-term performance measures that focus on outcomes and meaningfully reflect the purpose of the program.
The EAS program’s current annual performance measures include one long-term measure that addresses program performance in a specific way—maintaining continuous air service at 98 percent of eligible communities. Other measures relate to administrative activities, including: (1) the percentage of renewal agreements that are established before the existing agreement expires, (2) the percentage of new agreements processed within 160 days of carriers’ notices to suspend services, and (3) the percentage of payments to carriers that are processed within 15 business days. In 2007, the most recent year DOT published information on its performance in these areas, DOT exceeded its goals for the percentage of new agreements processed within 160 days and renewal agreements established before the existing agreement expires. DOT nearly met its goal for processing payments within 15 business days, and did not meet its goal for maintaining continuous air service at 98 percent of eligible, subsidized communities. DOT’s single long-term performance measure—maintaining continuous air service at 98 percent of eligible communities—- reflects an important aspect of program operations. But additional performance measures, addressing other aspects of program performance, could provide a broader perspective on how the EAS program contributes to DOT’s strategic goals. For many communities, the EAS program provides a valuable connection to the national transportation network. Many EAS routes carry 10,000 or more passengers per year. However, low passenger volume and high subsidies remain the norm for many EAS communities. Changes in the air service industry, including the growth of air travel alternatives provided by low-cost carriers, have changed the environment in which the EAS program operates. However, some legislative EAS program requirements, and the growing cost to operate aircraft for EAS service, contribute to the program’s inability to maintain service to EAS communities. Further, rural population shifts since deregulation, and continuing passenger leakage away from small airports with EAS service combine to limit passenger ridership on EAS flights. These factors contribute to the continuing financial strain on the EAS program which brings its long-term viability into question. A re-examination of the EAS program, assessing options to make the program more sustainable and effective, and the development of performance measures to monitor program performance, may be warranted. Many options to help address the problems and limitations the current program faces exist. However, making these decisions is difficult; and Congress has yet to implement any of these options. These decisions are difficult because no one option may work for all communities. Options to change the program requirements might be necessary to sustain EAS. Further, in some locations it might be beneficial to study air taxi and multi-modal approaches to ensuring small and rural communities are connected to the national transportation network. Finally, if decisions are reached to revise the program design, steps should be taken to implement and monitor the program. For example, if the program design is to be revised the legislation governing the program would need to be revised accordingly. In addition, additional performance measures to evaluate the program may need to be developed. 
In light of developments related to population shifts, the aviation industry, and the national transportation infrastructure, Congress should consider re-examining the program's objectives and related statutory requirements and seek information from DOT as needed to support this effort. Such a re-examination could include (1) consideration of the rationale behind existing statutory requirements, such as those for 15-seat, 2-engine, 2-pilot aircraft in EAS service; (2) the possibility of providing greater flexibility as to plane size, frequency of service, eligible communities, or regionalization of service; and (3) the possibility of assessing multimodal solutions for communities.

We are recommending that the Secretary of Transportation

1. Evaluate the reasonableness of providing transportation service through unscheduled air service or surface modes of transportation when these alternatives might better serve communities than current scheduled EAS service, and evaluate DOT's current practices for carrier agreements, including the 2-year duration of agreements and the practice of not renegotiating subsidy amounts in response to quantifiable cost increases.

2. Once decisions are made about any changes to the EAS program, determine whether additional performance measures are needed to evaluate program outcomes.

We provided a draft of this report to DOT for its review and comment. DOT provided technical comments in an e-mail message on July 6, 2009, which we incorporated into this report as appropriate. In reviewing our original recommendation calling for additional performance measures for the EAS program, DOT officials indicated that some performance measures were already in use, and said that they also monitor other performance data, such as passengers served. They acknowledged that additional performance measures would support operational improvement, and stated that they would determine those measures as needed. We believe that the implementation of any changes to the EAS program—or to how the EAS program is used to provide communities with access—that result from congressional or DOT action would warrant consideration of additional performance measures. As a result of DOT's comments and the possibility of changes to the program, we modified our original recommendation. DOT concurred with our revised recommendations.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to appropriate congressional committees, to the Secretary of Transportation, and to appropriate officials within the Office of the Secretary. We will also make copies available to others upon request, and the report will be available at no charge on the GAO Web site at www.gao.gov.

If you have any questions about this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Gerald L. Dillingham, Ph.D.

To describe the status of the Essential Air Service (EAS) program, we reviewed Department of Transportation (DOT) data on the EAS program, DOT's agreements with airlines to provide service, and financial data for the program and selected airlines. We also reviewed relevant studies and interviewed industry experts.
Our review focused on communities within the 48 states of the continental United States that have received EAS-subsidized service, because the requirements for communities in Alaska differ from those for communities in other states. In addition, EAS-subsidized service outside the contiguous states is not representative of the program in the rest of the United States.

We obtained DOT data that represented the characteristics and current status of the program at specific points in time in order to describe trends in EAS service. We obtained DOT data from 2003 through early 2009 on the number of communities served by the EAS program, the subsidies awarded airlines to serve these communities, and the passengers enplaned on EAS flights. We selected 2003 as our base year because that was the first full year DOT required carriers to file air traffic activity in a uniform reporting system. DOT provided the information about EAS communities, associated subsidies, and carrier enplanements in a series of Excel schedules. The schedules document EAS service only as of specific dates and therefore do not represent a continuous picture of service provided under the EAS program. To assess the reliability of the community and subsidy information in the schedules, we selected a random sample of the subsidy award information in the schedules and traced the information back to the DOT order in which DOT officially announced its agreement with a carrier to serve an EAS route. DOT issues its orders via its docket, accessible at www.regulations.gov. However, we could not assess the reliability of the carriers' enplanement data in the schedules. To do so would have required a comprehensive review of DOT orders to identify the carrier serving each route, the destination hub, when the carrier initiated service on each route, and when the carrier either suspended or terminated service. Because the schedules do not represent a continuous picture of service provided under the EAS program, our review of DOT orders would also have been incomplete. In addition, during the course of our review, we found that, apart from the information DOT provided in the schedules, we could not develop trend information from available DOT data on the passengers who board (enplane) subsidized EAS flights or on the agreed-upon subsidies for those flights.

We also obtained relevant financial data for the EAS program, including appropriations and expenditures data. We reviewed relevant legislation to verify the appropriations information but did not have sufficient information to validate the expenditures data. We also obtained data documenting fuel use and cost in 2007 and 2008 for selected airlines from OAG BACK Aviation Solutions, a private contractor that provides online access to U.S. financial, operational, and passenger data with a query-based user interface. FAA does not require smaller airlines to file information on fuel use and cost, so we could extract fuel data only for certain larger airlines providing EAS service. We also compared fare data for routes involving EAS flights with fares on comparable unsubsidized routes, to assess how EAS fares compared to unsubsidized fares. We conducted a literature search to obtain research studies that examine the role of air service in the economic development of small communities and their connections to the national transportation network.
Where applicable, the research and studies were reviewed by a GAO economist to determine whether they were sufficiently reliable for our purposes. We also reviewed previous reports and studies of the EAS program, including previous GAO, DOT, and other federal agency reports. We reviewed studies about the national transportation network and how rural communities connect to this network, reports on the rationale for the EAS program, and legislation that established and extended the program. We reviewed relevant regulations and legislation to obtain information on EAS program criteria and the requirements for communities to be eligible for subsidized service under the EAS program. Finally, we conducted interviews with DOT officials, industry associations and consultants, airlines and community airports, local governments, and other relevant officials.

To identify the factors affecting DOT's ability to provide service to communities, we reviewed relevant literature, including previous GAO reports as well as other studies of the EAS program and air service to small communities. We identified the factors that limit the capacity of the EAS program to provide subsidized service to communities. We also examined the literature to identify limitations inherent in small communities, aviation industry trends, and the EAS program itself. We also analyzed data on fares charged for EAS flights.

We held a panel discussion attended by 19 experts on small community air service, including airline officials, current and former EAS program administrators, economists, other transportation providers, and state and local officials. We surveyed these experts and discussed with them the factors affecting the EAS program and options for providing connectivity to small communities across the country, including (1) the challenges facing air service to communities, (2) the role of the federal government in supporting communities' access to the national transportation network, and (3) the federal government's options for supporting small community transportation. We composed this panel of experts representing different types of stakeholders in the EAS program, including program officials. Thus, although individual panel members were not independent, the panel as a whole was balanced for our purposes. See appendix II for a summary of panel responses to questions we submitted to them, as well as a list of the panel participants.

We reviewed Geographic Information Systems (GIS) and Bureau of the Census information, as well as data from other sources, to examine the extent to which the rural and small community population has shifted in the 30 years since the EAS program began. We identified areas where the population has grown as well as areas where the population has decreased. Further, we examined the extent to which selected rural areas are connected to the national transportation network. See appendix IV for further information.

We identified options for improving the EAS program through a review of previous GAO reports and discussions with officials from DOT and industry associations as well as industry consultants. We also identified options in proposed legislation that would affect the EAS program. We discussed these options with our expert panel, industry and program representatives, community officials, and other experts to obtain their views on the viability and feasibility of the options for providing assistance to remote communities and increasing their connectivity to the national transportation network.
For example, a national association of airports sent questions we developed to seven of their member airports about their experiences and views of the EAS program and forwarded their responses to us. To identify tools that may help DOT to re-examine and assess the performance of the EAS program, we reviewed literature that discussed options for improving the EAS program as well as GAO reports that discuss methods for re-examining federal programs in light of budget limitations. We reviewed previous GAO reports that discuss our re-examination framework to determine how such a framework could aid DOT in clarifying the strategic goals and options for the EAS program. We further examined DOT's EAS program data and current performance measures in light of their usefulness for monitoring and managing the program. We conducted this performance audit from March 2008 through July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 2 summarizes responses provided by the members of our expert panel to the questionnaire we administered during the panel sessions. A listing of the panel members follows the summary of questionnaire responses. For each ranking question, panelists ranked the three most significant factors, from 1 to 3.

Part 1: Challenges to providing air service under the EAS program

1) Which category of challenges is the most significant, in terms of its impact on carriers' ability to provide air service under the EAS program? (19 panel members addressed this question)
- Challenges in serving the small community market.
- Challenges in the air service industry environment.
- All of the above are equally important.

2) Which of the following aspects of providing and sustaining air service to small community markets present the biggest challenges? (19 panelists responded to this question)
- Small populations limit the market of potential passengers.
- Limited community business activity limits the market of potential business passengers.
- Rural and small community populations have shifted in the 30 years since deregulation—the EAS program may not be serving communities that have the greatest need for subsidized air service, in terms of other transportation options they may have.
- EAS carriers may do insufficient marketing, so that local residents are unaware of service.
- "Leakage," as small community residents bypass their local airports and use other options for travel.
- "Prop avoidance," or travelers' reluctance to fly in the smaller turboprop aircraft that serve small airports.
- Inadequate financial support, or other commitment, for EAS service from local government or the business community.
- Inadequate federal funding for the EAS program.

3) What changes in the air service industry environment since deregulation have been the biggest challenges to small community air service, including EAS service? (18 panel members addressed this question)
- Major carriers shifting to a hub-and-spoke route structure.
- The expansion of low-cost carriers, creating more opportunities for small community residents to bypass their local airport in favor of lower fares at another airport.
- Decreasing availability of the 19-seat turboprop aircraft used most often by EAS carriers.
EAS carriers’ difficulty in obtaining code share agreements with larger carriers that would allow passengers to book connecting flights on those carriers as part of the same trip. Lack of interline arrangements with larger carriers that would allow passengers to check bags to their final destination. Congestion at hub airports, with fewer slots available for small carriers. The growth in business owned or leased aircraft, reducing the need for commercial business travel. Increased post-9/11 security requirements at small airports. 4) What EAS program requirements represent the biggest challenges to providing and sustaining air service to small communities under the EAS program? (16 panel members addressed this question) The $200 per passenger subsidy cap (for communities less than 210 miles from a medium or large airport). The EAS program mandates using 15-seat or larger aircraft. Minimum service requirements of two daily round trips, six days a week. Two-year EAS agreements are too short. No built-in agreement provisions for renegotiating subsidies to reflect rising costs (other than carriers filing a notice to terminate service, in order to negotiate a higher subsidy). Insufficient profit margins (5 percent of operating expenses) allowed by the program. Eligibility criteria—that a community must have had service at the time of deregulation—-has not changed since 1978. Part 2: The role of the federal government in supporting small communities’ access to the national transportation network 1.) Should it be the federal government’s role to provide financial assistance to support small communities’ connection to the national transportation network? Check One. (17 panel members responded to this question) 2) If the federal government should support small community transportation, what is the primary reason for doing so? Check one. (17 panel members responded to this question.) Supporting economic sustainability or growth in those communities. Supporting those communities’ connection to the national transportation network. Both of the above are equally important. 3) Should there be performance goals, or measures of success, established for DOT to meet in carrying out transportation assistance programs, such as the EAS program? Check one. (17 panel members responded to this question.) 4) What performance standards and measurable goals could be established for the EAS program? Check as many that apply. (16 panel members responded to this question.) Standards for access to the national transportation system. Standards for community economic development. 5) In general, do you believe the federal government should prioritize the relative transportation needs for communities, for the purpose of deciding which communities get federal funding? Check One. (16 panel members responded to this question.) 6) Do you believe a system for assessing communities’ relative need for transportation, such as the methodology described by GAO, would be useful for targeting federal transportation assistance to small communities? Check One. (17 panel members responded to this question) Part 3: What are the federal government’s options for supporting small communities’ access to the national transportation network? as 1, 2, or 3 1) Are there any EAS program criteria or requirements that should be revised to make the program more effective in supporting economic development and connectivity in the communities served? Check One. 
(17 panel members responded to this question)

2) If so, what changes might make the program more effective? Rank the three most significant, from 1 to 3. (17 panel members responded to this question)
- Increase the passenger subsidy cap from $200.
- Award EAS agreements for longer time periods (e.g., 5 years).
- Allow agreements to be renegotiated in response to rising costs.
- Change criteria to focus program resources on more remote communities (i.e., increase the minimum 70-mile distance from a medium or large hub for a community to qualify).
- Give carriers more flexibility to use smaller aircraft.
- Give carriers more flexibility to provide less frequent service.
- Require carriers to commit funding to local marketing for EAS service.
- Require carriers to have code share agreements with large carriers at destination hubs, to obtain an EAS agreement.
- Require carriers to have interline agreements with larger carriers, to obtain an EAS agreement.

3) Does the EAS program need more substantive change or restructuring to make it more effective in supporting small communities' access to the national transportation network? Check one. (17 panel members responded to this question)

4) If so, what changes would make the program more effective? Rank the three most significant, from 1 to 3. (17 panel members responded to this question)
- Open the program to more communities by dropping the requirement that a community must have had air service at the time of deregulation in order to qualify for subsidized service.
- Allow the program to subsidize other types of air service, such as air taxi service, as an alternative to regularly scheduled air service.
- Give eligible communities the option of getting a grant, in lieu of EAS service, which can be used to obtain other transportation (e.g., subsidizing air taxi or ground transportation).
- Require local or state matching funding equal to some percentage of the federal funding.
- Base continued financial assistance upon meeting minimum performance standards, or other measures of success.
- Limit the number of years that a community can receive subsidized service under the program.

5) What would be the benefits, if any, of the federal government taking a multi-modal approach to providing financial assistance to small community transportation? Check as many as apply. (17 panel members responded to this question)
- Potentially more responsive to individual community needs.
- Potentially a better return in terms of useful services provided for the level of federal investment.
- May promote local and regional transportation planning.

6) What would be the costs or trade-offs, if any, of the federal government taking a multi-modal approach to providing financial assistance to small community transportation? Check as many as apply. (15 panel members responded to this question)
- Would require increased federal funding to be effective.
- Funding may be diverted away from the EAS program.
- Transportation modes will be competing against each other for funding; decisions on how funding is used will become increasingly politicized.
- There would be no added costs or trade-offs.

In 2005, we reported that federal deficits portended an economically unsustainable situation in the long term, making it incumbent upon the federal government to periodically re-examine programs to assure that they are able to meet current and future challenges. Many current federal programs and policies were designed decades ago to respond to trends and challenges that existed at the time of their creation.
Much has changed since then. Therefore, we developed criteria for policymakers to consider as they address emerging needs by weeding out programs and policies that are outdated and ineffective and updating existing programs that are still relevant. We framed the criteria as questions designed to address the legislative basis for the program, its purpose and continued relevance, its effectiveness in achieving goals and outcomes, its efficiency and targeting, its affordability, its sustainability, and its management. We used these criteria to generate specific 21st century questions about those programs and priorities already identified. The resultant 21st century questions illustrate the kinds of issues that a re-examination and review initiative needs to address:
- Does it relate to an issue of nationwide interest? If so, is a federal role warranted based on the likely failure of private markets or state and local governments to address the underlying problem or concern? Does it encourage or discourage these other sectors from investing their own resources to address the problem?
- Have there been significant changes in the country or the world that relate to the reason for initiating it? If the answer is "yes," should the activity be changed or terminated, and if so, how? If it is unclear whether such changes make it no longer necessary, then ask, when, if ever, will there no longer be a need for a federal role? In addition, ask, "Would we enact it the same way if we were starting over today?"
- Has it been subject to comprehensive review, reassessment, and re-prioritization by a qualified and independent entity? If so, when? Have there been significant changes since then? If so, is another review called for?
- Is the current mission fully consistent with the initial or updated statutory mission (e.g., no significant mission creep or morphing)? Is the program, policy, function, or activity a direct result of specific legislation?
- How does it measure success? Are the measures reasonable and consistent with the applicable statutory purpose? Are the measures outcome based, and are all applicable costs and benefits being considered? If not, what is being done to do so? If there are outcome-based measures, how successful is it based on these measures?
- Is it well targeted to those with the greatest needs and the least capacity to meet those needs?
- Is it affordable and financially sustainable over the longer term, given known cost trends, risks, and future fiscal imbalances?
- Is it using the most cost-effective or net beneficial approaches when compared to other tools and program designs?
- What would be the likely consequences of eliminating the program, policy, function, or activity? What would be the likely implications if its total funding were cut by 25 percent?
When taken together, these questions can usefully illustrate the breadth of issues that can be addressed through a systematic re-examination process.

This appendix provides an overview of the GIS analyses we conducted of community access to the transportation network. In this appendix we discuss (1) the motivation for the analysis, (2) some key societal and industry factors that have changed since deregulation, (3) how we generated the set of communities for examination, (4) how an index measuring "access" was defined, and (5) results for communities' access to airports, Amtrak, and major roads.
It has been approximately 30 years since the EAS program was developed as part of the deregulation of the airline industry in 1978. The program had the particular goal of ensuring that communities that had commercial airline service in the regulated era retained that service even if the newly deregulated airlines chose not to provide service to some of those locations. Given that goal, the communities that were eligible for the program were essentially those that had had airline service in 1978. Thirty years later, much has changed in the industry and in the country. The country has experienced demographic shifts, automobiles are of better quality, and the airline industry has continually restructured itself. If the EAS program—or any program that promotes access to the national transportation network—is to be re-examined, consideration of these developments is warranted. The goal of our analysis is to use information on community demographics, access to transportation modes, and other relevant factors to provide illustrations of how these key factors could be considered in developing an approach to ensuring access to air service or other modes of transportation. Our intent is not to point to any particular program structure, but rather to illuminate the type of information that can be brought forth to help policymakers answer such re-examination questions. Throughout the 20th century, a significant degree of urbanization occurred as people moved out of rural areas and into cities and their suburbs. Although much of this migration occurred during the early and middle parts of the 20th century, the trend has continued. Figure 8 illustrates how rural areas, especially in the Midwest and Great Plains states, lost population between 1980 and 2007. This migration left areas of the country less densely populated than they were 30 years ago when the EAS program was initiated. To the extent that the provision of unsubsidized commercial air service is a function of the size of the local market, information on the shifting settlement patterns might be a useful input into a re-examination of a transportation access program. Along with demographic shifts, the airline industry has changed since deregulation. Airlines have continually restructured their route networks, fleet mixes, and pricing structures. New airlines with varied business plans have entered the industry, some airlines have exited the industry (sometimes through bankruptcy), airlines have formed alliances, and the manner in which airlines meet in the marketplace and compete has been dynamic. One of the most significant elements of the industry's development has been the entry and growth of low-cost carriers over the past decade. These carriers developed different route networks than the so-called "legacy" carriers, used different pricing structures, and generally charged lower fares. Evidence suggests that, to obtain lower fares, passengers are often willing to drive to a distant airport where a low-cost carrier offers service. This availability may thus have created new travel options for residents of remote communities. As noted above, our goal was to evaluate current community access to air and other transportation modes. Here, we define access as the point at which the traveler begins her journey on an airplane, on an interstate highway, or on an intercity passenger train. Because travelers from any given community could be going anywhere in the world, we do not assess access relative to reaching any particular destination.
That could be done, say, with respect to travel to a major medical facility, and could be appropriate depending on how the transportation needs of a community are framed. Our intent here is to show, in the most general way, how geospatial analysis is a useful analytical tool for analyzing EAS or any other program that aims to provide access to the national transportation network. To allow comparison of communities' access to transportation modes, we made a number of informed but ultimately arbitrary assumptions about what size communities to include and how to define access to commercial air service in terms of distance to an air embarkation point. An advantage of geospatial analysis is that these thresholds may be easily varied to determine the sensitivity of the results to different assumptions. The analysis we describe illustrates the potential of this approach for understanding access. Specifically, because we know that settlement patterns have shifted since the inception of the EAS program, we examine communities in a contemporary setting. In particular, we considered all urbanized areas—based on the most recent Census information—in the lower 48 states; there are 3,569 urbanized areas. At deregulation in 1978, Congress was specific about which communities would be eligible for subsidies to ensure continuation of scheduled air service—the communities were those that had or were eligible for scheduled air service under the Civil Aeronautics Board's regulatory regime when the industry was deregulated and airlines were given the ability to choose what routes they would fly. In today's setting, the underlying concept of which communities should be ensured service might translate as a concern about the vulnerability of communities to loss of commercial air service or an inability of communities to attract commercial air service. So, for our analysis, we asked, "Which communities are most likely to encounter difficulty attracting, retaining, or expanding air service?" We did not consider those with fewer than 10,000 people, based on the assumption that it would not be feasible, in terms of the federal budget or airline operating capacity, to extend service to many relatively small places. The remaining 1,284 communities, those with populations between 10,000 and less than 500,000, include 36 percent of all urbanized areas and account for about 25 percent of the U.S. population. Within this group of 1,284 communities, there are those that can be considered relatively close to an airport of considerable size, defined as a medium or large hub. While the EAS program uses a 70-mile criterion for that element of eligibility, we ran an analysis using 90 highway miles; this increase in distance was motivated by the general improvement of automobiles over the past 30 years. With this threshold in place, the number of urbanized areas in our base-case analysis dropped to 727. Across the 727 communities in the group of interest (with population between 10,000 and less than 500,000 and more than 90 miles from a medium or large hub), there is variation in access to scheduled commercial air service; the screening steps are sketched in the example below. Some of these communities may be close to small air hubs or have some less frequent commercial service. Others may not have an airfield at all. However, because communities in this set are all distant from the busiest air hubs, their access to air transport is vulnerable to reductions or elimination of nearby commercial service or is precluded by their inability to retain or to attract service at all.
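As a rough illustration, the screening reduces to two filters over the Census list of urbanized areas. This is a minimal sketch, assuming a hypothetical input table with each area's population and precomputed highway miles to the nearest medium- or large-hub airport; the thresholds are the ones described above and can be varied to test sensitivity.

    import pandas as pd

    # Hypothetical input: one row per urbanized area in the lower 48
    # states (3,569 areas), with population and highway miles to the
    # nearest medium- or large-hub airport computed in GIS.
    areas = pd.read_csv("urbanized_areas.csv")  # columns: name, population, hwy_miles_to_hub

    # Filter 1: keep communities with 10,000 to just under 500,000
    # residents (1,284 areas in the base case).
    candidates = areas[(areas["population"] >= 10_000) &
                       (areas["population"] < 500_000)]

    # Filter 2: drop communities within 90 highway miles of a medium-
    # or large-hub airport, leaving the 727 communities of interest.
    remote = candidates[candidates["hwy_miles_to_hub"] > 90]
    print(len(remote))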
Relative to other communities, then, their access to medium- and large-hub airports may be compromised by their remoteness. While the community group of interest has been defined with respect to access to air service, we want to describe the range of travel options available to travelers. So, we consider community access to the interstate highway system and to passenger rail service as well as to airline travel. As with air service, access is defined in terms of driving distance: distance to an on-ramp for interstate access and, for passenger rail, distance to an Amtrak passenger station or to a bus link to a passenger station. Interstate access may mean travel by car or by bus, but that distinction is not made in our analysis because we did not have ready access to bus schedules for the 727 communities. In addition, we did not make any distinctions regarding level of service, including time of day or frequency. For example, if Amtrak stops at a community at 3:00 a.m., this clearly affects access, but we did not consider that limitation. Similarly, service at some medium hubs may not be considered very extensive in terms of the number of places one can travel to on a nonstop flight. Geospatial analysis allows us to compute distances to access points for each community, and we can use those distances to measure and compare communities' access to one or multiple modes. We constructed a set of simple indices that characterize each community's access to air, highway, and/or rail service relative to the other communities. For each mode, a community's index is its own distance to that mode divided by the average distance to the mode across the 727 communities, multiplied by 100 (illustrated in the sketch below). If a community has an index value of 100 for its access to air transportation, its distance from a medium- or large-hub airport is average among the communities evaluated. A higher index value signifies a community more remote than average, and a lower index value signifies a community nearer to that mode than average. Figure 9 shows the 727 communities' access to medium- or large-hub airports as measured by this index. Communities denoted with triangles are farther from a medium- or large-hub airport than is average for the set of communities, and communities denoted with circles are closer to such an airport than is average for the set of communities. We found that the average distance from a medium- or large-hub airport was 173 miles. Of these 727 communities, 454 were within 173 miles and 273 were farther than 173 miles, some as much as 682 miles away. As can be seen, communities that are more remote than the average of 173 miles from air transportation are found mainly in the Intermountain West, the Plains states, the Mississippi Delta, and Appalachia. Comparing this result with the map documenting shifts in population shows that these areas are also the ones that generally experienced population declines between 1980 and 2007. Figures 10 and 11 show the same communities' relative access to highways and passenger rail, as represented by index values. Considering access to the interstate highway system (figure 10), for these 727 communities, the average distance to an on-ramp is 33 miles. Sixty-five percent are within 33 miles (circles), with the other 35 percent (triangles) more than 33 miles away, some as many as 335 miles away. Again, those farthest from the interstates, in the Plains especially, are also areas that have experienced population loss.
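As a worked illustration of the index, the sketch below computes the per-mode index values and, looking ahead to the cross-mode combination used for figure 12, an equally weighted composite. The distance columns are hypothetical placeholders; in our analysis the distances came from the GIS computations described above, and different mode weights could be substituted.

    import pandas as pd

    # Hypothetical distances (driving miles) for the 727 communities to
    # each mode's nearest access point: a medium- or large-hub airport,
    # an interstate on-ramp, and an Amtrak station or bus link.
    df = pd.read_csv("remote_communities.csv")  # columns: name, air_mi, hwy_mi, rail_mi

    for mode in ["air_mi", "hwy_mi", "rail_mi"]:
        # Index of 100 = average remoteness for the group; higher means
        # more remote than average, lower means closer than average.
        df[mode + "_index"] = 100 * df[mode] / df[mode].mean()

    # Equal-weight composite across modes (figure 12); other weights,
    # e.g., emphasizing air access, could be substituted here.
    weights = {"air_mi_index": 1/3, "hwy_mi_index": 1/3, "rail_mi_index": 1/3}
    df["composite_index"] = sum(w * df[c] for c, w in weights.items())

    # List communities that are more remote than average overall.
    print(df[df["composite_index"] > 100]
          .sort_values("composite_index", ascending=False))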
For access to passenger rail (figure 11), the patterns are similar to those for access to the interstate. To obtain a perspective on the communities that are the most remote, for figure 12 the index is calculated to characterize access across modes. Equal weight is given to each mode, but it is clearly possible to apply different weights to the separate modes' index values, reflecting greater emphasis on access to one mode (say, air) versus another. Here again, 60 percent of the 727 communities have better-than-average access to the transportation network (via any mode), while 40 percent are relatively remote. The fact that some communities' index values are very large demonstrates the heterogeneity in access across the 727 communities, suggesting very different degrees of remoteness even among communities that are distant from medium- or large-hub airports. Because both distance and population density matter in the provision of transportation services, this heterogeneity will figure importantly in weighing the costs and benefits of supporting or subsidizing access to the transportation network. Our analysis identified 727 communities, with populations of 10,000 to less than 500,000, that are alike in that they do not have ready access to the nation's busiest medium- and large-hub airports. We then calculated index values that allowed us to characterize the extent of remoteness from interstate highways and passenger rail stops. Different criteria will produce different groupings of communities in terms of how connected they are to the national transportation system. Supplementing the information provided by the index values with knowledge about actual levels of air service (at small hubs or airfields) and about bus and rail service would provide a frame for considering transportation policy goals. One way this analysis can be useful in considering the EAS program specifically is to ask which of these 727 communities are currently served by EAS (defined as being located within 40 miles of an EAS airport). And which EAS communities are not included among the 727, that is, how many EAS communities are in proximity to the nation's busiest air hubs or have fewer than 10,000 residents? Figure 13 shows that about 17 percent (123) of these communities have EAS service. Recognizing the changes in the structure of the airline industry, a community's proximity to an airport served by a low-cost carrier might be another way of characterizing access to the air transport network. In figure 14, we identify which of the 727 communities are within 150 driving miles of such an airport. Here, we find that 92 percent have access to a low-cost carrier at an airport outside their communities. With respect to the EAS program as it exists today, our analysis suggests that there is heterogeneity across the communities that currently have EAS service in terms of size and distance to the nation's busiest airports or airports served by low-cost carriers. And it suggests that there are other communities whose relatively limited access to the air transport network might warrant consideration of alternatives for connection to the nation's transportation network, whether by air, road, or rail. In addition to the person named above, Cathy Colwell, Assistant Director; Amy Abramowitz; Richard Brown; Colin Fallon; David Hooper; Don Kittler; Hannah Laufe; John Mingus; Susan Offutt; and Bonnie Pignatiello Leer made key contributions to this report.
Since 1978, the Essential Air Service (EAS) program has subsidized air service to eligible communities that would otherwise not have scheduled service. The cost of this program has risen as the number of communities being served and the subsidies to air carriers have increased. At the same time, the number of carriers providing EAS service has declined. Given continuing concerns over the EAS program's long-term prospects, GAO was asked to review the program. GAO reviewed (1) the characteristics and current status of the EAS program, (2) factors affecting the program's ability to provide air service, (3) options for revising the program, and (4) tools for assessing the program, the options for its revision, and the program's performance. GAO interviewed stakeholders and reviewed the results of an expert panel convened by GAO, Department of Transportation (DOT) data and program documentation, and potential methodologies for assessing federal programs. The EAS program has changed relatively little in 30 years, but current conditions raise concerns about whether the program can continue to operate as it has. Over the past 2 years, subsidies to carriers have been increasing, along with EAS program obligations to fund those subsidies. In response, the administration is requesting $175 million for the EAS program in fiscal year 2010, a $50 million increase over recent funding levels. At the same time, the number of carriers providing subsidized air service is declining, from 34 in 1987 to 10 in 2009. More than one-third of the EAS-supported communities temporarily lost service in 2008, when 3 carriers ceased operations. Several factors contribute to the increasing difficulty in providing subsidized air service. The EAS program has statutory requirements for minimum aircraft size and frequency of flights, effectively requiring carriers to provide service that may not be "right-sized" for some small markets. Also, the growth of air service, especially by low-cost carriers, which today serve most U.S. hub airports, weighed against the relatively high fares and inconvenience of EAS flights, can lead people to bypass EAS flights and drive to hub airports. Moreover, the continued urbanization of the United States may have eroded the potential passenger base in some small and rural EAS communities. While Congress, DOT, GAO, and others have proposed various revisions to the EAS program, Congress has not authorized many changes to program requirements. Proposed Federal Aviation Administration reauthorization legislation would include performance-based incentives, among other changes. GAO and others have suggested increasing flexibility and other changes that could make EAS service more sustainable for smaller communities. Finally, the members of an expert panel organized by GAO all believed that small and rural communities would benefit from a multimodal approach to transportation. Generally, they believed that other modes of transportation could be more responsive to communities' transportation needs in some cases. Although it is difficult to select options for the EAS program since stakeholders do not always agree on program objectives, certain analytical tools can help policymakers assess the EAS program. These tools include a re-examination framework to revisit the program's objectives and help evaluate options to make the program more effective.
Another such tool is an approach GAO developed that, for a sample of small and rural communities, identified their access to different modes of transportation. This approach has the potential for broader application to examinations of communities' access to the national transportation network. Finally, once a change is implemented, performance measures can be used to periodically evaluate program effectiveness.
Inhaling excessive amounts of coal mine dust can cause CWP and other debilitating lung diseases, including chronic obstructive pulmonary disease, which encompasses chronic bronchitis and emphysema. According to NIOSH, it usually takes about 10 to 15 years of exposure to coal mine dust to develop CWP, although cases involving fewer years of exposure have been observed. Once contracted, CWP cannot be cured, making it critical to prevent the development of this disease by limiting miners' exposure to coal mine dust. MSHA is responsible for protecting miners by enforcing the provisions of the Federal Mine Safety and Health Act of 1977 (Mine Act), as amended. Under this law, MSHA has a number of responsibilities, including setting new safety and health standards and revising existing standards, approving training programs for mine workers, and developing regulations regarding training requirements for rescue teams, among other things. MSHA also conducts periodic inspections of coal mines and, along with coal mine operators, periodically collects samples of coal mine dust to determine compliance with the exposure limit. MSHA set the current exposure limit for coal mine dust at 2.0 mg/m³. This limit applies to the overall level of dust in the mine environment; specifically, it provides that each mine operator "shall continuously maintain the average concentration of respirable dust in the mine atmosphere during each shift to which each miner in the active workings of each mine is exposed" at or below that level. To measure the level of dust in the mine environment, MSHA requires that mine operators collect samples of dust in specific areas of the mine and for designated occupations. Designated occupations are those that have the greatest concentration of coal mine dust, as determined through MSHA sampling. NIOSH shares some responsibility with MSHA for improving mine safety and protecting miners' health. For example, NIOSH conducts research on the causes of work-related diseases and injuries; researches, develops, and tests new technologies and equipment designed to improve mine safety; and recommends occupational safety and health standards, such as the exposure limit for coal mine dust. NIOSH also administers the Coal Workers' X-ray Surveillance Program—a medical monitoring and surveillance program designed to detect and prevent lung disease. This program requires mine operators to provide up to three initial chest x-rays for coal miners within specified time frames after their employment begins. Miners then can opt to have periodic chest x-rays approximately every 5 years thereafter. NIOSH uses this program for disease surveillance, which includes tracking trends, setting prevention and intervention priorities, and assessing prevention and intervention efforts. In addition, to estimate the prevalence of lung disease among underground coal miners and to study the relationship between miners' lung disease and their level of exposure to coal mine dust, NIOSH developed the National Study of Coal Workers' Pneumoconiosis.
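Because the standard applies to the average concentration during each shift, the basic compliance comparison is simple arithmetic: average the respirable dust measurements for the shift and compare the result with 2.0 mg/m³. The following is a simplified sketch with made-up sample values; MSHA's actual sampling and averaging procedures involve additional rules not shown here.

    # Respirable dust measurements (mg/m^3) for one shift; the values
    # are made up for illustration only.
    samples = [1.7, 2.3, 1.9, 2.1]

    # The standard applies to the average concentration during the shift.
    shift_average = sum(samples) / len(samples)

    LIMIT = 2.0  # mg/m^3, the exposure limit discussed above
    status = "at or below" if shift_average <= LIMIT else "above"
    print(f"Shift average: {shift_average:.2f} mg/m^3 ({status} the limit)")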
In this study, NIOSH collected and analyzed epidemiological data—including findings from chest x-rays, results of lung function tests, and occupational and smoking histories—from a sample of coal miners across all major coalfields in the United States between 1969 and 1988. The data also allowed researchers to link to results of coal mine dust sampling over approximately the same period to estimate dust exposures for individual miners. According to NIOSH, epidemiological studies examining the relationship between coal mine dust and disease must contain a sufficiently large body of data over a time period that is adequate to derive reliable findings. There are two primary types of underground coal mining in the United States: continuous mining and longwall mining. In continuous mining, a machine called a continuous miner cuts out rooms of coal from the coal bed, leaving a series of pillars of coal to help support the mine roof. In addition to the pillars of coal, bolts are driven into the roof of the mine to help support it. The extracted coal from the continuous miner is loaded into shuttle cars for transport out of the mine. In longwall mining, a machine called a shearer moves back and forth across a wall of coal. After the coal is cut, a machine crushes it into small pieces and a conveyor belt removes it from the mine. While the shearer cuts the coal and the coal is extracted, the roof is held up temporarily with self-advancing hydraulic supports. While both types of mining produce dust, certain pieces of machinery produce more coal mine dust than others. For example, in continuous mining, the continuous miner, roof bolting machines, and shuttle cars generate the most dust. Major sources of dust in longwall mining include the shearer and crusher. Appropriately, MSHA did not use NIOSH's surveillance data as the basis for its proposed new coal mine dust limit, although the data served to inform MSHA's decision to take action. In the preamble to its proposed rule, MSHA cited an increase in the prevalence of CWP among underground coal miners based on NIOSH surveillance data, which may have led many to believe that these data were part of the basis for MSHA's proposed change in the exposure limit. The surveillance data showed that the prevalence of CWP, which declined substantially between 1970 and the late 1990s, increased for several years in the early 2000s before declining again between 2005 and 2009. According to MSHA and NIOSH officials, information about the increasing prevalence of CWP based on the surveillance data was mentioned in the preamble to the proposed rule to show that black lung disease still exists among active underground coal miners, thus helping to compel MSHA to take action to reduce miners' exposure to dust, in accordance with its duties under the Mine Act. However, as we reported in August 2012, the data MSHA used to support its proposal were from two reports, which relied on six epidemiological studies, not the surveillance data. In addition, in a 1996 notice in the Federal Register, well before the increase in the prevalence of CWP shown by the surveillance data, MSHA stated its intent to respond to a 1995 NIOSH recommendation to lower the exposure limit for coal mine dust by developing a proposed rule. Moreover, the surveillance data would not have been an appropriate basis for a new limit on coal mine dust because of some important limitations of the data. For example, because the data do not include individual miners' past exposures to coal mine dust, they cannot be used to estimate disease risk for individual miners.
Based on principles of epidemiology and statistical modeling, measures of past exposures to coal mine dust are critical to assessing the relationship between miners' cumulative coal mine dust exposure and their risk of developing CWP. Also, because there is no active selection of miners by researchers and participation in the surveillance program is voluntary, miners who choose to participate may differ in unknown ways from those who choose not to participate, which could result in an overestimation or underestimation of the prevalence of disease. This methodological limitation is known as participation bias, and there are many ways it could affect the prevalence of disease indicated by the surveillance data. For example, the prevalence of CWP could be underestimated because some miners may decline further x-ray screenings once CWP is detected. Alternatively, the prevalence of CWP could be overestimated because miners may be more likely to participate after years of dust exposure, when they believe they are at risk of developing CWP. Experts identified various engineering controls that could further reduce the overall level of coal mine dust, but they said reductions would likely be incremental. Since 1968, the mining industry has achieved significant reductions in the level of dust in underground coal mines: average dust levels declined from about 7 mg/m³ in 1968 to below 2 mg/m³. The experts said that mine ventilation systems replace contaminated air with fresh air, which also helps reduce the level of coal mine dust in the air. The experts also described ways that operators use water to prevent dust from being generated in mining operations. For example, operators spray water on the surface of the coal and on the machines' cutting surfaces as the coal is being cut to reduce the amount of dust generated. In some cases, operators also infuse water into the coal prior to cutting, but the experts reported limited success with that approach. Operators also use hygroscopic salts on mine floors to help maintain the moisture content of the mine floor, which in turn absorbs coal mine dust. The experts said that, with the increased productivity of mines in recent years, the water quantity and pressure for sprays may need to be increased. From 1978 to 2007, the amount of coal produced per work hour more than tripled. The experts cautioned, however, that using too much water could have adverse impacts, such as causing conveyor belts to slip, which would affect production. The experts pointed to a number of factors that could limit mine operators' use of engineering controls aimed at reducing coal mine dust in the mine environment. The experts said that fundamental differences between continuous mining and longwall mining operations render some technological approaches useful for one type of mining, but not both. For example, while scrubbers were cited as an effective tool for reducing dust in continuous mining operations, the experts cautioned that they may not be as effective for mines that require a lot of ventilation, such as longwall mines, because the amount of air flowing through the mine can overwhelm the scrubber. An expert noted that one way to reduce dust levels is to operate only one continuous mining machine at a time in a section of a mine instead of more than one. However, he estimated that this could significantly decrease the productivity of the mine and increase the cost of producing the coal. Another expert noted that controlling dust in a cost-effective manner requires some flexibility.
The experts did not quantify the costs of some of the technologies used to reduce dust levels, because dust control is not the only purpose of some of these technologies and because future technologies have not been developed enough to fully determine their costs. However, they did identify primary cost drivers. For example, mines that contain high levels of gas must ventilate significant amounts of fresh air, which also helps lower coal mine dust levels. They also noted that while NIOSH has done some of the major research on dust control, overall, industry research on dust control technologies has declined. One expert made the point that research is directly proportional to the economic health of the coal industry: when the industry is contracting economically, manufacturers may not be willing to devote resources to research and development. While specific costs were difficult to assign, the primary cost drivers the experts identified for lowering dust levels were the cost of maintaining equipment; the cost of purchasing new or additional equipment, materials, and labor; and the cost of providing training. The experts identified options that could reduce individual miners' exposure to respirable coal mine dust, specifically, personal protective equipment and administrative controls. However, they noted that these options would not help mines reduce the overall level of coal mine dust in the mine environment, and therefore would not help mine operators comply with MSHA's exposure limit. Personal protective equipment includes items such as respirators and air stream helmets. Respirators filter the air that an individual miner breathes, and air stream helmets actively filter and push air across a miner's face. While respirators and air stream helmets could reduce the amount of coal dust to which individual miners are exposed, the experts noted that miners have concerns that these devices limit communication between miners and thus could raise safety issues. The experts also said that personal dust monitors could be used to reduce individual miners' exposure to dust because they provide workers with real-time data on dust levels in the area of the mine in which they are working. This information allows workers to adjust their position in the mine to reduce their exposure to coal mine dust. Although personal dust monitors have no effect on dust levels in the mine, the experts noted that they may provide data that could be used by mine operators to identify problem areas in the mines and to change work practices to reduce miners' exposure to coal mine dust. One of the experts told us personal dust monitors cost about $13,000 to $18,000 per unit, which could be a significant expense if all miners were outfitted with them. The experts also noted that administrative controls could limit miners' exposure to coal mine dust, although they do not control the overall level of dust in the mine. These controls include rotating workers more frequently out of positions that are exposed to higher levels of dust, cutting the coal using a remote control device, and changing the sequence by which coal is cut. Experts said that rotating workers to other positions could help reduce their exposure to dust, but this could also require changes to the current collective bargaining agreement at unionized mines because the jobs that involve the highest exposure to dust may also pay more.
This approach may also increase costs for mine operators because, for example, the collective bargaining agreement might require them to continue to pay workers a higher rate of pay when they rotate to a lower-paying position. The experts also said that some mines use remote control devices to keep miners farther away from the source of dust. Controlling the mining machine from a greater distance than is currently done may require high-resolution imaging equipment. One expert said that this is especially true for mining operations where geologic conditions change frequently, requiring miners to make judgments about where to move the machinery. According to the experts, another way to limit individual miners' exposure in continuous mining is by modifying the sequence in which coal is mined. The experts explained that this approach reduces the number of miners who are downwind from the dust generated by the continuous mining machine. However, this type of change could result in decreased productivity in the mine. We provided a draft of this report to the Secretaries of Labor and Health and Human Services for review and comment. Both agencies generally concurred with the findings of the report but provided no formal written comments. The agencies did, however, provide technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Labor and Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The objectives of our review were to (1) determine the extent to which the Mine Safety and Health Administration (MSHA) used recent coal workers' pneumoconiosis (CWP) trend data as a basis for its proposed exposure limit on coal mine dust, and (2) obtain experts' views on ways to lower the level of dust in coal mines, including their associated advantages, disadvantages, and costs. To address our first objective, we reviewed MSHA's Notice of Proposed Rulemaking, including the proposed exposure limit and related documents; updated the literature search from our prior report; and interviewed officials from MSHA and the National Institute for Occupational Safety and Health (NIOSH) to identify recent data on the prevalence of coal worker respiratory diseases. Two GAO research methodologists and one public health specialist reviewed the information gathered about CWP trend data to assess its strengths and limitations, and reviewed a recent study by NIOSH researchers on the usefulness of these data for estimating disease prevalence. We also reviewed our prior report and the analyses that supported it, and interviewed MSHA and NIOSH officials to determine what role, if any, recent CWP trend data had in developing the proposal to lower the exposure limit. We examined whether these data would have been appropriate for MSHA to use in developing its proposed exposure limit using principles of social science research and epidemiology. For our second objective, we worked with the National Academies to convene a group of experts to obtain their views on these issues.
To prepare for our discussions with experts, we reviewed NIOSH and other studies on the ability of currently available and alternative technologies to control coal mine dust. We also reviewed the technological and economic feasibility assessments MSHA used to develop its proposed exposure limit. The group included experts from all of the major stakeholder groups: NIOSH researchers, academics, other technical experts, individuals from companies that manufacture mining equipment, and individuals who represent coal mine operators and workers. In identifying the experts, the National Academies compiled a preliminary list of 53 experts who represented 17 universities, 7 coal companies or coal associations, 5 equipment manufacturers, 1 mine workers' association, and 2 government agencies. The nominees were grouped by sector and field of expertise, and were vetted by 10 individuals working in the public, private, and academic sectors who have expertise in coal mining or a related field. Feedback from these individuals, along with biographical information about the experts, was used to prioritize the experts within each sector and field of expertise. Using this information, we invited 17 experts to participate in a 1-day panel discussion, although 1 person subsequently cancelled. The resulting 16 experts included 3 representatives of mine operators, 1 representative of underground coal miners, 3 representatives of equipment manufacturers, 6 academics, and 3 representatives of federal government agencies. To ensure that there were no unforeseen biases or conflicts of interest, each panelist reported to the National Academies his or her investments, sources of earned income, organizational positions, relationships, and other circumstances that could affect, or could be viewed as affecting, his or her view on the topic of methods for reducing the level of respirable dust in underground coal mines. We asked the experts to discuss the technological and other options available for lowering the level of dust in coal mines below the existing permissible exposure limit and the costs, advantages, and disadvantages of these technologies. We did not ask the experts about the proposed new limit. In addition to the 16 panelists, we allowed 5 observers to sit in on the panel discussion. The observers included representatives of mining equipment manufacturers, coal mine operators, and one government agency. We conducted this performance audit from July 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Revae Moran, (202) 512-7215 or [email protected]. In addition to the contact listed above, individuals making key contributions to this report were Mary Crenshaw (Assistant Director), Nabajyoti Barkakati, Russell Burnett, Sarah Cornetto, Andrea Dawson, Timothy Guinane, Kristy Kennedy, Kathy Leslie, Sheila McCoy, Sara Pelton, Tim Persons, Martin Scire, Sushil Sharma, Walter Vance, Kathleen van Gelder, and Shana Wallace.
Underground coal miners face the threat of being overexposed to coal mine dust, which can cause CWP and other lung diseases, collectively referred to as black lung disease. In October 2010, MSHA—the federal agency responsible for setting and enforcing mine safety and health standards—proposed lowering the exposure limit for respirable coal mine dust to reduce miners' risk of contracting black lung. In August 2012, GAO reported that the evidence MSHA used supported its conclusion that lowering the exposure limit on coal mine dust would reduce miners' risk of disease. However, some have questioned whether and how recent NIOSH trend data on CWP were used in developing the proposed limit. In May 2013, GAO was asked to provide additional information on MSHA's proposal. GAO examined (1) the extent to which MSHA used recent CWP trend data as a basis for its proposed exposure limit, and (2) expert views on ways to lower the level of dust in coal mines, including their associated advantages, disadvantages, and costs. GAO reviewed MSHA's proposal and related documents; updated a previous GAO literature search; interviewed MSHA and NIOSH officials; and, with the help of the National Academies, convened a group of experts knowledgeable about underground coal mining and methods for reducing coal mine dust. GAO is not making any recommendations in this report, and MSHA and NIOSH both generally concurred with the findings. The Department of Labor's Mine Safety and Health Administration (MSHA) appropriately did not use recent trend data on coal workers' pneumoconiosis (CWP) as a basis for its proposal to lower the permissible exposure limit for respirable coal mine dust. These recent data from the Department of Health and Human Services' National Institute for Occupational Safety and Health (NIOSH) are inappropriate for this purpose because they do not include the types of detailed information about individual miners needed to estimate the likelihood that miners would develop CWP at different exposure levels, such as historical dust exposures. MSHA primarily based its proposed new limit on two reports and six epidemiologic studies, which each concluded that lowering the limit on exposure to coal mine dust would reduce miners' risk of developing disease. MSHA's proposed coal mine dust limit was supported by these reports and studies because, unlike recent CWP trend data, they included information needed to conduct a reliable epidemiological analysis of disease risks associated with different levels of exposure to coal mine dust. Experts identified various approaches that could incrementally reduce overall coal mine dust levels as well as individual miners' exposure to dust. They said that air and water, applied through various mining equipment such as sprays, are the primary engineering controls used to reduce overall coal mine dust levels in the mine environment. The experts also said that no one technology or approach would result in substantially lower dust levels, but that several approaches used together could have a cumulative impact. They also noted that all the approaches may not be effective in all types of mines, and that there are a number of cost drivers that would have to be considered, such as machine maintenance and training. The experts also identified other approaches, such as personal protective equipment and administrative controls, which could reduce individual miners' exposure to dust.
Personal protective equipment includes respirators and air stream helmets; administrative controls include rotating workers and using remote control devices. However, they noted that these approaches would not help mine operators comply with MSHA's exposure limit because they would not reduce the overall level of coal mine dust in the mine environment.
The U.S. military has long used contractors to provide supplies and services to deployed U.S. forces, and more recently contractors have been involved in every major military operation since the 1991 Gulf War. However, the scale of contractor support DOD relies on today in Iraq and elsewhere throughout Southwest Asia has increased considerably from what DOD relied on during previous military operations, such as Operation Desert Shield/Desert Storm and in the Balkans. Moreover, DOD's reliance on contractors continues to grow. In December 2006, the Army alone estimated that almost 60,000 contractor employees supported ongoing military operations in Southwest Asia. In October 2007, DOD estimated the number of contractors in Iraq to be about 129,000. By way of contrast, an estimated 9,200 contractor personnel supported military operations in the 1991 Gulf War. Factors that have contributed to this increase include reductions in the size of the military, an increase in the number of operations and missions undertaken, and DOD's use of increasingly sophisticated weapons systems. DOD uses contractors to meet many of its logistical and operational support needs during combat operations, peacekeeping missions, and humanitarian assistance missions. Today, contractors located throughout Southwest Asia provide U.S. forces with such services as linguist support, equipment maintenance, base operations support, and security support. In Iraq and Afghanistan, contractors provide deployed U.S. forces with communication services; interpreters who accompany military patrols; base operations support (e.g., food and housing); weapons systems maintenance; intelligence analysis; and a variety of other support. Contractors also provide logistics support such as parts and equipment distribution, ammunition accountability and control, port support activities, and support to weapons systems and tactical vehicles. For example, in Kuwait and Qatar the Army uses contractors to refurbish, repair, and return to the warfighters a variety of military vehicles, such as the Bradley Fighting Vehicle, armored personnel carriers, and the High-Mobility, Multi-Purpose Wheeled Vehicle (HMMWV). Since our initial work on the use of contractors to support deployed forces in 1997, DOD has taken a number of actions to implement recommendations that we have made to improve its management of contractors. For example, in 2003 we recommended that the department develop comprehensive guidance to help the services manage contractors supporting deployed forces. In response to this recommendation, the department issued the first comprehensive guidance dealing with contractors who support deployed forces in October 2005. Additionally, in October 2006, DOD established the office of the Assistant Deputy Under Secretary of Defense for Program Support to serve as the office with primary responsibility for contractor support issues. This office has led the effort to develop and implement a database which, when fully implemented, will allow by-name accountability of contractors who deploy with the force. This database implements recommendations we made in 2003 and 2006 to enhance the department's visibility over contractors in locations such as Iraq and Afghanistan. DOD leadership needs to ensure implementation of and compliance with existing guidance to improve the department's oversight and management of contractors supporting deployed forces.
Several long-standing challenges have hindered DOD's management and oversight of contractors at deployed locations, even though in many cases DOD and its components have developed guidance related to these challenges. These challenges include failures to follow long-standing planning guidance, to ensure an adequate number of trained contract oversight and management personnel, to systematically collect and distribute lessons learned, and to comprehensively train contract oversight personnel and military commanders. We have found several instances where poor oversight and management of contractors has led to negative monetary and operational impacts. Based on our previous work, we believe that for DOD to improve its oversight and management of contractors supporting deployed forces in future operations, and to ensure that warfighters receive the support they rely on in an effective and efficient manner, DOD leadership must ensure implementation of and compliance with existing guidance.

DOD has taken a number of steps over the last several years to improve and consolidate its long-standing guidance pertaining to the use of contractors to support deployed forces. Moreover, largely in response to the recommendation in our 2006 report, DOD established the office of the Assistant Deputy Under Secretary of Defense (Program Support) within the office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to serve as the focal point to lead DOD's efforts to improve contract management and oversight. However, as we reported in 2006, although the issuance of DOD's new guidance was a noteworthy improvement, we found little evidence that DOD components were implementing this guidance or much of the additional guidance addressing the management and oversight of contractors supporting deployed forces. For example, additional DOD and service guidance requires, among other things, the collection of lessons learned, the appointment of certified contracting officer's representatives, and that all personnel receive timely and effective training to ensure they have the knowledge and other tools necessary to accomplish their missions. Given DOD's continued difficulties meeting these requirements, it is clear that guidance alone will not fix these long-standing problems. Therefore, we believe that the issue is now centered on DOD providing the leadership to ensure that the existing guidance is being implemented and complied with.

As we have noted in previous reports and testimonies, DOD has not followed long-standing planning guidance, particularly by not adequately factoring the use and role of contractors into its planning. For example, we noted in our 2003 report that the operations plan for the war in Iraq contained only limited information on contractor support. However, Joint Publication 4-0, which provides doctrine and guidance for combatant commanders and their components regarding the planning and execution of logistic support of joint operations, stresses the importance of fully integrating into logistics plans and orders the logistics functions performed by contractors along with those performed by military personnel and government civilians. Additionally, in our 2004 report, we noted that the Army did not follow its planning guidance when deciding to use the Army's Logistics Civil Augmentation Program (LOGCAP) in Iraq.
According to Army guidance, integrated planning is a governing principle of contractor support, and for contractor support to be effective and responsive, its use needs to be considered and integrated into the planning process. Proper planning identifies the full extent of contractor involvement, how and where contractor support is provided, and any responsibilities the Army may have in supporting the contractor. Additional Army guidance stresses the need for the clear identification of requirements and the development of a comprehensive statement of work early in the contingency planning process. Because this Army guidance was not followed, the plan to support the troops in Iraq was not comprehensive and was revised seven times in less than 1 year. These revisions generated a significant amount of rework for the contractor and the contracting officers. Additionally, time spent reviewing revisions to the task orders is time that is not available for other oversight activities. While operational considerations may have driven some of these changes, we believe others were more likely to have resulted from ineffective planning. The lack of planning also affects the post-award administration of contracts. For example, in our 2004 report, we noted that one reason the Army was unable to definitize the LOGCAP task orders was the frequent revision of those task orders. Without timely definitization of task orders, the government is less able to control costs.

Our 2003 report also concluded that essential contractor services had not been identified and backup planning was not being done. DOD policy requires DOD and its components to determine which contractor-provided services will be essential during crisis situations and to (1) develop and implement plans and procedures to provide a reasonable assurance of the continuation of essential services during crisis situations and (2) prepare a contingency plan for obtaining the essential service from an alternate source should the contractor be unable to provide it. According to DOD Instruction 3020.37, commanders have three options if they cannot obtain reasonable assurance of the continuation of essential contractor services: they can obtain military, DOD civilian, or host nation personnel to perform the services; they can prepare a contingency plan for obtaining essential services; or they can accept the risk of a disruption of services during crisis situations. Our review found, however, that DOD had neither identified essential contractor services nor conducted backup planning. Without firm plans, there is no assurance that the personnel needed to provide the essential services would be available when needed.

Moreover, because DOD and its components have not reviewed contractor support to identify essential services, the department lacks the visibility needed to provide senior leaders and military commanders with information on the totality of contractor support to deployed forces. As we noted in 2003 and 2006, having this information is important in order for military commanders to incorporate contractor support into their planning efforts. For example, senior military commanders in Iraq told us that when they began to develop a base consolidation plan for Iraq, they had no source to draw upon to determine how many contractors were on each installation.
Limited visibility can also hinder the ability of commanders to make informed decisions regarding base operations support (e.g., food and housing) and force protection for all personnel on an installation. Similarly, we found that limited visibility over contractors and the services they provide at a deployed location can hinder the ability of military commanders to fully understand the impact that decisions such as restrictive installation access and badging requirements can have on the ability of contractors to provide services. As noted above, DOD has taken some steps to improve its visibility over contractor support. In addition, according to an October 2007 DOD report to Congress on managing contractor support to deployed forces, the department is developing a cadre of contracting planners whose primary focus will be to review contractor support portions of combatant commanders' operations plans and contingency plans, including the requirements for contractor services.

As we noted in several of our previous reports, having the right people with the right skills to oversee contractor performance is crucial to ensuring that DOD receives the best value for the billions of dollars spent each year on contractor-provided services supporting forces deployed to Iraq and elsewhere. We have designated DOD contract management as a high-risk area since 1992, and it remains so today, in part, due to concerns over the adequacy of the department's acquisition workforce, including contract oversight personnel. While this is a DOD-wide problem, having too few contract oversight personnel presents unique difficulties at deployed locations given the more demanding contracting environment as compared to the United States. Although we could find no DOD guidelines on the appropriate number of personnel needed to oversee and manage DOD contracts at a deployed location, several reviews by GAO and DOD organizations have consistently found significant deficiencies in DOD's oversight of contractors due to an inadequate number of trained personnel to carry out these duties.

In 2004, we reported that DOD did not always have enough contract oversight personnel in place to manage and oversee its logistics support contracts such as LOGCAP and the Air Force Contract Augmentation Program (AFCAP). As a result, the Defense Contract Management Agency was unable to account for $2 million worth of tools that had been purchased using the AFCAP contract. The following year, we reported in our High-Risk Series that inadequate staffing contributed to contract management challenges in Iraq. During our 2006 review, several contract oversight personnel we met with told us that DOD did not have adequate personnel at deployed locations. For example, a contracting officer's representative for a linguistic support contract told us he had only one part-time assistant, limiting his ability to manage and oversee the contractor personnel for whom he was responsible. The official noted that he had a battalion's worth of people with a battalion's worth of problems but lacked the equivalent of a battalion's staff to deal with those problems. Similarly, an official with the LOGCAP Program Office told us that the office did not hire additional budget analysts and legal personnel in anticipation of an increased use of LOGCAP services due to Operation Iraqi Freedom.
According to the official, had adequate staffing been in place early, the Army could have realized substantial savings through more effective reviews of the increasing volume of LOGCAP requirements. More recently, we reported that the Army did not have adequate staff to conduct oversight of an equipment maintenance contract in Kuwait. During our review of the contract, we found that vacant authorized oversight personnel positions included a quality assurance specialist, a property administrator, and two quality assurance inspectors. Army officials also told us that in addition to the two quality assurance inspectors needed to fill the vacant positions, more quality assurance inspectors were needed to fully meet the oversight mission. According to Army officials, vacant and reduced inspector and analyst positions meant that surveillance was not being performed sufficiently in some areas and the Army was less able to perform data analyses, identify trends in contractor performance, and improve quality processes.

In addition to our work, a number of other reviews of DOD's contractor oversight personnel have identified similar problems. A 2004 Joint Staff review of the Defense Contract Management Agency's responsiveness and readiness to support deployed forces found that the agency had not programmed adequate resources to support current and future contingency contract requirements. The review also found that the agency's manpower shortages were aggravated by internal policies that limited the ability of personnel to execute those missions. More recently, the 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations stated that the Army lacks the leadership and military and civilian personnel to provide sufficient contracting support to either expeditionary or peacetime missions. According to the commission, Army contracting personnel have experienced a 600 percent increase in their workload and are performing more complex tasks, while the number of Army civilians and military in the contracting workforce has remained stagnant or declined. As a result, the commission found that the vital task of post-award contract management is rarely being done.

As we noted in our 2006 report, without adequate contract oversight personnel in place to monitor its many contracts in deployed locations such as Iraq, DOD may not be able to obtain reasonable assurance that contractors are meeting their contract requirements efficiently and effectively. However, some actions have been taken since our report to address the issue of inadequate numbers of trained contract oversight and management personnel. For example, in February 2007, the Deputy Assistant Secretary of the Army (Policy and Procurement) issued guidance that for service contracts greater than $2,500, the contracting officer shall appoint certified contracting officer's representatives in writing, identify properly trained contracting officer's representatives for active service contracts, and ensure that a government quality assurance surveillance plan is prepared and implemented for service contracts. In addition, Congress has taken steps to improve oversight by increasing the budgets for the Defense Contract Audit Agency, the Defense Contract Management Agency, and the Defense Department's Inspector General in the fiscal year 2008 Defense Department Appropriations Act.
Although DOD and its components have used contractors to support deployed forces in several prior military operations, DOD does not systematically ensure that institutional knowledge regarding the use of contractors to support deployed forces, including lessons learned and best practices, is shared with military personnel at deployed locations. We previously reported that DOD could benefit from systematically collecting and sharing its institutional knowledge to help ensure that it is factored into planning, work processes, and other activities. We have also made several recommendations that, among other things, called for DOD to incorporate lessons learned from its experience in the Balkans to improve the efficiency and effectiveness of the Army's LOGCAP contract, implement a departmentwide lessons-learned program to capture the experiences of military units that have used logistics support contracts, and establish a focal point within the Office of the Under Secretary of Defense to lead and coordinate the development of a departmentwide lessons-learned program to collect and distribute the department's institutional knowledge regarding all forms of contractor support to deployed forces.

Although DOD has policy requiring the collection and distribution of lessons learned to the maximum extent possible, we found in our previous work that no procedures were in place to ensure that lessons learned are collected and shared. For example, DOD has established the Joint Lessons Learned Program, designed to enhance joint capabilities through discovery, knowledge development, implementation, and sharing of lessons learned from joint operations, training events, exercises, and other activities. The program applies to the Joint Staff, combatant commands, services, and combat support agencies, which are to coordinate activities and collaboratively exchange lesson observations, findings, and recommendations to the maximum extent possible. According to DOD policy, combatant commands are responsible for executing and supporting joint lessons learned functions including lesson discovery, knowledge development, and implementation activities. U.S. Joint Forces Command is responsible for developing and implementing the capability to collect and analyze observations from current operations and ensuring key findings are appropriately disseminated. The Army regulation that establishes policies, responsibilities, and procedures for the implementation of the LOGCAP program makes customers that receive services under the LOGCAP contract responsible for collecting lessons learned.

Nonetheless, we have repeatedly found that DOD is not systematically collecting and sharing lessons learned on the use of contractors to support deployed forces. Despite years of experience using contractors to support forces deployed to the Balkans, Southwest Asia, Iraq, and Afghanistan, DOD has made few efforts to leverage this institutional knowledge. As a result, many of the problems we identified in earlier operations have recurred in current operations. In 2004, we reported that despite over 10 years of experience in using logistics support contracts, the Army continued to experience the same types of problems it had encountered during earlier deployments that used LOGCAP for support. For example, we found that U.S. Army, Europe, which has had the most experience in using logistics support contracts, had not consolidated its lessons learned and made them available to others. Similarly, we learned that a guidebook developed by U.S.
Army, Europe on the use of a logistics support contract was not made available to military commanders in Iraq until mid-2006. During the course of our 2006 work, we found no organization within DOD or its components responsible for developing procedures to capture lessons learned on the use of contractor support at deployed locations. Likewise, we found that neither the Joint Forces Command's Joint Center for Operational Analysis nor the Army's Center for Army Lessons Learned was actively collecting lessons learned on the use of contractor support in Iraq. We noted that when lessons learned are not collected and shared, DOD and its components run the risk of repeating past mistakes and being unable to build on the efficiencies and effectiveness others have developed during past operations that involved contractor support. We also found a failure to share best practices and lessons learned between units as one unit redeploys and another deploys to replace it. As a result, new units essentially start at ground zero, having to resolve a number of difficulties until they understand contractor roles and responsibilities.

DOD does not routinely incorporate information about contractor support for deployed forces in its pre-deployment training of military personnel, despite the long-standing recognition of the need to provide such information. We have discussed the need for better pre-deployment training of military commanders and contract oversight personnel since the mid-1990s and have made several recommendations aimed at improving such training, as shown in figure 1. Moreover, according to DOD policy, personnel should receive timely and effective training to ensure they have the knowledge and other tools necessary to accomplish their missions. Nevertheless, we continue to find little evidence that improvements have been made in terms of how DOD and its components train military commanders and contract oversight personnel on the use of contractors to support deployed forces prior to their deployment. For example, in an October 2007 report to Congress on managing contractor support to deployed forces, DOD discussed broad, contractor management-related training programs that it intends to implement in the future. Without properly trained personnel, DOD will continue to face risks of fraud, waste, and abuse.

Limited or no pre-deployment training on the use of contractor support can cause a variety of problems for military commanders in a deployed location. As we reported in 2006, with limited or no pre-deployment training on the extent of contractor support to deployed forces, military commanders may not be able to adequately plan for the use of those contractors. In its 2007 report, the Commission on Army Acquisition and Program Management in Expeditionary Operations found that combatant commands do not recognize the significance of contracts and contractors in expeditionary operations, and concluded that the Army needs to educate and train commanders on the important operational role of contracting. Several military commanders we met with in 2006 said their pre-deployment training did not provide them with sufficient information regarding the extent of contractor support that they would be relying on in Iraq. These commanders were therefore surprised by the substantial number of personnel they had to allocate to perform missions such as on-base escorts for third-country and host-country nationals, convoy security, and other force protection support to contractors.
In addition, limited or no pre-deployment training for military commanders on the use of contractor support to deployed forces can result in confusion regarding their roles and responsibilities in managing and overseeing contractors. For example, we found some instances where a lack of training raised concerns over the potential for military commanders to direct contractors to perform work outside the scope of the contract, something commanders lack the authority to do. As Army guidance makes clear, when military commanders try to direct contractors to perform activities outside the scope of the contract, the government can incur additional charges because modifications would need to be made to the contract, and, in some cases, the direction may result in a violation of competition requirements. In addition, our 2005 report on the use of private security contractors in Iraq noted that commanders told us they received no training or guidance on how to work with private security providers in Iraq. To highlight the lack of training and guidance, representatives from one unit told us that they did not know there were private security providers in their battle space until the providers began calling for assistance. They also said that any information about who would be in the battle space and the support the military should be providing would be useful.

We also found that contract oversight personnel such as contracting officer's representatives received little or no pre-deployment training regarding their roles and responsibilities in monitoring contractor performance. Many of the contracting officer's representatives we spoke with in 2003 and 2006 said that training before they assumed these positions would have better prepared them to effectively oversee contractor performance. Although DOD has created an online training course for contracting officer's representatives, individuals we spoke with noted that it was difficult to set aside the time necessary to complete the training once they arrived in Iraq. Furthermore, in most cases, deploying individuals were not informed that they would be performing contracting officer's representative duties until after they had deployed. We found several instances where the failure to identify and train contracting officer's representatives prior to their deployment hindered the ability of those individuals to effectively manage and oversee contractors. For example, the contracting officer's representative for an intelligence support contract in Iraq had not been informed of his responsibilities prior to deploying and had no previous experience working with contractors. The official told us he found little value in the online training course and did not believe it adequately prepared him to execute his contract oversight responsibilities, such as reviewing invoices submitted by the contractor. Similarly, officials from a corps support group in Iraq told us that until they were able to get a properly trained contracting officer's representative in place, they experienced numerous problems regarding the quality of food service provided by LOGCAP. The 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations also discussed the need to train contracting officer's representatives and warned that the lack of training could lead to fraud, waste, and abuse.
Some steps have been taken to help address the issue of pre-deployment training of military commanders and contract oversight personnel. In DOD's response to our 2006 report, the Director of Defense Procurement and Acquisition Policy stated that the Army is making changes to its logistics training programs to be better positioned to meet current and future challenges. This included incorporating contracting officer's representative training into its basic and advanced training for its ordnance, transportation, and quartermaster corps. In addition, the Defense Acquisition University has updated its contingency contracting course to include a lesson on contractors accompanying the force. More recently, the National Defense Authorization bill for fiscal year 2008 included a provision addressing the need for contingency contractor training for personnel outside the acquisition workforce. This provision would require that military personnel receive training sufficient to ensure that they understand the scope and scale of contractor support they will experience in contingency operations and are prepared for their roles and responsibilities regarding, among other things, contractor oversight and program management.

DOD's problems managing and overseeing contractors at deployed locations make it difficult for the department to be assured that it is getting the services it needs on time and at a fair and reasonable price. Over the past few years, we have reported on some of the results of these long-standing problems. While many of the situations we discuss below highlight monetary consequences, poor contract management and oversight can affect military operations as well. Furthermore, although determining the extent of the financial impact is not always feasible or practicable, the inability to quantify the financial impact should not detract from efforts to achieve greater rigor and accountability in DOD contracting practices. The following are examples of negative impacts that have occurred at deployed locations.

On January 23, 2008, we issued a report on the Army's equipment maintenance contract in Kuwait and concluded that the Army did not always follow key principles included in the Army Quality Program. That guidance specifies the use of performance information to perform root-cause analysis and foster continuous improvement. In addition, the battalion's July 2006 draft maintenance management plan requires that contractor performance data be analyzed to help identify the causes of new or recurring quality problems and to evaluate the contractor's performance. However, we found that the Army did not begin to track contractor pass/fail rates until July 2007. According to Army quality assurance officials, this metric was not tracked and monitored because they did not have sufficient quality assurance staff to perform such an analysis. By not tracking and monitoring the percentage of equipment submitted for Army acceptance that failed quality assurance inspection, the Army did not know the extent to which the contractor was meeting the specified maintenance standard requirements, nor could it identify problem areas in the contractor's processes and initiate corrective action. Furthermore, our analysis of Army data found that for five types of vehicles inspected by quality assurance personnel between July 2006 and May 2007, 18 percent to 31 percent of the equipment presented to the Army as ready for acceptance failed government inspection.
In addition, some equipment presented to the Army as ready for acceptance failed government inspection multiple times, sometimes for the same deficiencies. When the Army inspected equipment that did not meet standards, it was returned to the contractor for continued repair. Our analysis of Army data found that since May 2005 an additional 188,000 hours were worked to repair equipment after the first failed government inspection, which translates into an additional cost of approximately $4.2 million.

In July 2004, we reported that the Air Force had used the AFCAP contract to supply commodities for its heavy construction squadrons because it did not deploy with enough contracting and finance personnel to buy materials quickly or in large quantities. Additionally, the U.S. Agency for International Development has used the contract to provide disaster relief and humanitarian assistance supplies. In some cases, the contractor simply bought the supplies and delivered them to the customer under cost-plus-award-fee task orders. We noted that the contractor had received more than $2 million in award fees since February 2002 for these commodity supply task orders. While permitted, the use of cost-plus-award-fee task orders to obtain supplies may not be cost-effective, as the government reimburses the contractor's costs and pays award fees for orders with little risk. Air Force officials recognized that this business arrangement may not be cost-effective. Under the current AFCAP contract, commodities may be obtained using only firm-fixed-price or cost-plus-fixed-fee orders.

The lack of sufficiently trained personnel can also lead to the inefficient use of military personnel. As we reported in December 2006, officials with a Stryker brigade told us a lack of contractor management training hindered their ability to resolve staffing issues with a contractor conducting background screenings of third-country and host-country nationals. In this case, shortages of contractor-provided screeners forced the brigade to use its own intelligence personnel to conduct screenings. As a result, those personnel were not available for their primary intelligence-gathering responsibilities.

In June 2004, we reported that a disagreement between the LOGCAP contractor and the Defense Contract Audit Agency (DCAA) on how to bill for services to feed soldiers in Iraq involved at least $88 million in questioned costs. In this case, the statement of work required the contractor to build, equip, and operate dining facilities at various base camps and provide four meals a day for the base camp populations. The statement of work did not specify, however, whether the government should be billed based on the camp populations specified in the statement of work or on the actual head count. This is an important distinction because the specified camp population was significantly higher than the actual head count, and the subcontractors providing the services generally billed the contractor for the specified base camp population. A contractor analysis of selected invoices over a 4-month period found that it had billed the government for food service for more than 15.9 million soldiers when only 12.5 million—more than 3.4 million fewer—had passed through the dining facilities.
DCAA believed that the contractor should have billed the government based on the actual head count, whereas the contractor believed that it should have billed the government based on the camp populations specified in the statement of work. A clearer statement of work, coupled with better DOD oversight of the contract, could have prevented the disagreement and mitigated the government's risk of paying for more services than needed.

Looking across our past work, I would like to make a number of broad observations about challenges that DOD will need to address to improve the oversight and management of contractors supporting deployed forces in future operations and to ensure that warfighters receive the support they rely on in an effective and efficient manner. There are four issues in particular that merit attention by DOD: (1) incorporating contractors as part of the total force, (2) determining the proper balance of contractors and military personnel in future contingencies and operations, (3) clarifying how DOD will work with other government agencies in future contingencies and operations, and (4) incorporating the use and role of contractors into its plans to expand and transform the Army and the Marine Corps.

DOD relies on contractors as part of the total force, which the department defines as its active and reserve military components, its civil servants, and its contractors. As DOD's 2006 Quadrennial Defense Review noted, "The department and military services must carefully distribute skills among the four elements of the total force (Active Component, Reserve Component, civilians, and contractors) to optimize their contributions across the range of military operations, from peace to war." Furthermore, in a November 2007 briefing on challenges and opportunities associated with DOD's transformation efforts, the Comptroller General called on DOD to employ a total force management approach to planning and execution (e.g., military, civilian, and contractors). Similarly, the 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations called on the Army to transform its culture with regard to contracting and establish contracting as a core competency.

Many of the long-standing problems we have identified regarding the oversight and management of contractor support to deployed forces stem from DOD's reluctance to plan for contractors as an integral part of the total force. This is evidenced by the fact that DOD does not incorporate the use and role of contractors in its professional military education. For example, an official from the Army's Training and Doctrine Command said it was important that all DOD components incorporate into their institutional training information on the use of contractors in deployed locations so that all military personnel have a basic awareness of contractor support issues prior to deploying. We therefore recommended in our 2006 report that DOD develop training standards for the services on the integration of basic familiarity with contractor support into their professional military education. This would be an important first step toward incorporating the use and role of contractors across the department.
DOD needs to determine the appropriate balance between contractors and military personnel in deployed locations in order to ensure its ability to meet its future mission requirements while at the same time ensuring it has the capacity to oversee and manage contractors supporting those future missions. As the Comptroller General stated in April 2007, given DOD's heavy and increasing reliance on contractors in Iraq and elsewhere, and the risks this reliance entails, it may be appropriate to ask if DOD has become too reliant on contractors to provide essential services. This is becoming a more important issue as DOD becomes increasingly involved in missions such as stability operations. Looking toward the future, the department needs to consider how it will use contractors to support those missions and how it will ensure the effective management and oversight of those contractors. What is needed is a comprehensive, forward-looking review of contractor support to deployed forces that establishes the proper balance between contractor support and the core capabilities of military forces over the next several years. The National Defense Authorization bill for fiscal year 2008 would require the Secretary of Defense to conduct, every 4 years, a comprehensive assessment of the roles and missions of the armed forces and the core competencies and capabilities of DOD to perform and support such roles and missions. This could provide the foundation for a comprehensive examination of the support DOD will require contractors to provide in future operations and the core capabilities the department believes it should not be relying on contractors to perform. Only when DOD has established its future vision for the use and role of contractors supporting deployed forces can it effectively address its long-term capability to oversee and manage those contractors.

As DOD works to improve its oversight and management of contractors supporting deployed forces, it is increasingly working with other government agencies at those deployed locations. This has raised a number of issues that will likely continue to affect future operations unless the U.S. government acts to resolve them. For example, the Department of Defense and the Department of State need to determine who should be responsible for providing security to U.S. government employees and contractors working in contingency operations. If the U.S. government determines that it will use private security companies during contingency operations, it is imperative that DOD and the other agencies agree on regulations and procedures to govern the use of private security companies and clarify their rules of engagement. Another question that has come up in Iraq and may arise in future operations is which agency should be responsible for reconstruction efforts. Moreover, there are issues that arise from the different rules and regulations governing military personnel, DOD civilians, other government agency employees, and contractors who may all be living and working on the same installation. For example, concerns have been raised about the applicability of the Military Extraterritorial Jurisdiction Act to crimes committed by contractors who support agencies other than DOD at deployed locations. In addition, contractors working for DOD in Iraq and Afghanistan fall under military policies that prohibit alcohol use, gambling, and other behaviors.
However, contractors working for other agencies are generally not required to follow these policies, which can lead to tensions and erode military efforts to maintain discipline and morale. Given that DOD can expect to work more closely with other agencies in the future, the department will need to develop memoranda of understanding with those agencies and update its guidance to improve its working relationship with its partners across the U.S. government.

DOD also needs to address the role and use of contractor support to deployed forces as the department develops its plan to expand and transform its military forces. The department is in the process of planning for a substantial increase in the size of the Army and the Marine Corps. As it develops these plans, it is important that the department address the impact this growth in military forces will have on the contractor services needed to support those forces. Moreover, DOD should recognize that not all of the additional personnel need be dedicated to combat arms; a portion of that increase should be dedicated to expanding and enhancing the department's professional acquisition corps. In addition, as the department continues to transform its forces, DOD should ensure that it is addressing contract oversight and management requirements, such as personnel requirements. For example, the 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations recommended that the Army establish an Expeditionary Contracting Command that would be responsible for providing skilled, trained contracting personnel, assigned to deployable or deployed commands, to support expeditionary forces.

In closing, I believe the long-standing challenges DOD faces transcend the current operations in Iraq and Afghanistan and demand a comprehensive effort to resolve. As requested, we considered specific legislative remedies for the challenges facing DOD. While we believe that DOD bears the primary responsibility for taking actions to address the challenges discussed above, there are three actions Congress may wish to consider requiring DOD to take in order to move the debate forward:

Determine the appropriate balance of contractors and military personnel as it shapes the force for the future. A Quadrennial Defense Review-type study of contracting may be in order, one that comprehensively examines the support DOD will require contractors to provide in future operations and the core capabilities the department believes it should not be relying on contractors to perform. In addition, as the department continues to grow and transform its military forces, it should ensure that the role of contractor support to deployed forces is incorporated into its planning efforts.

Include the use and role of contractor support to deployed forces in force structure and capabilities reporting. DOD regularly reports on the readiness status, capabilities assessments, and other reviews of the status and capabilities of its forces. Given the reality that DOD is dependent on contractors for much of its support in deployed locations, the department should include information on the specific missions contractors will be asked to perform, the operational impacts associated with the use of contractors, and the personnel necessary to effectively oversee and manage those contractors. In addition, these reports should address the risks associated with the potential loss of contractor support.
Ensure that operations plans include specific information on the use and roles of contractor support to deployed forces. DOD guidance requires that contractor support be fully integrated into the logistics annex of operations and contingency plans. However, our previous work indicates that this is not being done at a sufficient level. Because of the increased use of contractors to support deployed forces and the variety of missions DOD may be asked to perform, Congress may want to take steps to gain assurances that operations plans for those missions sufficiently consider the use and role of contractors.

Mr. Chairman and members of the subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you may have. For questions about this statement, please contact Bill Solis at (202) 512-8365. Other individuals making key contributions to this statement include Carole Coffey, Assistant Director; Sarah Baker; Grace Coleman; and James Reynolds.

Defense Logistics: The Army Needs to Implement an Effective Management and Oversight Plan for the Equipment Maintenance Contract in Kuwait. GAO-08-316R. Washington, D.C.: January 23, 2008.
Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD's Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007.
Military Operations: High-Level DOD Action Needed to Address Long-standing Problems with Management and Oversight of Contractors. GAO-07-145. Washington, D.C.: December 18, 2006.
Rebuilding Iraq: Continued Progress Requires Overcoming Contract Management Challenges. GAO-06-1130T. Washington, D.C.: September 28, 2006.
Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risks Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.
Rebuilding Iraq: Actions Still Needed to Improve the Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006.
Rebuilding Iraq: Actions Needed to Improve Use of Private Security Providers. GAO-05-737. Washington, D.C.: July 28, 2005.
Interagency Contracting: Problems with DOD's and Interior's Orders to Support Military Operations. GAO-05-201. Washington, D.C.: April 29, 2005.
Defense Logistics: High-Level DOD Coordination Is Needed to Further Improve the Management of the Army's LOGCAP Contract. GAO-05-328. Washington, D.C.: March 21, 2005.
Contract Management: Opportunities to Improve Surveillance on Department of Defense Service Contracts. GAO-05-274. Washington, D.C.: March 17, 2005.
Military Operations: DOD's Extensive Use of Logistics Support Contracts Requires Strengthened Oversight. GAO-04-854. Washington, D.C.: July 19, 2004.
Military Operations: Contractors Provide Vital Services to Deployed Forces but Are Not Adequately Addressed in DOD Plans. GAO-03-695. Washington, D.C.: June 24, 2003.
Contingency Operations: Army Should Do More to Control Contract Costs in the Balkans. GAO/NSIAD-00-225. Washington, D.C.: September 29, 2000.
Contingency Operations: Opportunities to Improve the Logistics Civil Augmentation Program. GAO/NSIAD-97-63. Washington, D.C.: February 11, 1997.
The Department of Defense (DOD) relies extensively on contractors to support deployed forces for services that range from food and housing to intelligence analysis. Since 1997, GAO has reported on DOD's shortcomings in managing and overseeing its use of contractor support. Part of the difficulty underlying these shortcomings is that no one person or entity made the decision to send 129,000 contractors to Iraq. Rather, numerous DOD activities were involved, adding to the complexity of the problems that GAO identified in its past work on this topic. This testimony focuses on (1) the problems that DOD has faced in managing and overseeing its contractor support to deployed forces and (2) future challenges that DOD will need to address to improve its oversight and management of contractors at deployed locations. In addition, as you requested, we have developed several actions Congress may wish to consider requiring DOD to take. This testimony is based on previously issued GAO reports and testimonies on DOD's management and oversight of contractor support to deployed forces that focused primarily on U.S. efforts in Southwest Asia. This work was conducted in accordance with generally accepted government auditing standards.

DOD leadership needs to ensure implementation of and compliance with existing guidance to improve the department's oversight and management of contractors supporting deployed forces. While DOD issued comprehensive guidance on contractor support to deployed forces in 2005, we found little evidence that DOD components were implementing this and other guidance. As a result, several long-standing problems have hindered DOD's management and oversight of contractors at deployed locations, even in cases where DOD and its components have developed guidance related to these problems. These problems include failure to follow planning guidance, an inadequate number of contract oversight and management personnel, failure to systematically collect and distribute lessons learned, and a lack of comprehensive training for contract oversight personnel and military commanders. Our previous work in this area has identified several instances where poor oversight and management of contractors led to negative monetary and operational impacts. Based on our past work, several challenges will need to be addressed by DOD to improve the oversight and management of contractors supporting deployed forces in future operations and to ensure that warfighters receive the support they rely on in an effective and efficient manner. Those challenges include (1) incorporating contractors as part of the total force, (2) determining the proper balance of contractors and military personnel in future contingencies and operations, (3) clarifying how DOD will work with other government agencies in future contingencies and operations, and (4) incorporating the use and role of contractors into its plans to expand and transform the Army and the Marine Corps.
Treasury is authorized to use financial agents under several statutes, including the National Bank Acts of 1863 and 1864. Under these authorities, Treasury may employ financial institutions as financial agents of the government to perform all reasonable duties required of them, and it may designate various types of financial institutions as financial agents. Treasury also has issued regulations governing its designation of financial agents. Treasury designates financial institutions as financial agents through financial agency agreements. Financial agency agreements entered into by Treasury do not constitute procurement contracts under the purview of the Federal Acquisition Regulation. According to Treasury officials, the department uses financial agents to provide only financial services, and it uses a separate procurement process to acquire commercially available goods and equipment. In 2004, Congress provided Treasury with a permanent, indefinite appropriation to reimburse financial agents, and Treasury uses that appropriation to pay financial agents supporting Fiscal Service's revenue collections, payments, and other programs.

Treasury received additional authority to use financial agents under the Emergency Economic Stabilization Act of 2008 and the Small Business Jobs Act of 2010, which were passed in response to the financial crisis. The Emergency Economic Stabilization Act established the Office of Financial Stability within Treasury and provided Treasury with the authority to purchase and guarantee certain types of troubled assets under the Troubled Asset Relief Program to stabilize the economy and financial system. The Small Business Jobs Act established the Small Business Lending Fund and State Small Business Credit Initiative programs within Treasury to stimulate job growth, among other things. Both acts provide Treasury with the authority to designate financial institutions as financial agents to perform all such reasonable duties related to the acts. These acts also provide Treasury with the authority to designate more types of institutions as financial agents than do other general statutes, including, for example, security brokers or dealers. The financial agents designated to support these programs are paid from appropriations provided pursuant to those acts.

As shown in figure 1, four units within Treasury's Office of Domestic Finance use financial agents. Fiscal Service, among other things, provides central payment services to federal program agencies; operates the federal government's collections and deposit systems; issues, services, and accounts for all Treasury securities; and manages the collection of delinquent debt. According to agency officials, Fiscal Service uses financial agents more extensively than the other Treasury units and has designated a number of banks as financial agents to provide a variety of specialized financial services for its revenue collections, payments, and other programs. The Office of the Fiscal Assistant Secretary, according to Treasury officials, manages the programs created under the Housing and Economic Recovery Act of 2008, such as the Agency Mortgage Backed Securities Purchase Program, for which Treasury has designated financial institutions to provide custodial and asset management services. The Office of Financial Stability manages the Troubled Asset Relief Program created under the Emergency Economic Stabilization Act of 2008.
Treasury has designated banks, security brokers or dealers, and other entities as financial agents to support the act's implementation. The Office of Small Business, Community Development, and Affordable Housing Policy coordinates policy on, among other issues, small business finance and development, housing policy, and community and economic development. The office also oversees the Small Business Lending Fund, created by the Small Business Jobs Act of 2010, for which Treasury has used financial agents for custodial and asset management services.

Within Treasury, Fiscal Service (and its predecessors) is responsible for conducting Treasury's basic functions of collecting and holding federal taxes and other revenue and making federal payments. As shown in table 1, Fiscal Service currently manages 20 programs that use financial agents under 26 financial agency agreements to provide services in four areas: (1) revenue collections, (2) payments, (3) debt collection, and (4) Treasury securities. Its financial agents include some of the largest financial institutions in the country, and some of them serve as financial agents for multiple collections and payments programs. Of the four Fiscal Service program areas that use financial agents, revenue collections programs use the largest number of agents. Revenue collections programs use financial agents to collect federal revenue from individuals and businesses, including for taxes, student loan repayments, and customs duties. Payments programs use financial agents to help Fiscal Service disburse payments to individuals and businesses on behalf of federal agencies, such as benefit payments made by the Social Security Administration and the Department of Veterans Affairs and payments to businesses for goods and services provided to the federal government. The debt collection program uses a financial agent to operate a centralized service to assist federal agencies with the management of their accounts receivable. Fiscal Service's Treasury securities program area manages the issuance and sales of Treasury's marketable and nonmarketable securities. One Fiscal Service securities program uses a financial agent to provide custodial and related services for the myRA program, which offers retirement savings accounts, invested in a U.S. retirement savings security, for individuals without access to an employer-provided retirement savings program.

Congress has used reporting requirements and other mechanisms to oversee Treasury's use of financial agents. Although the National Bank Act and other statutes authorize Treasury to use financial agents, they do not require Treasury to report to Congress on its use of such agents. However, the Check Clearing for the 21st Century Act of 2003 required Treasury to submit (1) an annual report to Congress on its use of compensating balances and appropriations and (2) a final report following the transition from the use of compensating balances to the use of appropriations to pay financial institutions for their services as depositaries and financial agents. For the final report, Treasury was directed to analyze the transition cost, the direct costs of the services being paid from the authorized appropriations, and the benefits realized from the use of direct payment for such services rather than the use of compensating balances.
Treasury sent the final report to Congress in 2004 and has since reported annually, in the President's budget submission, the amount of permanent, indefinite appropriations used to pay financial agents each fiscal year. Unlike Treasury's other authorities, under the Emergency Economic Stabilization Act and Small Business Jobs Act, Congress imposed reporting requirements on Treasury for, among other things, compensation paid for its use of financial agents in the programs created under those acts, and it imposed audit or related mandates on GAO and others. Under the Emergency Economic Stabilization Act, Treasury is required to report to Congress every 30 days on, among other things, a detailed financial statement on the exercise of its authority under the act, including all agreements made or renewed and its operating expenses, including compensation paid to financial agents. The act also includes a provision for GAO to conduct oversight and report on its oversight of the Troubled Asset Relief Program's activities and performance, including agents and representatives, every 60 days. In one of the reports issued in response to that mandate, we assessed Treasury's approaches to acquiring financial agent and other services in support of the program. In addition, the act established the Congressional Oversight Panel to review the state of the financial markets and regulatory system and submit various reports to Congress. The Congressional Oversight Panel investigated and reported on Treasury's use of contractors and financial agents in the Troubled Asset Relief Program. Under the Small Business Jobs Act, Treasury is required to report to Congress semiannually on, among other things, all operating expenses, including compensation for financial agents, and all transactions made by the Small Business Lending Fund. That act also included a provision for GAO and the Treasury Inspector General to audit the Small Business Lending Fund program at least annually and semiannually, respectively.

Since the 1980s, Treasury has used financial agents to modernize its systems and keep pace with technological changes in providing financial services to the public. For example, Treasury has used financial agents to reduce the number of paper-based collection and payment transactions by moving them to electronic systems. Since 2008, Treasury also has undertaken several modernization efforts that have affected its use of financial agents. The total amount (outlays) that Treasury has paid Fiscal Service's financial agents increased from $378 million in 2005 to $636 million in 2015, partly in response to increased transactions and services. Although Treasury discloses in its annual budget the total amount paid to financial agents, it has not publicly disclosed in a central location information about Fiscal Service's individual financial agents, including their compensation and services provided.

While Treasury historically has used financial agents to physically hold and disburse public money, its use of financial agents began to evolve in the mid-1980s as it sought to reduce the number of paper-based collection and payment transactions by moving them to electronic systems in response to technological advancements, new laws, and other factors.
Subsequently, Treasury, through Fiscal Service, has continued to promote electronic transactions for its revenue collections and payments programs, including information systems for tracking those transactions, through various efforts to increase efficiency, reduce fraud, and promote transparency. In 1984, Congress directed Treasury to provide more electronic services for collecting payments. As more states took advantage of technological advances to implement electronic tax collection systems, Treasury began piloting programs modeled on individual states' programs that used financial agents to collect tax receipts electronically. For example, TAX-LINK was an early pilot program that used three financial agents to explore different concepts for implementing a nationwide electronic tax payment system. TAX-LINK evolved into the Electronic Federal Tax Payment System, which is Treasury's current program for collecting tax payments from the public electronically. Treasury, through Fiscal Service, uses a financial agent to operate the Electronic Federal Tax Payment System and to provide customer support for taxpayers using the system. As shown in figure 2, the Electronic Federal Tax Payment System expedites the collection process by collecting tax payments electronically rather than by paper check. The Check Clearing for the 21st Century Act of 2003 also allowed the conversion of paper checks into electronic images, called substitute checks, which are the legal equivalent of a paper check. As a result, Treasury developed the Electronic Check Processing program, which uses a financial agent to operate a web-based platform that converts paper check payments into electronic transactions, thereby reducing the time and costs associated with processing paper-based collections. According to Treasury's Fiscal Year 2015 Agency Financial Report, Fiscal Service collected 98 percent of the total dollar amount of U.S. government receipts electronically in fiscal year 2015. The Debt Collection Improvement Act of 1996 required that all federal payments made after January 1, 1999, be made electronically, subject to exceptions. In response, Treasury developed programs that use financial agents to help disburse payments electronically, particularly for benefit payments. For example, Treasury developed Electronic Transfer Accounts, which use financial agents to establish low-cost electronic accounts for recipients of federal benefit payments. To increase electronic payments in areas where Electronic Transfer Accounts were not available, Fiscal Service developed the Direct Express program in 2008, which uses a financial agent to provide prepaid debit card access to electronic benefit payments. In 2010, Treasury launched an "all-electronic" initiative, in part to further move federal benefit payments away from paper checks to electronic options. Under the initiative, Treasury required individuals receiving certain federal benefits to receive payments electronically, such as through Direct Express cards. According to Treasury officials, more than 98 percent of federal benefit payments are currently made electronically as a result of Treasury's expansion of its electronic payments programs, thus improving efficiency and reducing costs and fraud. Fiscal Service is exploring new ways to use modern payment technologies to further reduce the number of paper-based payments made by the federal government. 
For example, Fiscal Service is piloting a program that uses a financial agent to provide the settlement mechanism for payment services using mobile banking technologies, such as web-based payment systems. According to Treasury's Fiscal Year 2015 Agency Financial Report, nearly 95 percent of all Treasury payments were made electronically in fiscal year 2015.

Information Systems for Tracking Electronic Transactions

As a result of increased electronic transactions, Fiscal Service has developed programs that use financial agents primarily to collect and report information and data about electronic collections and payments transactions. For example, it implemented the Over the Counter Channel Application and the Collections Information Repository, which use financial agents to gather and store information about revenue collection transactions. The Over the Counter Channel Application and the Collections Information Repository do not hold or disburse public money; rather, they use financial agents to process and account for information on the collection of public money. For example, the Over the Counter Channel Application primarily collects data from the electronic processing of checks and provides a web-based application for federal agencies to access information on these transactions. The Collections Information Repository provides a web-based means of tracking, reconciling, and storing revenue collections transactions. In response to a 2009 Presidential memorandum on data transparency, Fiscal Service made data about revenue collections more accessible to federal agencies through the Collections Information Repository. Treasury has undertaken various efforts to modernize or streamline its collections, payments, and other programs to help increase efficiency and transparency and reduce costs. Although Treasury's modernization efforts primarily focused on how it delivered services through its programs and not necessarily on its use of financial agents, two of the modernization efforts involved revenue and debt collection programs that used financial agents. In 2008, Treasury initiated its Collections and Cash Management Modernization effort, aimed at simplifying and modernizing its collections and cash management programs and reducing redundancy. Fiscal Service used eight financial agents to help support its collections programs in 2010 and reduced the number to seven by year-end 2015. According to Treasury, the effort was designed to reduce the duplication of data, applications, and interfaces, promoting a more efficient use of resources. In 2012, Treasury developed the Centralized Receivables Service to centralize and improve the efficiency of federal agencies' collection of accounts receivable. To develop the service, Fiscal Service worked jointly with the Office of Financial Innovation and Transformation, which was created in 2010 to identify and implement innovative solutions to help government agencies become more efficient and transparent in federal financial management. Before the development of the service, many agencies operated their own accounts receivable programs, which Treasury noted were fragmented and inefficient. The Centralized Receivables Service uses a financial agent to centralize receivables collections services across agencies. According to Treasury, the service has increased the collection of receivables and reduced agency costs. 
Since Treasury received the permanent, indefinite appropriation to reimburse financial agents, the total amount (outlays) that Treasury has paid Fiscal Service's financial agents has increased steadily, from approximately $378 million in fiscal year 2005 to approximately $636 million in fiscal year 2015 (see fig. 3). As discussed previously, Treasury paid its financial agents through compensating balances—non-interest-bearing cash balances—before it received a permanent, indefinite appropriation. Prior to receiving the appropriation, Treasury did not report the amount of such compensation in its annual budget submissions. Treasury officials told us that they did not have data on the compensation paid to financial agents before April 2004 and could not determine the amount that financial agents were paid through those compensating balances. Treasury did not create any new programs that used financial agents in fiscal year 2004, and according to Treasury officials, the compensation to financial agents would have been similar for fiscal years 2003 and 2004. The increase in total compensation to financial agents between fiscal years 2004 and 2015 was driven partly by increases in transaction volumes and an expansion in the scope of certain financial agent services. For example, the Card Acquiring Service, the largest revenue collections program in terms of cost, uses a financial agent to process debit and credit card payment transactions at federal agencies. The financial agent's compensation is based largely on the number of transactions it processes, and the increase in card transactions by the public has led to an increase in its compensation. According to Fiscal Service officials, the financial agent processed over 65 million transactions in fiscal year 2007 and over 133 million transactions in fiscal year 2015. Treasury compensated the financial agent $101 million in fiscal year 2007 and $172 million in fiscal year 2015. As another example, a financial agent operates a specialty lockbox program to process passport applications and fees. According to Treasury, the costs for the passport lockbox program increased steadily after the passage of the Intelligence Reform and Terrorism Prevention Act of 2004, which required passports or other accepted documents for travel into and out of the United States from Canada, Mexico, and the Caribbean. Treasury reported that its financial agent hired hundreds of new employees and invested in infrastructure to handle the increased application volume, which grew from 10.8 million applications in fiscal year 2006 to 12.4 million applications in fiscal year 2015. In fiscal year 2015, the compensation to the financial agent for the passport lockbox program was $62 million, about 10 percent of all compensation paid to financial agents that year. As shown in figure 4, revenue collections programs, which include the Electronic Federal Tax Payment System, the Card Acquiring Service, and various lockbox programs, among others, accounted for $583 million (92 percent) of all financial agent compensation in fiscal year 2015. Compensation for payments programs, $37 million, accounted for 6 percent of total financial agent compensation in fiscal year 2015. 
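The figures above lend themselves to a quick back-of-the-envelope check. The short Python sketch below is ours, not part of this report's methodology or of any Treasury system; it simply recomputes, from the numbers quoted above, the average compensation per Card Acquiring Service transaction and the program-area shares of fiscal year 2015 compensation (all variable names are illustrative).

```python
# Illustrative arithmetic only, based on figures quoted in this report;
# nothing here reflects an actual Treasury system or data source.

# Card Acquiring Service: compensation and transaction volumes cited above.
fy2007_comp, fy2007_txns = 101e6, 65e6
fy2015_comp, fy2015_txns = 172e6, 133e6

print(f"FY2007: ${fy2007_comp / fy2007_txns:.2f} per transaction")  # ~$1.55
print(f"FY2015: ${fy2015_comp / fy2015_txns:.2f} per transaction")  # ~$1.29

# Total compensation rose about 70 percent while transaction volume roughly
# doubled, so average compensation per transaction fell -- consistent with
# the report's point that volume growth, not higher unit prices, drove the
# increase in total compensation.

# Program-area shares of the $636 million paid in fiscal year 2015.
total = 636e6
for area, amount in [("revenue collections", 583e6), ("payments", 37e6)]:
    print(f"{area}: {amount / total:.0%} of FY2015 compensation")  # 92%, 6%
# The roughly 2 percent remainder presumably corresponds to the debt
# collection and Treasury securities program areas (our inference).
```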
Although Treasury publicly discloses the total amount of compensation paid to Fiscal Service's financial agents in its annual budget submissions, it does not provide more detailed information about these financial agents in a central location, such as on its website. For example, Treasury does not fully disclose in a central location the number of Fiscal Service's active financial agency agreements, the types of services provided to Fiscal Service under the agreements, and the amount of compensation paid to each financial agent for its services. Treasury officials told us that Treasury is not required to publicly disclose Fiscal Service's financial agency agreements on its website and has not determined a need to do so. In contrast, Treasury's Office of Financial Stability has provided on its public website copies of the 27 financial agency agreements that it entered into to manage the Troubled Asset Relief Program and the amount obligated to compensate each agent. According to Treasury officials, the Office of Financial Stability made its financial agency agreements available to the public based on a policy decision to promote the Troubled Asset Relief Program's transparency. According to the Office of Management and Budget's directive on open government, transparency promotes accountability by providing the public with information about government activities. Because Treasury does not fully disclose in a central location information about Fiscal Service's use of financial agents, including the types of services provided and compensation paid under each agreement, the public and Congress may not know how much Treasury is spending to obtain services from financial agents or what those services are and, thus, may be less able to hold Treasury accountable for such spending. In addition, by improving how it publicly discloses information about its use of financial agents, Treasury would allow the public and Congress to better understand and assess the scope and value of federal investments. Fiscal Service has established a process, which includes internal controls, for selecting and designating its financial agents. While Fiscal Service did not fully document compliance with its process, including controls, for financial agents designated between 2010 and 2015, it adopted new procedures in November 2015 to provide greater assurance that its documentation will be complete. According to Fiscal Service officials, the decision of whether to perform a program in-house or through financial agents does not often arise because Fiscal Service does not frequently create new programs that use financial agents. Many factors influence the agency's decision on whether to use a financial agent, including statutory authority, costs, the availability and expertise of Treasury staff versus other providers, and the nature and complexity of the services. The decision to use a financial agent for a new program or to renew or amend an existing financial agency agreement is made formally by the assistant commissioner responsible for the particular program, with approval by the Fiscal Service commissioner. Moreover, Fiscal Service's Office of Chief Counsel typically is involved in all phases of the process, including advising on whether a financial agent may be used for a particular project. Fiscal Service has developed a financial agent selection process (FASP) that it uses internally to guide its selection and designation of financial agents. It has documented the process in its FASP guidance, which, according to Fiscal Service officials, has existed in written form since 2005; a 2010 version was updated in November 2015. 
The guidance divides the process into four phases: (1) initiation of the FASP, (2) publication of a financial agent solicitation, (3) selection of the best proposal submitted by a financial institution, and (4) designation of the financial institution as a financial agent. In addition to documenting the steps in the process, the 2015 FASP guidance incorporates internal controls that generally are applicable to Fiscal Service's program offices or selection teams in selecting and designating financial agents. The FASP process and related controls help provide reasonable assurance that the selection and designation process is effective and efficient, documents important information, and complies with applicable laws and regulations. The initiation phase includes all of the steps that Fiscal Service's program offices must complete before drafting and publicizing a financial agent solicitation. The first steps include obtaining approvals to use a financial agent. Such steps and related internal controls include Fiscal Service's program offices taking the following actions: consulting with the Office of Chief Counsel as to whether designating an agent is acceptable for the particular project; obtaining approval from the appropriate assistant commissioner to designate a financial institution to provide the services; and creating appropriate governance documentation, including a business case or alternatives analysis, to justify the need for a particular service, which is reviewed by the Investment Review Board for a new program or by the assistant commissioner for an existing program selecting a new financial agent. In addition, the FASP guidance highlights the need for program offices to consider as early as possible the portability of the financial agent services—that is, the ability to transfer services from one agent to another with minimum difficulty. According to the guidance, portability helps to ensure that a program can continue without interruption if services need to be transferred to another agent and promotes competitive pricing and high-quality service. The next steps and related controls focus on planning and include Fiscal Service's program offices taking the following actions: developing and documenting a FASP high-level strategy that outlines the services needed and the process for obtaining them, such as a solicitation open to all or to a limited number of financial institutions; forming a selection team that consists of representatives, as needed, from various areas; working with the Office of Chief Counsel to draft a financial agency agreement, using the model agreement as a starting point; drafting and updating, as needed, a FASP project plan, which is a schedule of activities, action items, and expected time frames for completion; and specifying the criteria that will be used to evaluate and select financial agents. The FASP guidance also discusses two other internal controls in this phase. First, employees involved in selecting or designating the financial agent should complete ethics training before their involvement in the FASP. Second, program offices are to prepare, assemble, and maintain throughout the process an administrative record comprising documents that describe and support the decisions made in each phase. 
The solicitation phase generally involves the selection team, in collaboration with the Office of Chief Counsel, writing the financial agent solicitation; publishing the solicitation to notify eligible financial institutions about the FASP; and holding information sessions with eligible financial institutions, if needed. Internal controls discussed in the guidance include that (1) the selection team should have the solicitation's content approved by an assistant commissioner before it is distributed and (2) the solicitation should, among other things, state that interested financial institutions must submit a proposal to be considered and, by submitting a proposal, are agreeing to the FASP approach under which the selection will be conducted. The FASP guidance notes that a financial institution should describe in its proposal its ability to perform the work, which may include its experience in providing the same or similar services, ability to meet security requirements, personnel and infrastructure capabilities, and private sector and government references. The selection phase spans the receipt of proposals from financial institutions to the selection (but not designation) of a financial institution as a financial agent. According to the 2015 FASP guidance, employees involved in selecting or designating the financial agent should sign a conflict-of-interest statement before evaluating proposals. Other key steps and related controls during this phase include the selection team taking the following actions: having its members independently rate the proposals of financial institutions; holding individual information sessions with the financial institutions determined to be best able to meet the needs identified in the solicitation and requiring them to sign an acknowledgment form indicating that, if selected, they will accept the terms of the financial agency agreement, subject to negotiation of services and other terms; using the selection criteria and scoring methodology previously created to determine which financial institutions are least qualified to perform the required services and notifying those institutions that they were not selected; asking the remaining financial institutions to produce a "best and final" offer and evaluating those offers against the selection criteria; and negotiating with the financial institution that submitted the best overall offer to obtain the best possible combination of service level, price, and quality. Following its selection of the financial institution, the selection team must prepare a recommendation memorandum explaining the reasons for recommending the financial agent and a selection decision memorandum, which the assistant commissioner signs to indicate his or her approval of the final selection. According to the 2015 FASP guidance, except in an exigency, no designation of a financial agent should be made without being preceded or accompanied by a recommendation memorandum and a selection decision memorandum. Fiscal Service officials said that before approving the selection, the assistant commissioner should obtain the approval of the deputy commissioner and, on a case-by-case basis, the commissioner. The designation phase involves designating the selected financial institution as a financial agent and closing out the process. 
The financial agency agreement is used to designate a financial institution as a financial agent, and the agreement is signed by authorized representatives of the financial institution and Fiscal Service. The 2015 FASP guidance directs the program office responsible for designating the financial agent to provide Fiscal Service's Bank Policy and Oversight (BPO) Division with an electronic copy of its administrative record. In turn, the guidance directs BPO to use a checklist to provide assurance that the necessary documents for the administrative record have been created and delivered. Unlike the 2010 FASP guidance, the 2015 FASP guidance includes a two-part addendum that provides guidance on financial agent compensation. Part one seeks to establish consistent compensation policies across Fiscal Service's financial-agent-related business lines. It discusses different pricing methodologies that can be used to compensate financial agents and instructs that the selected methodology should be based on (1) the financial agent's ability to minimize the government's costs under normal and changing conditions while providing the highest possible quality of service and (2) the degree to which the prices of the financial agent services can be compared to the prices of similar or identical financial industry services as a way of gauging cost containment. Part two seeks to reduce the need for specialized compensation policy negotiations by delineating Fiscal Service's compensation policies. In brief, it generally specifies the conditions under which Fiscal Service will compensate a financial agent for severance pay, retention pay, overhead, leased real property, owned real property, and equipment. All Treasury employees, including Fiscal Service employees, are subject to the same conflict-of-interest requirements that apply to all executive branch employees. For example, employees meeting certain criteria must file financial disclosures, which are reviewed internally by attorneys, and take annual ethics training. In addition, Fiscal Service has an employee conduct policy, which addresses outside activities, gifts, and other topics relevant to conflicts of interest. As discussed previously, the 2015 FASP guidance requires employees involved in selecting or designating a financial agent to complete ethics training before their involvement in a FASP and to sign a conflict-of-interest statement before evaluating financial agent proposals. According to Fiscal Service officials, Fiscal Service has no specific conflict-of-interest rules that apply to financial agents that provide services for Fiscal Service programs. However, financial agency agreements generally state that financial agents owe a fiduciary duty of loyalty and fair dealing to the United States and require them to certify annually that they are not delinquent on any federal tax obligation or other debt owed to the United States. Fiscal Service officials also told us that Fiscal Service takes steps to identify and mitigate potential conflicts of interest in drafting the financial agency agreement. For example, Fiscal Service did not want the myRA program's financial agent using myRA data to sell or cross-market its own financial products to myRA account holders. To that end, the agreement specifies that the agent may use confidential information received in connection with the agreement only for the purposes of fulfilling its duties under the agreement and not for its own commercial purposes or those of a third party. 
In contrast, as required by the Emergency Economic Stabilization Act of 2008, Treasury issued regulations to address and manage actual and potential conflicts of interest that could arise under the act, including from financial agency agreements. The regulations require, among other things, prospective financial agents to provide Treasury with sufficient information to evaluate any organizational conflicts of interest and plans to mitigate them. For example, an existing or potential financial agent under the Troubled Asset Relief Program that provides advice or asset management services to clients that own certain assets under the program would be required to disclose that fact. Fiscal Service generally does not face such conflicts of interest because it uses agents primarily to provide payment and collection services rather than services related to the acquisition, valuation, disposition, or management of assets. Financial agency agreements generally state that the agent, once designated as a financial agent, owes Treasury a fiduciary duty of loyalty and fair dealing when acting as a financial agent of the United States and agrees to act at all times in the best interests of the United States when carrying out its responsibilities under the agreement. Fiscal Service officials said that if a financial agent faced a conflict of interest under its agreement, the agent would have a duty to disclose and address that conflict. Based on a recommendation recently made by the Treasury Inspector General, Fiscal Service amended its model financial agency agreement to include a provision requiring the financial agent to notify the Inspector General if it becomes aware of any possible violation of federal criminal law regarding fraud, conflict of interest, bribery, or illegal gratuities affecting services performed under the financial agency agreement. Between 2010 and 2015, Fiscal Service created three new programs (the Centralized Receivables Service, myRA, and the Non-Traditional Alternative Payments Service) and selected a financial agent for each, according to Treasury officials. For the Centralized Receivables Service, a pilot program that federal agencies use to manage accounts receivable, officials told us that they evaluated providing the service in-house but instead used a financial agent to take advantage of the expertise of commercial banks in receivables processing and collection and to start the program as quickly as possible. Similarly, Treasury officials said they decided to use a financial agent for myRA, a retirement savings program, because Fiscal Service (1) had not been qualified to act as a Roth IRA custodian under IRS rules, (2) had not yet established the necessary infrastructure to operate a Roth IRA program, and (3) could implement the program more quickly by using a financial agent. For the Non-Traditional Alternative Payments Service, which offers recipients alternative ways to receive federal payments, Fiscal Service officials said that they needed a financial agent to maintain a settlement account and process payments. Fiscal Service also selected financial agents to provide traditional banking services for several existing programs, including the Stored Value Card Funds Pool and the Navy Cash Open-Loop Program. 
For the Stored Value Card Funds Pool and the Navy Cash Open-Loop Program, which provide electronic payment alternatives to cash, Fiscal Service officials said that they needed financial agents to maintain settlement accounts and, in the case of Navy Cash, to issue prepaid cards and process transactions. As previously discussed, the FASP guidance requires, as an internal control, Fiscal Service's program offices to prepare and maintain an administrative record—a compilation of documents generated during a FASP that describes and supports the decision making. According to Fiscal Service officials, the administrative record's purpose is to provide Treasury with a basis of defense in the event of litigation, to memorialize the decisions made during the FASP, and to document Fiscal Service's compliance with the FASP guidance, including key controls. We requested copies of the administrative records for the five financial agents selected between 2010 and 2015, and Fiscal Service provided us with copies of four of the records. Fiscal Service officials said that an administrative record may have existed for the agent designated in 2010 for the Stored Value Card Funds Pool, but they could not locate it. For the four records we received, we reviewed each administrative record to assess the extent to which it (1) contained the documents listed in the 2010 FASP guidance and, in turn, (2) documented compliance with Fiscal Service's internal controls set forth in that guidance. We used the 2010 FASP guidance as our criteria because it was the guidance in effect at the time; the most recent FASP guidance was not issued until November 2015. The 2010 FASP guidance lists 11 types of documents normally included in every administrative record. Based on our review of the four administrative records and in light of the missing administrative record, we found that the completeness of the records varied. None contained all of the documents listed in the 2010 FASP guidance, but three contained the majority. For example, the record for myRA, a new retirement savings program using a financial agent to provide custodial services, contained 6 of 11 key documents—missing, for example, certain planning and approval documents. As a result, the documents comprising the administrative records varied in the extent to which they complied with Fiscal Service's internal controls set forth in the 2010 FASP guidance. More specifically, we found the following in our review of the administrative records (excluding the missing administrative record). Initiation Phase: Two of the four administrative records included a FASP plan that outlined the services needed and the process for obtaining those services, but the other two did not. One of the four administrative records included documentation of the assistant commissioner's approval to designate a financial institution as a financial agent, but the other three did not. Solicitation Phase: Three of the four administrative records included the solicitation announcing the FASP, but one did not. However, the one missing the solicitation covered a financial institution that was directly designated as a financial agent. According to the FASP guidance, a solicitation is not required under a direct designation. 
The three administrative records with solicitations also included documentation of the proposals submitted by the financial institutions and other correspondence between Fiscal Service and the financial institutions. Finally, the three records included the criteria that Fiscal Service planned to use to evaluate and select the financial institutions as financial agents. Selection Phase: None of the four administrative records included acknowledgment forms signed by the financial institutions indicating that they would, if selected, accept the terms of the financial agency agreement. Three of the four records contained (1) Fiscal Service's analyses of the financial institutions' proposals based on the selection criteria and (2) the selection decision memorandums that were signed by an assistant commissioner. The other record did not contain such documentation. Finally, two of the four records included documentation of meetings between Fiscal Service and the financial institutions, but the other two did not. Designation Phase: All four of the administrative records included the financial agency agreements signed by Fiscal Service and the financial institutions. However, one included an amended agreement and not the original agreement. The missing administrative record and the incompleteness of the other records highlight a lack of compliance with internal controls, which are intended to provide reasonable assurance that the agency achieves its objectives, and could undermine Treasury's ability to defend itself in litigation. According to Fiscal Service officials, any legal protest likely would arise soon after a financial agent decision was made, so they could collect any needed documents from the program office. Importantly, however, there is no assurance that program offices would be able to produce any missing documents. For example, consistent with our findings, a report issued by the Treasury Inspector General in 2015 disclosed instances in which Fiscal Service was unable to produce requested documents concerning its use of financial agents. In response to the finding, the Inspector General recommended that Fiscal Service ensure that the selection process for financial agents is documented and that the documentation is maintained through the life of the financial agency agreement. Fiscal Service agreed with the recommendation and noted that it was revising its FASP guidance and expected to complete the revisions by year-end 2015. As discussed earlier, Fiscal Service issued its revised FASP guidance in November 2015. Although none of the administrative records that we reviewed were complete and one was missing, Fiscal Service's revised 2015 FASP guidance includes new procedures designed to address the deficiency. Unlike the 2010 guidance, the 2015 guidance instructs not only Fiscal Service's program offices to provide BPO with an electronic copy of their administrative records at the end of a FASP but also BPO to use a checklist to ensure that the necessary documents have been created and electronically delivered to BPO. BPO developed a checklist covering 18 of the 19 types of documents listed in the 2015 FASP guidance as examples of documents to be maintained in the administrative record and incorporated fields for verifying whether each document was provided. In addition, the checklist includes fields to document the reviewer's name, the date of the administrative record's review, and comments on the administrative record. 
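To make the checklist mechanism concrete, the following minimal Python sketch shows one way such an administrative-record checklist could be represented and queried. It is our illustration only, not Fiscal Service's actual checklist or system; the document types, field names, and program name are hypothetical.

```python
# A minimal sketch of an administrative-record checklist of the kind BPO
# uses; document types and field names are illustrative, not Fiscal Service's.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ChecklistItem:
    document_type: str      # e.g., "Financial agent solicitation"
    provided: bool = False  # checked off when the document is delivered to BPO
    comments: str = ""

@dataclass
class AdministrativeRecordReview:
    program: str
    reviewer: str
    review_date: date
    items: List[ChecklistItem] = field(default_factory=list)

    def missing_documents(self) -> List[str]:
        """Return the document types not yet delivered with the record."""
        return [i.document_type for i in self.items if not i.provided]

# Example usage with two illustrative document types:
review = AdministrativeRecordReview(
    program="Hypothetical payments program",
    reviewer="J. Analyst",
    review_date=date(2016, 1, 15),
    items=[
        ChecklistItem("Financial agent solicitation", provided=True),
        ChecklistItem("Selection decision memorandum"),
    ],
)
print(review.missing_documents())  # ['Selection decision memorandum']
```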
According to Fiscal Service officials, BPO trained Fiscal Service's program offices on the revised 2015 FASP guidance. Moreover, BPO's training slide presentation included a copy of the checklist and examples of the documents to be maintained in the administrative record. As noted, the 2015 FASP guidance was not in effect for the administrative records that we reviewed. However, by conducting its checklist review in future FASPs, BPO should be able to better ensure that administrative records are complete. Such actions should provide reasonable assurance that Fiscal Service is complying with its FASP guidance, including the key controls that help it achieve its objectives. Treasury has expanded its use of financial agents through its Bureau of the Fiscal Service to modernize its systems and keep pace with technological changes in providing financial services to the public. However, Treasury has not publicly disclosed in a central location information about Fiscal Service's individual financial agency agreements, such as a description of the services provided under each agreement and the amount paid to each agent for its services. Without such information, the public and Congress are less able to hold Treasury accountable for such spending. In addition, by publicly disclosing more information about its use of financial agents, Treasury would allow the public and Congress to better understand and assess the scope and value of federal investments. To promote transparency and accountability of federal spending, the Commissioner of the Fiscal Service should make basic information about Fiscal Service's use of financial agents publicly available in a central location, including the compensation paid to each financial agent under its financial agency agreement and a description of the services provided. We provided a draft of this report to Treasury for review and comment. In its written comments (reproduced in app. V), Treasury concurred with our findings and recommendation regarding transparency and accountability. It said that Fiscal Service will make basic information about its financial agents publicly available, including information about compensation and services rendered. In addition, Treasury provided technical comments on the draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Our objectives were to examine (1) how the Department of the Treasury's (Treasury) use and compensation of financial agents have changed as it has modernized its payment and collection systems and (2) the Bureau of the Fiscal Service's (Fiscal Service) process and related internal controls for selecting and designating financial institutions as financial agents. 
To examine how Treasury’s use and compensation of financial agents has changed as it has modernized its payment and collection systems, we reviewed federal statutes, regulations, and directives that have guided Treasury’s use of financial agents; Treasury’s annual budget documents; documentation on current and former Treasury programs using financial agents, including compensation data and descriptions of services provided by financial agents; financial agency agreements and amendments to those agreements; audit or similar reports issued by GAO, Treasury’s Office of the Inspector General, or others; and congressional testimony from a Treasury official. We used Treasury’s budget data for fiscal years 2004 through 2015, the most recent data available at the time of our review, to analyze the total amount paid to financial agents since enactment of the permanent, indefinite appropriation. We also obtained compensation data from Fiscal Service on the amount it compensated each of its financial agents in fiscal years 2014 and 2015 to conduct a more in-depth analysis of the total amount of compensation for collection, payment, and related services. We assessed the reliability of the data by interviewing knowledgeable officials, conducting manual testing on relevant data fields for obvious errors, and reviewing a recent audit. Based on these steps, we found the data to be sufficiently reliable for the purposes of our analyses. Finally, we interviewed officials in various units within Treasury involved in the selection and designation of financial agents, including Fiscal Service and the Office of Financial Stability. To examine Fiscal Service’s process and related internal controls for selecting and designating financial institutions as financial agents, we reviewed federal statutes and regulations authorizing or governing Treasury’s use of financial agents; Fiscal Service’s policies and procedures and related documentation for selecting and designating financial agents, including financial agency agreements, financial agent solicitations, and selection decision memoranda; and audit or similar reports issued by GAO, Treasury’s Office of the Inspector General, or others. We assessed Fiscal Service’s 2010 and 2015 financial agent selection process (FASP) guidance, which documents its process and related internal controls for selecting and designating financial agents against the standards for internal control in the federal government. In addition, we reviewed internal records that Fiscal Service officials generated to document key decisions made in their selection and designation of five financial agents between January 2010 and December 2015 to assess compliance with Fiscal Service’s policies and procedures. We compared those records to the types of documentation listed in Fiscal Service’s 2010 FASP guidance, which was in effect for the five FASPs we reviewed, to assess Fiscal Service’s compliance with its FASP guidance, including key controls. We interviewed officials in various units within Treasury involved in the selection and designation of financial agents, including Fiscal Service and the Office of the Fiscal Assistant Secretary. We conducted this performance audit from January 2016 to January 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Between 2008 and 2010, Congress passed several laws that established or led to the establishment of a number of programs designed to promote U.S. financial stability and address other effects of the financial crisis. The Department of the Treasury (Treasury) has designated financial institutions as financial agents to provide services under the programs. The Housing and Economic Recovery Act of 2008 (HERA) provided Treasury with authority to purchase obligations and securities issued by Fannie Mae and Freddie Mac, the housing government-sponsored enterprises (GSE). Under this authority, Treasury created the GSE Mortgage-Backed Securities Purchase Program to help support the availability of mortgage credit by temporarily providing additional capital to the mortgage market. By purchasing those securities, Treasury sought to broaden access to mortgage funding for current and prospective homeowners and to promote market stability. Treasury used its existing authorities to designate three financial institutions as financial agents to provide asset management, custodian, and other services for the program, and it has one active financial agency agreement as of October 2016. The Emergency Economic Stabilization Act of 2008 (EESA) established the Office of Financial Stability within Treasury and authorized the Troubled Asset Relief Program, in part to restore liquidity and stability to the U.S. financial system. Among other things, EESA authorized Treasury to buy up to $700 billion (later reduced to $475 billion) in "troubled assets" as defined under the act and to designate financial institutions as financial agents to perform all such reasonable duties related to the act. Treasury entered into 27 financial agency agreements with 23 financial institutions, including banks, security brokers or dealers, and insurance companies, as financial agents to support the act's implementation, and it has four active financial agency agreements as of October 2016. The Troubled Asset Relief Program, in conjunction with other federal actions, was designed to help restore stability to the financial system, including by providing capital to financial institutions and helping homeowners prevent avoidable foreclosures. The Small Business Jobs Act of 2010 (SBJA), among other things, established the Small Business Lending Fund to address the ongoing effects of the financial crisis on small businesses by providing Treasury with temporary authority to make capital investments in eligible institutions, thereby increasing the availability of credit for small businesses. As authorized by SBJA, Treasury has, as of October 2016, active financial agency agreements with two financial institutions that it designated as financial agents to provide asset management and custodian services. From fiscal year 2009 through fiscal year 2015, Treasury paid financial agents a total of $1.3 billion for their services under the HERA, EESA, and SBJA programs. As shown in figure 5, financial agents under the EESA programs account for the large majority of the total compensation paid to these financial agents. 
Financial agents under the HERA programs are paid with Treasury's permanent, indefinite appropriation, but financial agents under the EESA and SBJA programs are paid from appropriations provided pursuant to those acts. The Department of the Treasury's Bureau of the Fiscal Service has four program offices that use financial agents: (1) Revenue Collections Management, (2) Payment Management, (3) Debt Management Services, and (4) Treasury Securities Services. Tables 2 through 5 below show the active programs managed by these program offices that use financial agents, a description of the program, the financial agent, and the effective date of the current financial agency agreement.

myRA® (my retirement account) is a Roth Individual Retirement Account (IRA) that invests in a new U.S. Treasury retirement savings bond. It is designed to facilitate retirement savings for individuals without access to an employer-provided retirement savings program. In January 2014, the President issued a memorandum directing the Secretary of the Department of the Treasury (Treasury) to develop a new retirement savings security focused on reaching new and small-dollar savers. In response, Treasury developed myRA and launched the program nationally in November 2015. Treasury's Bureau of the Fiscal Service (Fiscal Service) developed the myRA program and used its authority to designate a financial agent to administer customer investments in and serve as the custodian for myRAs. Treasury officials said that they decided to use a financial agent for myRA because Fiscal Service (1) had not been qualified to act as a Roth IRA custodian under IRS rules, (2) had not yet established the necessary infrastructure to operate a Roth IRA program, and (3) could implement the program more quickly by using a financial agent. Although Fiscal Service uses Federal Reserve banks as fiscal agents to serve as custodians for its other savings bond programs, Treasury officials said that such banks cannot serve as custodians for Roth IRAs. According to Treasury officials, Fiscal Service attorneys analyzed the statutory authority for issuing savings bonds under the myRA program and historical precedent for using a financial agent to help carry out the myRA program. Treasury officials stated that Fiscal Service found examples of programs similar to myRA in Treasury's annual reports. Treasury officials told us this is not the first time that Treasury has used a fiscal or financial agent to hold securities or maintain accounts for others. For example, Fiscal Service uses banks as financial agents in payment programs to allow individuals to receive payments electronically in the form of prepaid debit cards. It also uses Federal Reserve banks, as fiscal agents, to maintain book-entry accounts for savings bonds and marketable securities and to hold collateral pledged in lieu of surety bonds. The financial agent for myRA holds a Treasury retirement savings bond on behalf of each individual accountholder.

Fiscal Service Generally Followed Its Financial Agent Selection Process for the myRA Financial Agent but Did Not Fully Document Its Process

Fiscal Service generally followed its 2010 financial agent selection process (FASP) guidance in selecting and designating Comerica Bank as the financial agent for the myRA program. The guidance documents the FASP steps, including related internal controls, in initiating the process, soliciting proposals and evaluating submissions, and selecting and designating a financial agent. 
The following is a summary of Fiscal Service's selection and designation process for the myRA program, based on the administrative record provided by Treasury. Fiscal Service formed a selection team to review the applications and recommend which applicant to designate as the financial agent. The team consisted of six employees chosen to bring a breadth of expertise to the selection process. Fiscal Service developed a cost estimate for the services to be provided by a financial agent under the myRA program. In February 2014, Fiscal Service notified approximately 10,000 financial institutions about its financial agent solicitation through announcements distributed through the Federal Reserve's bank communication system and American Banker, a news periodical on banking and finance. By the close of the initial application period in March 2014, Fiscal Service had received two applications, both from entities that were not eligible to serve as a financial agent because they were not financial institutions as defined by the laws governing Treasury's use of financial agents. It extended the application period and received an application from Comerica Bank and a resubmitted application from an entity previously determined not to be eligible. Fiscal Service initially reviewed Comerica's application against the criteria provided in the solicitation and held a conference call with Comerica in May 2014 to further discuss its application. Fiscal Service held a follow-up meeting with Comerica, which subsequently provided Fiscal Service with proposed pricing information. Each member of the selection team individually rated Comerica's application using the program requirements set forth in the solicitation. Fiscal Service requested and reviewed references for a firm that was partnering with Comerica. Fiscal Service compared its cost estimate to Comerica's cost estimate and found the two to be comparable. The selection team prepared a recommendation memorandum, which a Fiscal Service assistant commissioner signed in June 2014. Fiscal Service and Comerica executed the financial agency agreement in July 2014. As discussed in this report, we reviewed Fiscal Service's administrative records for four FASPs conducted between 2010 and 2015, including the FASP for the myRA program. Under the 2010 FASP guidance, Fiscal Service's program offices were required to maintain an administrative record comprising documents generated during a FASP that describe and support the decision-making process. We found that the myRA administrative record contained 6 of the 11 types of documents listed in the guidance, such as the solicitation, memorandums of meetings with the financial institutions, the selection decision memorandum, and the financial agency agreement. While some documents were missing from the administrative record, changes in the 2015 FASP guidance should help Fiscal Service provide assurance that documentation is complete, as previously discussed. All Treasury employees, including Fiscal Service employees, are subject to the same conflict-of-interest requirements that apply to all executive branch employees, as discussed previously in this report. For example, employees meeting certain criteria must file financial disclosures, which are reviewed internally by attorneys, and take annual ethics training. In addition, Fiscal Service has an employee conduct policy, which addresses outside activities, gifts, and other topics relevant to conflicts of interest. 
The 2015 FASP guidance states that employees involved in selecting or designating a financial agent should complete ethics training before their involvement in a FASP and sign a conflict-of-interest statement before evaluating financial agent proposals. Under the terms of its financial agency agreement, the financial agent for myRA owes a fiduciary duty of loyalty and fair dealing to the United States when acting as a financial agent of the United States and agrees to act at all times in the best interests of the United States when carrying out its responsibilities under the agreement. Treasury officials said that if a financial agent faced a conflict of interest under its agreement, the agent would have a duty to disclose and address that conflict. Based on a recommendation recently made by the Treasury Inspector General, Fiscal Service amended its model financial agency agreement to include a provision requiring the financial agent to notify the Inspector General if it becomes aware of any possible violation of federal criminal law regarding fraud, conflict of interest, bribery, or illegal gratuities affecting services performed under the financial agency agreement. The financial agency agreement for myRA includes this provision. Once myRA accountholders reach a limit of $15,000 in their account or the account reaches a maturity of 30 years, they are required to roll over their account into another retirement savings account. Fiscal Service officials told us that, to address concerns that the financial agent would try to promote its own products to myRA accountholders, the financial agency agreement includes additional controls that limit the financial agent's ability to cross-market its own products to accountholders so that, for instance, the agent cannot steer accountholders to its own products when they are required to roll over their accounts.

In addition to the contact named above, Richard Tsuhara (Assistant Director), Heather Chartier (Analyst-in-Charge), William R. Chatlos, Jeffrey Harner, Colleen Moffatt Kimer, Marc Molino, Patricia Moye, and Jennifer Schwartz made key contributions to this report.
Under the National Bank Act and other statutes, Treasury is authorized to designate certain financial institutions as depositaries of public money and financial agents of the federal government. Treasury uses financial agency agreements to designate financial agents. In 2004, Congress provided Treasury with a permanent, indefinite appropriation to reimburse financial agents for their services, which replaced its use of non-appropriated funds. GAO was asked to review Treasury's use of financial agents. This report examines (1) how Treasury's use and compensation of financial agents have changed as it has modernized its payment and collection systems and (2) Fiscal Service's process and related internal controls for selecting and designating financial agents. GAO examined documents on Treasury's programs using financial agents; budget and other data on financial agent compensation; and laws and regulations governing the use of financial agents. GAO also reviewed Fiscal Service's FASP guidance and internal records supporting its selection and designation of five financial agents between 2010 and 2015. GAO interviewed Fiscal Service officials about its FASP and its use of financial agents. The Department of the Treasury's (Treasury) use of financial agents has evolved as it has moved from paper to electronic transactions in response to changes in technology and new laws. Treasury has a long history of using financial agents to support its core functions of disbursing payments and collecting revenue. Since the 1980s, Treasury has used agents to move from paper to electronic transactions as it has modernized its systems. For example, Treasury began using financial agents to collect tax revenue electronically in response to a 1984 law and to make payments electronically in response to a 1996 law. Such changes have continued since Congress enacted a permanent, indefinite appropriation in 2004 for Treasury to reimburse financial agents, after which Treasury began including in its annual budget the total amount paid to financial agents. Compensation to financial agents has grown from $378 million in fiscal year 2005 to $636 million in fiscal year 2015, partly due to growth in the number of debit and credit card payments to federal agencies that are processed by financial agents. While Treasury discloses in its annual budget the total amount paid to financial agents, it has not fully disclosed in a central location information about individual agents, including their compensation and the services they provide. Treasury officials said Treasury is not required to publicly disclose compensation under each financial agency agreement and has not determined a need to do so. According to an Office of Management and Budget directive on open government, transparency promotes accountability by providing the public with information about government activities. Greater disclosure and transparency could enhance the accountability of Treasury's use of financial agents by informing the public and Congress about how much and for what purposes it is spending federal funds to obtain services from financial agents. The Bureau of the Fiscal Service (Fiscal Service)—the largest user of financial agents within Treasury—developed its financial agent selection process (FASP) guidance to document the steps and internal controls that its program offices generally are expected to follow in selecting and designating financial agents. 
The guidance is intended to provide assurance that a FASP is effective and efficient, documents key information, and complies with applicable laws and regulations. The guidance directs program offices to maintain an administrative record of key documents generated during a FASP. GAO selected five financial agents designated between 2010 and 2015 to review their administrative records but could review only four because the record for one was not created. None contained all the documents listed in the guidance, but three contained the majority. For example, the record for myRA®, a new retirement savings program using a financial agent to provide custodial services, contained 6 of 11 key documents—missing, for example, certain planning and approval documents. As a result, the records varied in the extent to which they complied with Fiscal Service's guidance, including controls. In November 2015, Fiscal Service revised its guidance to require not only that program offices deliver an electronic copy of their administrative records to the Bank Policy and Oversight (BPO) Division but also that BPO use a checklist to ensure that the records are complete. The 2015 guidance was not in effect for the records GAO reviewed. However, BPO's implementation of the new procedure should provide assurance that future designations comply with the FASP guidance, including controls.

GAO recommends that Treasury publicly disclose in a central location information about its financial agents, including their compensation and services provided. Treasury agreed with GAO's recommendation and provided technical comments, which were incorporated as appropriate.
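To illustrate the kind of completeness check BPO's new checklist procedure implies, the following is a minimal sketch. The document names are hypothetical; the report counts 11 key documents in the FASP guidance but does not enumerate them here, so the list below is illustrative rather than Fiscal Service's actual checklist.

```python
# Minimal sketch of an administrative-record completeness check of the kind
# BPO's 2015 checklist procedure calls for. The 11 document names below are
# hypothetical placeholders; the FASP guidance's actual list is not
# reproduced in this report.

REQUIRED_DOCUMENTS = {
    "selection_plan", "evaluation_criteria", "ethics_training_certification",
    "conflict_of_interest_statements", "proposal_evaluations",
    "selection_recommendation", "approval_memorandum", "designation_letter",
    "financial_agency_agreement", "risk_assessment", "closeout_summary",
}

def completeness(record):
    """Return the count of required documents present and the set missing."""
    present = record & REQUIRED_DOCUMENTS
    return len(present), REQUIRED_DOCUMENTS - present

# Example: a record holding 6 of the 11 key documents, as GAO found for myRA.
myra_record = {
    "selection_plan", "evaluation_criteria", "proposal_evaluations",
    "selection_recommendation", "designation_letter",
    "financial_agency_agreement",
}
count, missing = completeness(myra_record)
print(f"{count} of {len(REQUIRED_DOCUMENTS)} key documents present")
print(f"Missing: {sorted(missing)}")
```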
The Ford class features a number of improvements over existing aircraft carriers that the Navy believes will improve the combat capability of the carrier fleet while simultaneously reducing acquisition and life cycle costs. These improvements include an increased rate of aircraft deploying from the carrier (sorties), reduced manning, significant growth in electrical generating capability, and larger service life margins for weight and stability to support future changes to the ship during its expected 50-year service life. To meet its requirements, the Navy developed over a dozen new technologies for installation on Ford-class ships (see appendix II). For example, advanced weapons elevators, which use an electromagnetic field rather than cables to transport weapons within the ship, are expected to increase payload capacity by 229 percent as compared to Nimitz-class carriers, while also facilitating reduced manning and higher sortie generation rates. Other technologies allowed the Navy to implement favorable design features into the ship, including an enlarged flight deck; a smaller, aft-positioned island; and a flexible ship infrastructure to accommodate changes during the ship's service life. As we have previously reported, three of the critical technologies have presented some of the greatest challenges during development and construction:

Electromagnetic Aircraft Launch System (EMALS), which uses an electrically generated moving magnetic field to propel aircraft, placing less physical stress on aircraft than the legacy steam catapult launchers on Nimitz-class carriers.

Advanced Arresting Gear (AAG), an electric motor-based aircraft recovery system that rapidly decelerates an aircraft as it lands. AAG replaces the legacy hydraulic arresting equipment currently in use on Nimitz-class carriers.

Dual Band Radar (DBR), which integrates two component radars—the multifunction radar and the volume search radar—to conduct air traffic control, ship self-defense, and other operations. The multifunction radar provides horizon search, surface search, navigation, and missile communications. The volume search radar provides long-range, above-horizon surveillance and air traffic control capabilities.

As is typical in Navy shipbuilding, Ford-class carrier construction occurs in several phases that include the following key events:

Pre-construction and planning: Long-lead time materials and equipment are procured and the shipbuilder plans for beginning ship construction.

Block fabrication, outfitting, and erection: Metal plates are welded together to form blocks, which are the basic building components of the ship. The blocks are assembled and outfitted with pipes, brackets for machinery or cabling, ladders, and any other equipment that may be available for installation. Groupings of blocks form superlifts, which are then lifted by crane into dry dock and welded into the respective location of the ship.

Launch: After the ship is watertight, it can be launched—floated in the water—then towed into a quay or dock area where remaining construction and outfitting of the ship occurs.

Shipboard testing: Once construction and system installations are largely complete, the builder tests the ship's hull, mechanical and electrical systems, and key technologies to demonstrate compliance with ship specifications and provide assurance that the items tested operate satisfactorily within permissible design parameters.
Delivery: Once the Navy is satisfied that the ship is seaworthy and the shipbuilder has met requirements, the shipyard transfers custody of the ship to the Navy.

Post-delivery activities: After ship delivery, tests are conducted on the ship's combat and mission-critical systems; the ship's air wing—consisting of the assigned fixed and rotary wing aircraft, pilots, and support and maintenance personnel—is brought onto the ship; and the crew begins training and operating the ship while at sea. A period of planned maintenance, modernization, and correction of government-responsible deficiencies follows—referred to as Post Shakedown Availability.

Deployment ready: The last stage of the ship acquisition process occurs when all crew and system operational tests, training, and certifications have been completed and the ship has achieved the level of readiness needed to embark on its first deployment.

During and after construction, DOD acquisition policy requires major defense programs, including shipbuilding programs, to execute and complete several types of testing as the ship progresses toward operational milestones, including the point during the acquisition process when the fleet initially receives and maintains the ship:

Developmental testing is intended to assist in the maturation of products, product elements, or manufacturing or support processes. For ship technologies, developmental testing typically includes land-based testing activities prior to introducing a new technology in a maritime environment and commencing with shipboard testing. Developmental testing does not include testing systems in concert with other systems.

Integration testing is intended to assess, verify, and validate the performance of multiple systems operating together to achieve required ship capabilities. For example, integration testing would include, among other things, testing the operability of the DBR in a realistic environment where multiple antennas and arrays are emitting and receiving transmissions and multiple loads are placed upon the ship's power and cooling systems simultaneously.

Initial Operational Test and Evaluation (IOT&E) is a major component of post-delivery testing intended to assess a weapon system's capability in a realistic environment when maintained and operated by sailors, subjected to routine wear-and-tear, and employed in combat conditions against simulated enemies. During this test phase, the ship is exposed to as many actual operational scenarios as possible to reveal the weapon system's capability under stress.

The Navy schedules and plans these test phases and milestones using a test and evaluation master plan (TEMP) that is approved by the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation (DT&E) and the Director for Operational Test and Evaluation (DOT&E). The Deputy Assistant Secretary of Defense for DT&E leads the organization within the Office of the Secretary of Defense that is responsible for providing developmental test and evaluation oversight and support to major acquisition programs. The Director, DOT&E leads the organization within the Office of the Secretary of Defense that is responsible for providing operational test and evaluation oversight and support to major defense acquisition programs. Due to their vast size and complexity, aircraft carriers require funding for design, long-lead materials, and construction over many years.
To accomplish these activities on the Ford class, the Navy has awarded contracts for two phases of construction—construction preparation and detail design and construction—which are preceded by the start of advance procurement funding. Since September 2008, Newport News Shipbuilding has been constructing CVN 78 under a cost-reimbursement contract for detail design and construction of CVN 78. This contract type places significant cost risk on the government, which may pay more than budgeted should costs be more than expected. The Navy now expects to largely repeat the lead ship design for CVN 79, with some modifications, and construct that ship under a fixed-price incentive contract, which generally places more risk on the contractor. To ensure the Navy adheres to its cost estimates, Congress, in the National Defense Authorization Act for Fiscal Year 2007, established a $10.5 billion procurement cost cap for CVN 78, and an $8.1 billion cost cap for each subsequent carrier. If the Navy determines adjustments to the cost cap are necessary, it must first obtain statutory authority from Congress, which means it would be required to submit a proposal to Congress increasing the cost cap. The 2007 legislation also contains six provisions that allow the Navy to make adjustments to the cost cap (increasing or decreasing) without seeking statutory authority: cost changes due to economic inflation; costs attributable to shipbuilder compliance with changes in Federal, State, or local laws; outfitting and post-delivery costs; insertion of new technologies onto the ships; cost changes due to nonrecurring design and engineering; and costs associated with correction of deficiencies that would otherwise preclude safe operation and crew certification. The National Defense Authorization Act for Fiscal Year 2014 further expanded the list of allowable adjustments, solely for CVN 78, to include cost changes due to urgent and unforeseen requirements identified during shipboard testing. Since 2007, the Navy has sought and been granted adjustments to CVN 78's cost cap to the current amount of $12.9 billion, which were attributed to construction cost overruns and economic inflation. In 2013, the Navy increased CVN 79's cost cap to $11.5 billion, citing inflation and additional non-recurring design and engineering work. Subsequently, the National Defense Authorization Act for Fiscal Year 2014 increased the legislated cost cap for any follow-on ship in the Ford class to $11.5 billion. In addition, the Navy delayed CVN 79's delivery by 6 months, from September 2022 to March 2023, to reflect changes in the ship's budget. Figure 1 outlines the Navy's acquisition timeline for the Ford class, along with adjustments made to the legislated cost cap throughout the course of the shipbuilding program. In August 2007 and September 2013, we reported on the programmatic challenges associated with technology development, design, construction, and testing of the lead ship (CVN 78). In our 2007 report, we noted that delays in Ford-class technology development and overly optimistic cost estimates would likely result in higher lead ship costs than what the Navy allotted in its budget. We recommended actions to improve the realism of the CVN 78 budget estimate and the Navy's cost surveillance capacity, as well as develop carrier-specific tests of the DBR to ensure the radar meets carrier-specific requirements. The Navy addressed some, but not all, of our recommendations.
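To put the cap adjustments described above in one place, the sketch below simply tallies the figures cited in this report. It is an illustrative summary of cap growth, not an official reconciliation of the individual adjustment provisions.

```python
# Illustrative tally of the legislated cost caps, using only figures cited in
# this report; individual adjustments are not itemized here.

BILLION = 1e9

caps = {
    "CVN 78": (10.5 * BILLION, 12.9 * BILLION),  # FY2007 NDAA cap -> current cap
    "CVN 79": (8.1 * BILLION, 11.5 * BILLION),   # FY2007 NDAA cap -> 2013/FY2014 cap
}

for ship, (initial, current) in caps.items():
    growth = current - initial
    print(f"{ship}: ${initial / BILLION:.1f}B -> ${current / BILLION:.1f}B "
          f"(+${growth / BILLION:.1f}B, {growth / initial:.0%})")
```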
Our 2013 report found delays in technology development, material shortfalls, and construction inefficiencies were contributing to increased lead ship construction costs and potential delays to ship delivery. We also found the Navy's ability to demonstrate CVN 78's capabilities after delivery was hampered by test plan deficiencies, and reliability shortfalls of key technologies could lead to the ship deploying without those capabilities. Lastly, we concluded that ongoing uncertainty in CVN 78's construction could undermine the Navy's ability to realize additional cost savings during construction of CVN 79—the follow-on ship. These findings led to several recommendations to DOD:

conduct a cost-benefit analysis on required CVN 78 capabilities, namely reduced manning and the increased sortie generation rate, in light of known and projected reliability shortfalls for critical systems;

update the Ford-class program's test and evaluation master plan to allot sufficient time after ship delivery to complete developmental test activities prior to beginning integration testing;

adjust the planned post-delivery test schedule to ensure that system integration testing is completed before IOT&E;

defer the CVN 79 detail design and construction contract award until land-based testing for critical systems is complete; and

update the CVN 79 cost estimate on the basis of actual costs and labor hours needed to construct CVN 78.

While DOD agreed with some of our recommendations, it did not agree with our recommendation to defer the award of CVN 79's detail design and construction contract until certain testing of critical technology systems was completed, noting that deferring the contract award would lead to cost increases resulting from the required re-contracting effort, among other things. Shortly after we issued our report, however, the Navy postponed awarding the construction contract until the first quarter of fiscal year 2015, citing the need for additional time to negotiate more favorable pricing with the shipbuilder as well as for the shipbuilder to continue to implement and demonstrate cost savings. The extent to which CVN 78 will be delivered within the Navy's revised schedule and cost goals is dependent on deferring work and costs to the ship's post-delivery period. Meeting CVN 78's current schedule and cost goals will require the shipbuilder to overcome lags in the construction schedule. Successful tests of the equipment and systems now installed on the ship (referred to as shipboard testing) will also be necessary. However, challenges with certain key technologies are likely to further exacerbate an already compressed test schedule. With the shipbuilder embarking on one of the most complex phases of construction with the greatest likelihood for cost growth, cost increases beyond the current $12.9 billion cost cap appear likely. In response, the Navy is deferring work until after ship delivery to create a reserve to help ensure that funds are available to pay for any additional cost growth stemming from remaining construction risks. In essence, the Navy will have a ship that is less complete than initially planned at ship delivery, but at a greater cost. The strategy of deferring work will result in the need for additional funding later, which the Navy plans to request through its post-delivery and outfitting budget account—Navy officials view this plan as an approach to managing the cost cap.
However, increases to the post-delivery and outfitting budget account are not captured in the total end cost of the ship, thereby obscuring the true costs of the ship. The shipbuilder appears to have resolved many of the engineering and material challenges that we reported in September 2013. These challenges resulted in inefficient and out-of-sequence work that led to a revision of the construction and shipboard test schedules and contributed to an increase to the ship's legislated cost cap from $10.5 billion to the current $12.9 billion. Nevertheless, with about 20 percent of work remaining to complete construction and the shipboard test program under way, the lagging effect of these issues is creating a backlog of construction activities that further threatens the ship's revised delivery date and may lead to further increased costs. As we have found in our previous work, additional cost increases are likely to occur because the remaining work on CVN 78 is generally more complex than much of the work occurring in the earlier stages of construction. As shown in table 1, the shipbuilder continues to face a backlog of construction activities, including completing work packages, which are sets of defined tasks and activities during ship construction and are how the shipbuilder manages and monitors construction progress through the construction master schedule; outfitting of individual compartments on the ship; and transferring custody of completed compartments and hull, mechanical, and electrical systems to the Navy, referred to as "compartment and system turnover." As the shipbuilder completes construction and compartment outfitting activities, the shipboard testing phase of the project commences. This testing of the ship's hull, mechanical, and electrical systems is scheduled to be completed by early February 2016, about 2 months before the ship's anticipated delivery at the end of March 2016. The shipboard test program is meant to ensure correct installation and operation of the equipment and systems in a maritime environment. This is a complex and iterative process that requires sufficient time for discovering problems inherent with the start-up and initial operation of a system, performing corrective work, and retesting to ensure that the issues have been resolved. However, as a result of previous schedule delays, the shipbuilder compressed the shipboard test plan, resulting in a schedule that leaves insufficient time for discovery and correction should problems arise. Further, the construction delays discussed above directly affect the builder's ability to test the ship's hull, mechanical, and electrical systems, thus increasing the likelihood of additional testing delays. For example, testing of the ship's fire sprinklers was delayed because construction of the sprinkling system was not completed on time. In other instances, delays stemming from construction can have a cascading effect on the test program. As another example, testing of the ship's plumbing fixtures was delayed until testing of the potable water distribution system was completed and the system activated. Another integral part of the shipboard test program is testing the ship's key technologies, many of which are being operated for the first time in a maritime environment, and ensuring that these technologies function as intended. Four of these technologies are instrumental in executing CVN 78's mission—AAG, EMALS, DBR, and the advanced weapons elevators.
Although these technologies are, for the most part, already installed on the ship, certain technologies are still undergoing developmental land-based testing. Except for the advanced weapons elevators, which are managed by the shipbuilder, the other technologies are being developed by separate contractors, with the government providing the completed system to the shipbuilder for installation and testing. The shipboard test programs for EMALS and the advanced weapons elevators are currently under way, while AAG and DBR testing is scheduled to commence in fiscal year 2015. However, developmental testing for AAG, EMALS, and DBR is taking place concurrently at separate land-based facilities (as well as aboard the ship). This situation presents the potential for modifications to be required for the shipboard systems that are already installed if land-based testing reveals problems. Three of the systems we reported on in our last report in September 2013—AAG, EMALS, and DBR—have since experienced additional developmental test delays (as shown in figure 2). Following is more information on the status of testing of these key technologies. Shipboard testing for AAG is scheduled to begin in March 2015, but according to the CVN 78 program office, the AAG contractor is redesigning equipment on the system's hydraulic braking system by adding additional filtration, and the shipbuilder is replacing associated piping, which will likely delay the start of system testing. In addition, the AAG contractor has to complete over 50 modifications to the system before shipboard testing can begin; these modifications are needed to address issues identified during developmental testing at the land-based test site. As we previously found, AAG experienced several failures during land-based testing, which led to redesign and modification of several subsystems, most notably the water twisters—devices used to absorb energy during an aircraft arrestment. CVN 78 program officials expressed concerns that the rework cannot be completed on time to support the current shipboard test schedule, and they attribute the delays to the immaturity of AAG when it was installed on the ship. The shipboard test program is further at risk because additional design changes and modifications to the shipboard AAG units remain likely. This is because the Navy will now be conducting land-based testing of AAG even as shipboard testing is under way. As a result of the issues discussed above, the Navy further delayed the schedule for land-based testing (as shown in figure 2) and changed the test strategy to better ensure that it could meet the schedule for testing live aircraft aboard the ship. AAG's previous land-based test plan was to sequentially test each aircraft type planned for CVN 78 as a simulated load on a jet car track. After completing jet car track testing for all aircraft types, the actual aircraft were to be tested with the AAG system on a runway. This strategy allowed for discovery of issues with each aircraft type prior to advancing to the next stage of testing. However, earlier this year the AAG program office changed its strategy so that each aircraft type will be tested sequentially at the jet car track and runway sites. Once an aircraft completes both types of testing, testers will re-configure the sites to test the next type of aircraft, according to AAG program officials. Figure 3 shows the difference in AAG test strategies along with the overall ship test schedule.
The program office plans to complete this revised testing approach with the F/A-18 E/F Super Hornet fighter first, as this aircraft will be most in use aboard the carrier. While the Navy stated that this change was necessary to ensure that at least one aircraft type would be available to certify the system for shipboard testing, it further increases the potential for discovering issues well past shipboard testing and even ship delivery. The shipbuilder began EMALS activation and shipboard testing activities in August 2014, as planned. This is the first time EMALS is being operated and tested in a maritime environment, in a multiple-catapult configuration, using a shared power source, with multiple electromagnetic fields. Any additional delays with the EMALS shipboard test schedule will directly affect CVN 78's delivery date. Specifically, a key aspect of the test program is testing the system's launch capabilities by launching weighted loads that simulate an aircraft—referred to as dead-loads—off of the flight deck of the carrier. This test must be completed by November 2015, the point at which the shipbuilder is scheduled to turn the front of the ship toward the dock to begin testing the ship's propulsion system in preparation for subsequent sea trials. At the same time, land-based testing for EMALS is still ongoing, and the Navy now anticipates testing will be completed during the third quarter of fiscal year 2016. DBR shipboard testing is scheduled to begin in January 2015, but according to the CVN 78 program office, the DBR contractor must first make five modifications to the installed radar system prior to its initial activation. In particular, the power regulating system needs to be modified, which requires removal, modification, and re-installation of certain power control modules. Shipbuilder officials told us that any delay to the installation of these items will likely affect the DBR shipboard test schedule, but according to the DBR program office, software and hardware modifications to correct this issue are complete and the ship-set units are in production. Program officials do not anticipate additional changes to the system's hardware prior to commencing shipboard testing, but they do expect further software modifications as land-based development testing progresses. As a result, there is a risk that additional modifications to the shipboard DBR system will be required. In addition, land-based testing of the DBR is based on a conglomeration of engineering design models that is not representative of the version of the radar installed on the ship, which further increases the likelihood that shipboard testing will require more time and resources than planned. Shipboard testing of components of the advanced weapons elevators began in February 2012, but testing has not proceeded as planned. As of August 2014, the shipbuilder had operated 4 of the ship's 11 weapons elevators, but testing delays have occurred due to faulty components, software integration challenges, and premature corrosion of electrical parts. The shipbuilder has increased the amount of construction labor allocated to the weapons elevators in an effort to recover from these schedule delays. CVN 78's schedule has limited ability to absorb the additional delays that appear likely, given the remaining construction and testing risks. A delay in the ship's planned March 2016 delivery could result in a breach of DOD's acquisition policy.
Among other things, a breach would require the CVN 78 program manager to seek approval from the Navy and DOD to further revise the schedule. Shipbuilder officials maintain that they can meet the ship's revised delivery date, but acknowledge that the revised shipboard test plan is proving challenging because of the delays associated with construction and concurrent developmental testing of key technologies discussed above. To regain lost schedule, the shipbuilder may choose to expend additional labor hours by paying workers overtime or hiring subcontracted labor; however, these actions would result in additional and unanticipated costs. The CVN 78 program's costs are approaching the legislated cost cap of $12.9 billion, but further cost growth is likely based on performance to date as well as ongoing construction, shipboard testing, and technology development risks. To improve the likelihood of meeting the March 2016 delivery date and to compensate for potential cost growth, the Navy is (1) removing work from the scope of the construction contract and (2) deferring purchase and installation of some mission-related systems provided by the government to the shipbuilder until after ship delivery. Consequently, completion of CVN 78 may not occur until years later than initially planned. According to the CVN 78 program office, this approach creates a funding reserve to cover cost growth due to unknowns in the shipboard test program, particularly given that many of the ship's systems are being operated and tested for the first time in a maritime environment. However, the value of the deferred work may not be adequate to fully fund all remaining costs needed to produce an operational ship. Table 2 shows the type of work being deferred from the current plan to post-delivery, and the program office's estimated value of the work. As of September 2014, program officials said they were still negotiating with the shipbuilder on the dollar value of construction labor that it plans to descope from CVN 78's construction contract. The program office plans to use this approximately $96 million reserve in the likely event there is additional cost growth above the $12.9 billion budgeted cost cap. However, given the ongoing construction and testing risks previously discussed, this cash reserve is unlikely to be adequate to cover the entire expected cost growth of the ship. As shown in table 3, the shipbuilder, the CVN 78 program office, and the Naval Sea Systems Command Cost Engineering Office (the Navy's cost estimators) are all forecasting a cost overrun at ship completion ranging from $780 million to $988 million. According to shipbuilder and CVN 78 program office estimates, the program will meet the $12.9 billion legislated cost cap and has sufficient funds to cover the anticipated cost overruns. If, however, costs increase according to the Naval Sea Systems Command Cost Engineering Office's estimate or higher, additional funding will be needed above the cash reserve amount. Further, cost and analyses offices within the Office of the Secretary of Defense have tracked the ship's costs for several years and report that without significant improvements in the program's overall cost performance, CVN 78's total costs will likely exceed the program's $12.9 billion cost cap by approximately $300 million to $800 million. Should cost growth fall within this range, the Navy will need to either defer additional work to post-delivery or request funding under the ship's procurement budget line above the $12.9 billion cap.
Under the cost cap legislation, such an action would require prior congressional approval. To fund work deferred to the post-delivery period in the event of unbudgeted cost growth, the CVN 78 program office is considering using funding from the Outfitting and Post-Delivery budget account. Program officials noted that other Navy shipbuilding programs have also used funds from the outfitting and post-delivery accounts to complete deferred construction work. Navy officials view this as an approach to managing the cost cap. At the same time, however, because the Navy considers post-delivery and outfitting activities as “non end-cost” items—meaning that funds from this account are not included when calculating the total construction cost of the ship—visibility into the ship’s true construction cost is obscured. CVN 78 will not demonstrate its required capabilities prior to deployment because it cannot achieve certain key requirements according to its current test schedule. Specifically, the ship will not have demonstrated its increased sortie generation rate (SGR), due to low reliability levels of key aircraft launch and recovery systems, and required reductions in personnel remain at risk. The Navy expected both of these requirements to contribute to greater capability and lower costs than Nimitz-class carriers. Further, the ship is likely to face operational shortfalls resulting from a ship design that restricts accommodations. Finally, tight time frames for post-delivery testing of key systems due to aforementioned technology development delays could result in the ship deploying without fully tested systems if deployment dates remain unchanged. The Navy’s business case for acquiring the Ford-class depended on significantly improved capabilities over legacy Nimitz-class carriers, specifically an increased SGR and reduced manning profile. The Navy anticipated that these capabilities would reduce total ownership costs for the ship. Our September 2013 report found several shortfalls in the Navy’s projections for meeting the SGR and reduced manning requirements, and our current work found continuing problems in these areas. The Navy used the SGR requirement to help guide ship design, but CVN 78 will not be able to fully demonstrate this capability before the ship is deployment ready. As shown in table 4, CVN 78’s SGR requirements are higher than the demonstrated performance of the Nimitz-class. The increased SGR requirement for the Ford-class reflected earlier DOD operational plans to mount campaigns in two theaters simultaneously. Under this scenario, a high SGR was essential to quickly achieving warfighting objectives, but according to Navy officials, this requirement is no longer reflective of current operational plans. The Navy plans to demonstrate CVN 78’s SGR requirement using a modeling and simulation program in 2019, near the end of CVN 78’s IOT&E period. As the modeling and simulation program continues to mature and develop, the Navy, according to the TEMP, plans to collect data from a sustained and surge flight operation and then incorporate these data into the model. Once this is completed and the model is accredited, the Navy will subsequently run a simulation of the full SGR mission. Current runs of the model indicate the ship can meet the required sustained and surge sortie rates, which Navy and shipbuilding officials involved with the modeling and simulation effort explained is primarily due to flight deck redesign and not the ship’s new aircraft launch and recovery technologies. 
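As a rough illustration of why the reliability of the launch and recovery systems bounds the sortie rates such a model can produce (the issue taken up next), consider the toy simulation below. It is not the Navy's accredited model: the failure model, downtime penalty, and reliability values are assumptions chosen for illustration, with only the 270-sortie surge figure drawn from this report.

```python
import random

def sorties_completed(planned, mcbcf, downtime_cycles):
    """Toy model: each sortie consumes one launch cycle and one recovery cycle
    from a fixed daily cycle budget. Each cycle fails with probability 1/mcbcf
    (a geometric stand-in for mean cycles between critical failures); a
    failure scrubs the sortie and costs extra cycles of repair downtime."""
    budget = planned * 2  # the cycle budget if nothing ever failed
    completed = 0
    while budget >= 2:
        budget -= 2  # spend a launch cycle and a recovery cycle
        if random.random() < 1 / mcbcf or random.random() < 1 / mcbcf:
            budget -= downtime_cycles  # critical failure: lose time to repairs
        else:
            completed += 1
    return completed

random.seed(1)
for mcbcf in (50, 500, 5000):  # assumed reliability levels, low to high
    mean = sum(sorties_completed(270, mcbcf, 20) for _ in range(1000)) / 1000
    print(f"MCBCF {mcbcf:>4}: roughly {mean:.0f} of 270 surge sorties completed")
```

Even this simple model shows the pattern at issue: as mean cycles between critical failures falls, completed sorties drop well below the planned rate, which is why populating the Navy's model with actual EMALS and AAG reliability data matters.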
However, ongoing issues with the development of EMALS and AAG are resulting in low levels of system reliability that will be a barrier to achieving the required SGR once the model is populated with actual data from these technologies. System reliability is critical to the carrier's ability to meet the SGR requirement and is measured in terms of mean cycles between critical failures, or the average number of times each system launches or recovers aircraft before experiencing a failure. As shown in table 5, the most recent available metrics, from January 2014, indicate that EMALS and AAG have such low reliability rates that these systems are unlikely to achieve the reliability needed to support SGR requirements before the demonstration event in 2019, or for years after the ship is deployment ready. As a result of these systems' low reliability, we questioned the Navy's sortie generation requirement in our September 2013 report and recommended that the Navy re-examine whether it should maintain this requirement or modify it—seeking requirements relief from the Joint Requirements Oversight Council if the Navy found it was not operationally necessary. DOT&E has also raised questions about the need for increased sortie generation. DOT&E analyzed past aircraft carrier operations in major conflicts and reported that the CVN 78 SGR requirement is well above historical levels. In its January 2014 annual report, DOT&E cited the poor reliability of critical systems, such as EMALS and AAG, noting that performance of these systems could cause a series of delays during flight operations that could make the ship more vulnerable to attack. DOT&E plans to assess CVN 78 performance during IOT&E by comparing its demonstrated SGR to the demonstrated performance of the Nimitz-class carriers. Although the carrier would not meet its required capability, DOT&E stated that a demonstrated SGR less than the CVN 78 requirement, but equal to or greater than the performance of the Nimitz class, could potentially be acceptable. However, the Navy would still be required to obtain approval from the Joint Requirements Oversight Council to lower the requirement. Another CVN 78 key performance requirement is a reduced ship's force, relative to the Nimitz class, with the goal of lowering total operational costs. "Ship's force" refers to all personnel aboard a carrier except those designated as part of the air wing and in certain support or other assigned roles. The Navy's reduced manning requirement for CVN 78 is a ship's force that has 500 to 900 fewer personnel than Nimitz-class carriers. Table 6 compares manning totals for the Nimitz class with Ford-class manning projections. As of September 2014, the Navy projects a 663-sailor reduction in the ship's force, which represents a 163-person margin over the minimum required reduction of 500 personnel. But our analysis found that the carrier is not likely to achieve this level of reduction and still meet its intended capabilities. Key factors contributing to the difficulties in meeting the reduced manning requirement include the following:

Poor reliability of key systems—including EMALS and AAG—and sailors' limited experience in operating these systems in a maritime environment may require additional personnel. For example, AAG will require more maintenance than planned due to changes to the system's hydraulic braking system, according to Navy officials.
Additional ship's force personnel will be needed to meet the surge SGR of 270 sorties per day, based on the Navy's most recent operational test and evaluation force assessment.

Additional operational personnel, particularly in the supply department, will likely be needed on the ship, according to the CVN 78 pre-commissioning unit—the crew assigned to the ship while it is under construction.

These factors are likely to increase the total number of personnel on CVN 78. As a reflection of the Navy's confidence in reducing manning on the Ford class, the ships were designed with significantly fewer berths (4,660) than the Nimitz class to accommodate the ship's force, air wing, and all other embarked personnel. However, the number of berths is now fixed, and the ship cannot accommodate additional manpower without significant design changes. Further, the Navy requires new ship designs, including CVN 78, to provide a habitability margin—a percentage of extra berths above the projected ship's force to accommodate potential personnel growth throughout the service life of the ship. This margin includes berths as well as support services for personnel aboard the ship, such as food and sanitation facilities. Given current manning projections and available accommodations, as shown in table 7, the Navy recognizes that CVN 78 falls well short of meeting its required habitability margin, which is equivalent to 10 percent of the ship's force, or 263 berths. As a result, the CVN 78 program office plans to request a waiver for this requirement from the Chief of Naval Operations. In fact, the carrier currently has so few extra berths that it can accommodate only a slight increase in personnel. And the Navy's estimated accommodation needs do not take into account the likelihood that additional personnel will be needed above and beyond the Navy's current projected ship's force (2,628 sailors). In addition, spare berthing is also used for personnel temporarily assigned to the ship, such as inspectors, trainers, or visitors. If CVN 78 must enlarge its ship's force as well as accommodate personnel temporarily assigned to the ship, it is likely that no spare accommodations would be available. Consequently, CVN 78 must be "manning neutral," so that personnel coming aboard must be matched by personnel debarking, in accordance with the ship's operational needs and personnel specialties. This situation is further exacerbated because the Navy will need to operate CVN 78 with a greater percentage of its crew than the Nimitz class. According to the Navy's most recent (2011) analysis of manning options for CVN 78, staffing the ship at less than 100 percent—that is, with fewer personnel than the current projected total force of 4,533—had an adverse effect on quality of life at sea because the crew had to perform additional duties or remain on duty for longer periods. This manning analysis also found that reducing staffing to 85 percent—which is typical for a Nimitz-class ship—compromised ship operations. The analysis concluded that careful management of personnel specializations will be needed and recommended cross-training personnel in key departments to minimize the risk to ship operations. Future costs for the ship could also increase if the Navy must eventually convert spaces to accommodate additional berthing. The Navy has further compressed post-delivery plans to test CVN 78's capabilities and increased concurrency between test phases since our last report in September 2013.
This means that there will be less time for operational testing, which is the Navy's opportunity to test and evaluate the ship against realistic conditions before its first deployment. As we reported in September 2013, the Navy added in 2012 an additional integration test period to the CVN 78 TEMP, as recommended by the Deputy Assistant Secretary of Defense for DT&E and the Director, DOT&E. This integration testing is important because it allows ship systems still in development—such as EMALS and AAG—to be tested together in their shipboard configuration. In our report, we recommended that the Navy adjust its planned post-delivery test schedule to complete this integration testing before commencing IOT&E. The Navy did not agree, and the overlap between integration testing and IOT&E remains and has grown longer. This situation constrains the Navy's ability to discover and resolve problems during the integration testing phase and before beginning IOT&E, which further limits opportunities for the Navy to resolve problems discovered during testing and risks additional discovery during IOT&E. In addition, the Navy and DOD still have not resolved whether CVN 78 will be required to conduct the Full Ship Shock Trial for the Ford class. As we reported last year, the program office deferred this testing to the follow-on ship, CVN 79—a strategy that did not receive DOT&E approval. According to program officials, a final determination of whether the trial will be conducted on CVN 78 or CVN 79 will be made by the Under Secretary of Defense for Acquisition, Technology, and Logistics near the end of 2014. Since our last report, the Navy doubled the length of the new integration testing period, but clarified that this testing also includes ongoing developmental testing of key systems, assessment of prior test results, and repairs or changes to fix deficiencies identified in earlier test periods. In fact, the Navy plans to conduct well over a dozen certifications and major ship test events during this period. For example, it plans to conduct a total ship survivability trial—testing CVN 78's capability to recover from a casualty situation and the extent of mission degradation in a realistic operational combat environment. If the Navy discovers significant issues during testing, or events cause additional delays to testing, it will have to choose whether to deploy a ship without fully tested systems or delay deployment until testing is complete. To help manage this risk, the Navy plans to divide operational testing into two phases. According to program officials, this approach will allow developmental testing, deficiency correction, and integration testing to continue on the mission-related systems installed after ship delivery and on those systems that are not required to support the first phase of operational testing. The first phase of operational testing will focus on testing the ship's ability to accomplish basic tasks by stressing the ship's crew, aviation facilities, and the combat and mission-related systems installed prior to delivery under realistic peacetime operating conditions. The second phase of operational testing incorporates embarked strike groups and other detachments that support operations and tests CVN 78's ability to conduct major combat operations, particularly the tactical employment of the air wing in simulated joint, allied, coalition, and strike group environments.
The goal is to stress CVN 78’s aviation, combat and mission-related systems, particularly those systems installed after ship delivery. Figure 4 shows these changes to the CVN 78 post-delivery test schedule. The current test schedule is optimistic, with little room for delays that may occur as a result of issues identified during the integration and operational test phases. Even if the Navy meets the current schedule, it will not complete all necessary testing in the time remaining before the ship is deployment ready. This issue will be further exacerbated if land-based or shipboard testing discussed earlier reveals significant problems with the ship’s systems, as the time needed to address such issues may interfere with the ship’s integration and operational test phases. Navy officials responsible for operational testing stated that they will only conduct operational testing when shipboard systems are deemed ready. However, neither the CVN 78 program office nor the Navy’s operational test personnel know how often system testing can be deferred before affecting the schedule for operational testing on other systems, particularly given the interoperation of systems on a carrier. For example, the DBR supports ship combat systems and simultaneously conducts air traffic control. If it is not ready to support flight operations in the first segment of IOT&E, combat operations in the second segment that also rely on the radar are likely to be affected. To meet the $11.5 billion legislative cost cap for CVN 79, the Navy is assuming the shipbuilder will make efficiency gains in construction that are unprecedented for aircraft carriers and has proposed a revised acquisition strategy for the ship. With shipbuilder prices for CVN 79 growing beyond the Navy’s expectations, the Navy extended the construction preparation (CP) contract to allow additional time for the shipbuilder to reduce cost risks prior to awarding a construction contract. In addition, the Navy’s proposed revision to the ship’s acquisition strategy would reduce a significant amount of work needed to make the ship fully operational until after ship delivery. While this strategy may enable the Navy to initially achieve the cost cap and is allowed under the cost cap provision without the need for congressional approval, it also results in transferring the costs of planned capability upgrades—previously included in the CVN 79 baseline—to future maintenance periods to be paid through other (non-CVN 79 shipbuilding) accounts. The Navy’s $11.5 billion cost estimate for CVN 79 is underpinned by the assumption that the shipbuilder will significantly lower construction costs through realizing efficiency gains. While performance to date has been better than that of CVN 78, early indicators suggest that the Navy is unlikely to realize anticipated efficiencies at the level necessary to meet cost and labor hour reduction goals. In its May 2013 report to Congress on CVN 79 program management and cost control measures, the Navy stated that 15-25 percent fewer labor hours (about 7 million to 12 million hours) will be needed to construct CVN 79 as compared to CVN 78. Although the Navy and shipbuilder continue to look for labor hour reduction opportunities, thus far, shipbuilder representatives have identified improvements that they stated will save about 800,000 labor hours. 
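A back-of-the-envelope check using only the figures cited above shows both the CVN 78 labor-hour base those percentages imply and how far the roughly 800,000 identified hours fall short of the goal. This is arithmetic on reported numbers, not program data.

```python
# Arithmetic on figures cited in this report: the Navy's stated 15-25 percent
# (7 million to 12 million hour) reduction goal for CVN 79, and the ~800,000
# hours of savings the shipbuilder has identified so far.

low_pct, high_pct = 0.15, 0.25
low_goal, high_goal = 7e6, 12e6

# CVN 78 labor-hour base implied by the stated percentage and hour ranges.
print(f"Implied CVN 78 base: about {low_goal / low_pct / 1e6:.0f} million to "
      f"{high_goal / high_pct / 1e6:.0f} million labor hours")

identified = 0.8e6
print(f"Identified to date: {identified / low_goal:.0%} of the 7 million-hour "
      f"floor, {identified / high_goal:.0%} of the 12 million-hour goal")
```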
As we identified in September 2013, many of the proposed labor hour reductions are attributed to lessons learned during construction of CVN 78 and to revising CVN 79's build plan to perform pre-outfitting work earlier in the build process. This is because work completed earlier in the build process, such as in a shop environment, is more efficient and less costly than work done later on the ship, where spaces are more difficult to maneuver within. In addition, the shipbuilder's revised build plan consolidates and increases the size of superlifts—fabricated units and block assemblies that are grouped together and lifted into the dry dock—to form larger sections of the ship. Other notable labor hour savings initiatives involve increased use of new welding technologies and improved cable installation techniques. Construction of CVN 79 is still in the initial stages, and most of the projected cost savings and labor hour reduction opportunities are in structural units and parts of the ship that are not yet under construction. However, there are indications that achieving the anticipated 7 million to 12 million hour reduction goal will be challenging. As of the end of March 2014, the shipbuilder had completed fabrication of 205 structural units—about 18 percent of the ship's total—with over a hundred more in various stages of fabrication. Although the ship is still in the early stages of construction, the cumulative labor hour reductions for the completed units fell short of the Navy and shipbuilder's expected reduction by about 3.5 percent, as shown in figure 5. Program officials stated that while the cumulative reduction has not yielded the expected results, a number of the structural units were completed prior to the shipbuilder's implementation of labor saving initiatives. They further added that completed units more representative of remaining work have yielded approximately a 16 percent reduction in labor hours for fitters and welders. In addition, the shipbuilder's scheduling processes may further limit insight into the effectiveness of these initiatives. We evaluated the shipbuilder's processes and tools used to plan and schedule work against GAO's best practices in scheduling. We identified scheduling practices that may interfere with the shipbuilder's and the Navy's ability to accurately manage and monitor the construction schedule and the way in which the shipbuilder allocates labor, equipment, and material resources. In particular, the shipbuilder's enterprise resource management system (which tracks use of labor and materials) and master construction schedule (which tracks the time required to complete work packages) are stand-alone, independent systems, which means that changes in one system are not automatically updated in the other. Consequently, the shipbuilder—and subsequently the government—lacks real-time insight into whether resources are being used according to schedule. This lack of insight limits management's ability to effectively respond to delays, thus driving inefficiencies into the build process, and also limits the shipbuilder's ability to take advantage of opportunities when work is completed ahead of schedule. Although the shipyard is transitioning to a new scheduling software program, the shipbuilder does not plan to revise its existing scheduling and resource management process to enable better insight for CVN 79. The legacy scheduling system the shipbuilder employed did not allow for data to be exported to the government.
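The oversight gap created by stand-alone schedule and resource systems can be illustrated with a simple cross-check: once extracts from both systems can be joined on a common work package identifier, divergence between schedule progress and labor charged becomes visible automatically. The sketch below is hypothetical; the field names, data, and 15-point tolerance are invented, and the shipbuilder's actual systems are proprietary.

```python
# Hypothetical cross-check between a master-schedule extract and a resource-
# system extract, keyed by work package. Divergence like this stays invisible
# when the two systems cannot be joined.

schedule = {  # work package -> share of scheduled duration elapsed
    "WP-1001": 0.90, "WP-1002": 0.40, "WP-1003": 1.00,
}
resources = {  # work package -> share of budgeted labor hours charged
    "WP-1001": 0.55, "WP-1002": 0.45, "WP-1003": 1.30,
}

TOLERANCE = 0.15  # flag gaps beyond 15 percentage points (arbitrary threshold)

for wp in sorted(schedule.keys() & resources.keys()):
    gap = resources[wp] - schedule[wp]
    if abs(gap) > TOLERANCE:
        status = ("charges running ahead of schedule" if gap > 0
                  else "charges lagging the schedule")
        print(f"{wp}: {schedule[wp]:.0%} elapsed, {resources[wp]:.0%} charged "
              f"-> {status}")
```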
The new scheduling system has the ability to allow for increased Navy oversight since the data are exportable, thus allowing, among other things, the ability to independently examine the effects of schedule slippage or the realism of the shipbuilder's estimated labor needs. According to program officials, the Navy intends to incorporate these data as a deliverable item in the CVN 79 construction contract. Even with the shipbuilder's improvements, reducing construction of CVN 79 by approximately 7 million to 12 million labor hours as compared to CVN 78 would be unprecedented in aircraft carrier construction. As shown in table 8, with each successive aircraft carrier build, the number of labor hours needed to complete construction has, at most, decreased by 9.3 percent as compared to the previous ship (with CVN 69 compared to CVN 68 accounting for the largest percentage decrease). Although CVN 78 and CVN 79 are similar to CVN 68 and CVN 69 in that there is a first-to-second ship of a class transition, in most instances sizeable labor hour reductions occurred only as a result of constructing two aircraft carriers through a single contract, rather than acquiring the ships individually through separate construction contracts as is the case with the Ford class. The Navy planned to award the CVN 79 detail design and construction contract in late fiscal year 2013, but subsequently delayed the award and extended the construction preparation contract because negotiations with the shipbuilder were taking longer than the Navy anticipated. As a result, the Navy now intends to award the detail design and construction contract at the end of the first quarter of fiscal year 2015, which program officials stated allows sufficient time to negotiate prices and demonstrate cost reductions and process improvements that will lead to lowering CVN 79's construction costs. In the meantime, more work is now being completed under the construction preparation contract, with almost 60 percent of the ship's total structural units under the CP contract, as shown in table 9. According to program officials, this work accounts for about 20 percent of the ship's overall construction effort. By extending the CP contract, the program office expects that it will reduce material costs by 10-20 percent from CVN 78 and prevent late deliveries of items, such as valves, that led to significant material shortfalls and out-of-sequence construction work and contributed to that ship's cost growth, as we noted in our September 2013 report. Under the Navy's material procurement strategy, approximately 95 percent of CVN 79's material to be procured by the shipbuilder was under contract as of September 2014. In addition, the Navy recently completed an affordability and capability review of CVN 79 in an effort to further reduce construction costs and shipbuilding requirements to ensure that it could meet the $11.5 billion cost cap—which Navy officials stated was otherwise unachievable. In response, the Navy plans to (1) institute cost savings measures by reducing some work and equipment; (2) revise the acquisition strategy to shift more work to post-delivery—including installation of mission systems—while still meeting statutory requirements for deploying CVN 79; and (3) deliver the ship with the same baseline capability as CVN 78—postponing a number of planned mission system upgrades and modernizations until future maintenance periods.
Program officials told us they plan to seek approval to initiate these changes at CVN 79's upcoming program review with the Office of the Secretary of Defense, which is now scheduled for December 2014, in advance of the detail design and construction contract award. Most notably, the Navy plans to depart from its planned installation of DBR on CVN 79 in favor of an alternative radar system, which it expects to provide a better technological solution at a lower cost. By seeking competitively awarded offers, Navy officials anticipate realizing savings of about $180 million for CVN 79. A final determination of CVN 79's radar solution is not scheduled to occur until after March 2015, at least 3 months after the estimated detail design and construction contract award. It is around this time that the program office anticipates it will solicit proposals from prospective bidders. Program officials told us that they intend to work within the current design parameters of the ship's island, which they say would limit extensive redesign and reconfiguration work to accommodate the new radar. While the extent of redesign work is unknown, such a change will still result in additional ship construction costs, which could offset the Navy's estimate of DBR savings. Other cost savings measures are wide ranging and include eliminating one of the four AAG units planned for the ship (Nimitz-class carriers have three operational arresting units); eliminating redundant equipment requirements, such as the ship's emergency power unit for the steering gear and spare low pressure air compressors; and modifying test requirements for certain mechanical systems. In addition to these cost savings measures, the CVN 79 program office is proposing a two-phased approach for ship construction and delivery. Although the details of the Navy's revised acquisition strategy continue to evolve, the basic premise is that delivery by the shipbuilder will consist of only the hull, mechanical, and electrical aspects of the ship (referred to as phase I), followed by completion of remaining construction work and installation of the warfare and communications systems during the post-delivery period (referred to as phase II). At ship delivery, CVN 79 will have its full propulsion capability; the core systems for safe navigation and crew safety; and the equipment necessary to demonstrate flight deck operations, such as EMALS and AAG. All remaining construction work, primarily consisting of the procurement and installation of several warfare and communications systems, will be completed post-delivery. The program office currently plans to maintain the ship's 2023 delivery date, but as shown in figure 6, the revised strategy extends the acquisition schedule and the ship's deployment ready date by about 15 months. Program officials stated that despite this delay in the schedule, the Navy would still meet the statutorily required minimum number of operational aircraft carriers because CVN 79 would be deployment ready shortly after USS Nimitz (CVN 68) is currently slated to retire in fiscal year 2025. As currently planned, the revised strategy, by design, will result in a less capable and less complete ship at delivery. According to CVN 79 program officials, reducing the shipbuilder's scope of work, along with a reduction in some construction requirements, will lead to negotiating more favorable pricing of the detail design and construction contract.
In addition, they noted that maintaining the current delivery schedule will deliberately allow for a slower pace of construction, thus potentially requiring less use of overtime or leased labor. Further, program officials state that delaying installation of warfare and communications systems—such as those systems with high obsolescence risk—can potentially limit procuring equipment that has been surpassed by technology advances by the time the ship begins phase II of the Navy's revised strategy. Finally, Navy officials believe that adopting this approach will enable the program to reduce costs by introducing additional competition for the ship's systems and installation work after delivery. While the two-phased strategy may enable the program to initially stay within the legislated cost cap, it will transfer the costs of a number of known capability upgrades previously included in the CVN 79 baseline to other (non-CVN 79 shipbuilding) accounts. As shown in table 10, the program office plans to defer installation of a number of systems to future maintenance periods. Based on current estimates, the value of the deferred systems is about $200 million to $250 million. Moreover, this strategy will result in deferring installation of systems and equipment needed to accommodate the carrier variant of the Joint Strike Fighter aircraft (F-35C) until fiscal year 2027 at the earliest. Further, should construction costs grow above estimates, the Navy may subsequently choose to use funding intended for phase II work to pay for construction cost increases without increasing the cost cap. The Navy would have this option because additional funding provided through post-delivery budget accounts is not included in calculating the ship's end cost, similar to the aforementioned situation with CVN 78. According to Navy officials, this approach allows the program to manage the cost cap without seeking statutory authority. Constructing and delivering an aircraft carrier is a complex undertaking. The Ford-class program, in particular, has faced a steep challenge due to the development, installation, and integration of numerous technologies—coupled with an optimistic budget and schedule. The Ford class is intended to provide significant operational advantages over the Nimitz class. However, with about 80 percent of the lead ship constructed, the program continues to struggle with construction inefficiencies, development issues, testing delays, and reliability shortfalls. These issues have been mounting for a number of years. Now, as the program embarks on its most challenging phase—shipboard testing—additional cost increases in excess of the $2.3 billion since 2009 appear likely. To manage this risk, the Navy is creating a cost buffer by deferring construction work and installation of mission-related systems to the post-delivery period. This strategy may provide a funding cushion in the near term, but it may not be sufficient to cover all potential cost increases. After raising the cost cap several times, the Navy is now managing the cost cap by reducing the scope of the delivered ship and is considering paying for the deferred scope through a budget account normally used for post-delivery activities. This contradicts the purpose of the congressional cost cap, which is to hold the Navy accountable for the total cost estimate for buying a deployable ship.
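The accounting issue at the core of this conclusion can be restated arithmetically. The sketch below is an illustrative restatement of the figures above, not the Navy's own formulation: because post-delivery and outfitting funds are excluded from the end-cost calculation, the amount counted against the cap understates the full cost of fielding the ship as originally baselined.

\[
\text{cost counted against the cap} \approx \text{phase I (construction) funding}
\]
\[
\text{full cost of the baselined ship} \approx \text{cost counted against the cap} + \underbrace{\$200\text{M to }\$250\text{M}}_{\text{deferred systems}} + \text{any cost growth shifted to phase II}
\]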
Further, after an investment of at least $12.9 billion, CVN 78 may not achieve improved operational performance over the Nimitz class of aircraft carriers as promised for some time to come. Reliability shortfalls and development uncertainties in key Ford-class systems will prevent the ship from demonstrating its required sortie generation rate before initial deployments. Personnel accommodation restrictions resulting from the ship's design have the potential to cause operational limitations that the Navy will have to manage closely—a constraint that does not exist in the Nimitz class. We previously recommended re-assessing these requirements; the Navy agreed that such an analysis is appropriate but stated that it would not pursue one until the conclusion of operational testing. As we previously concluded, waiting until this point would be too late to make effective tradeoffs among cost, schedule, and performance for follow-on ships. As the Navy prepares to award the detail design and construction contract for the next Ford-class ship, CVN 79, it is clear that achieving the cost cap will be challenging. While the Navy and the shipbuilder are working to reduce costs, the Navy's ability to achieve the congressional cost cap relies, in part, on deferring planned capability improvements until later maintenance periods. From an accountability and oversight standpoint, it would be preferable to keep the scope of the delivered ship constant—an essential component of a baseline—and raise the cost cap accordingly. The legislated cost cap for Ford-class aircraft carrier construction provides a limit on procurement funds. However, the legislation also provides for adjustments to the cost cap. To understand the true cost of each Ford-class ship, Congress should consider revising the cost cap legislation to ensure that all work included in the initial ship cost estimate that is deferred to the post-delivery and outfitting account is counted against the cost cap. If warranted, the Navy would be required to seek statutory authority to increase the cap. We are not making any new recommendations, but our recommendations from our September 2013 report remain valid. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix III, DOD agreed with much of the report but disagreed with our position on cost cap compliance. In particular, DOD disagreed that a change in cost cap legislation is necessary because it believes all procurement funds are counted toward the cost cap. While it is true that the current cost cap legislation does require the inclusion of all procurement funds, up to this point the Navy has not included funding for outfitting and post-delivery costs in its end cost estimates. Further, the current legislation allows the Navy to make changes to the ships' outfitting and post-delivery budget accounts without first seeking statutory authority. In the event that costs increase above the Navy's current estimates, the Navy is considering deferring work until the post-delivery period and funding it through the outfitting and post-delivery accounts, which would limit visibility into the ship's true end cost. Our intention is not necessarily, as DOD states, to keep the post-delivery and procurement accounts separate, but rather to create a stable cost baseline for accountability and oversight purposes. DOD also disagreed with our conclusion that constructing CVN 79 within the current cost cap might not be achievable, but agreed that it will be challenging.
DOD stated that the cost cap for CVN 79 is achievable largely due to the Navy's two-phased acquisition approach, which is now intended to deliver the next carrier with the same capabilities as CVN 78. We agree that reducing the scope of CVN 79 prior to ship delivery should also reduce the cost estimate in the near term. As we noted in our report, however, the Navy initially included planned capability improvements in CVN 79's baseline estimate. These improvements will now occur during a later maintenance period, the costs of which are to be shifted to other (non-CVN 79 shipbuilding) accounts at a later date. While the Navy's approach to CVN 79's cost estimate may initially appear to meet the cost cap, it serves to obscure the ship's true cost. As we concluded in the report, from an accountability standpoint, it would be preferable to keep the scope of CVN 79 constant and raise the cost cap accordingly, if needed. In addition, DOD provided technical comments that were incorporated as appropriate. These comments included, among others, additional information on CVN 78's shipboard test program and the Navy's two-phased approach to constructing and delivering CVN 79. We are sending copies of this report to interested congressional committees, the Secretary of Defense, and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines remaining risks in the CVN 78 program since September 2013 by assessing: (1) the extent to which CVN 78 will be delivered to the Navy within its revised cost and schedule goals; (2) if, after delivery, CVN 78 will demonstrate its required capabilities through testing before the ship is deployment ready; and (3) the steps the Navy is taking to achieve CVN 79 cost goals. To identify challenges in delivering the lead ship within current budget and schedule estimates, we reviewed Department of Defense (DOD) and contractor documents that address technology development efforts, including test reports, program schedules, and briefings. We also visited the lead ship of the Ford-class carriers, USS Gerald R. Ford (CVN 78), to observe construction progress and improve our understanding of the installation of the critical technologies aboard CVN 78. We evaluated Navy and contractor documents outlining cost and schedule parameters for CVN 78, including Navy budget submissions, contract performance reports, quarterly performance reports, and program schedules and briefings. In addition, we reviewed the shipbuilder's Earned Value Management data and developed our own cost and labor hour estimates at ship completion, which we compared with data provided by the Navy and shipbuilder. We also relied on our prior work evaluating the Ford-class program and shipbuilding best practices to supplement the above analyses.
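We do not detail our estimating method in this report, but a conventional way to derive an independent estimate at completion from Earned Value Management data (offered here only as an illustrative sketch, not necessarily the precise method we applied) uses the cumulative cost performance index:

\[
\mathrm{CPI} = \frac{\mathrm{BCWP}}{\mathrm{ACWP}}, \qquad
\mathrm{EAC} = \mathrm{ACWP} + \frac{\mathrm{BAC} - \mathrm{BCWP}}{\mathrm{CPI}},
\]

where BCWP is the budgeted cost of work performed, ACWP is the actual cost of work performed, BAC is the budget at completion, and EAC is the estimate at completion. The same arithmetic can be applied to labor hours in place of dollars.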
To further corroborate documentary evidence and gather additional information in support of our review, we conducted interviews with relevant Navy and contractor officials responsible for managing the technology development and construction of CVN 78, such as the Program Executive Office, Aircraft Carriers; CVN 78 program office; Newport News Shipbuilding (a division of Huntington Ingalls Industries); Supervisor of Shipbuilding, Conversion, and Repair Newport News Command; Aircraft Launch and Recovery program office; and the Program Executive Office, Integrated Warfare Systems. We also held discussions with the Naval Sea Systems Command's Cost Engineering and Industrial Analysis Division; the Defense Contract Management Agency; and the Defense Contract Audit Agency. To evaluate whether CVN 78 will demonstrate its required capabilities, we identified requirements criteria in the Future Aircraft Carrier Operational Requirements Document and compared requirements with reliability data and reliability growth projections for key systems. We also examined the CVN 78 preliminary ship's manning document and wargame analysis of planned manning, as well as the Commander, Operational Test and Evaluation Force's most recent operational assessment for the ship to identify potential manpower shortfalls. To evaluate whether the Navy's post-delivery test and evaluation strategy will provide timely demonstration of required capabilities, we analyzed (1) development schedules and test reports for CVN 78 critical technologies; (2) testing reports and operational assessments for CVN 78; and (3) the Navy's November 2013 revised test and evaluation master plan to identify concurrency among development, integration, and operational test plans. We corroborated documentary evidence by meeting with Navy and contractor officials responsible for developing key systems, managing ship testing, and conducting operational testing, including the Program Executive Office-Aircraft Carriers, the CVN 78 program office, Newport News Shipbuilding, the Aircraft Launch and Recovery program office, the Navy's land-based test site for EMALS and AAG in Lakehurst, N.J., the Program Executive Office for Integrated Warfare Systems, Office of the Director, Operational Test and Evaluation, Office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, the Office of the Commander, Navy Operational Test and Evaluation Force, and the Office of the Chief of Naval Operations Air Warfare. To assess the steps the Navy is taking to achieve CVN 79 cost goals, we reviewed our prior work on Ford-class carriers; shipbuilder data identifying cost savings and labor hour reduction opportunities, as well as lessons learned from constructing CVN 78; the CVN 79 construction preparation contract and its extensions; CVN 78 and CVN 79 labor hour data for completing advanced construction work; and CVN 79 construction plans and reports, program briefings, and Navy budget submissions. We also conducted an analysis of the shipbuilder's scheduling systems and processes that are used for constructing CVN 78 and assessed this against GAO's scheduling best practices. We attempted to conduct a similar analysis of CVN 79's schedule. However, the integrated master schedule used for construction—which is maintained by the shipbuilder—was not up to date and did not reflect the status of advanced construction work at the time of our analysis.
As a result, we only reviewed the scheduling processes that the shipbuilder plans to use for CVN 79. To supplement our analysis and gain additional visibility into the Navy's actions for ensuring CVN 79 is built within the constraints of the cost cap legislation, we reviewed several years of defense authorization acts and interviewed officials from the Program Executive Office-Aircraft Carriers; CVN 78 program office; CVN 79 and CVN 80 program office; Huntington Ingalls Industries, Newport News Shipbuilding; Supervisor of Shipbuilding, Conversion, and Repair Newport News Command; Program Executive Office, Integrated Warfare Systems; the Office of the Chief of Naval Operations Air Warfare Division; and Naval Sea Systems Command's Cost Engineering and Industrial Analysis Division. We conducted this performance audit from December 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A number of new technologies are being installed on Ford-class aircraft carriers that are designed to increase the ship's capability and lower life-cycle costs. Below is an overview of these key technologies, along with their approximate placement on the ship. In addition to the contact named above, key contributors to this report were Diana Moldafsky, Assistant Director; Christopher E. Kunitz; Brian P. Bothwell; Juana S. Collymore; Burns C. Eckert; Laura Greifner; John A. Krump; Jean L. McSween; Karen Richey; Jenny Shinn; and Oziel Trevino.
Ford-class aircraft carriers will feature new technologies designed to reduce life-cycle costs. The lead ship, CVN 78, has been under construction since 2008, and early construction on CVN 79 is underway. In 2007, Congress established a cap on procurement costs, which has been adjusted over time. In September 2013, GAO reported on a $2.3 billion increase in CVN 78 construction costs. GAO was mandated to examine risks in the CVN 78 program since its September 2013 report. This report assesses (1) the extent to which CVN 78 will be delivered within revised cost and schedule goals; (2) if CVN 78 will demonstrate its required capabilities before ship deployment; and (3) the steps the Navy is taking to achieve CVN 79 cost goals. To perform this work, GAO analyzed Navy and contractor data and assessed the shipbuilder's scheduling processes against scheduling best practices. The extent to which the lead Ford-class ship, CVN 78, will be delivered by its current March 2016 delivery date and within the Navy's $12.9 billion estimate depends on the Navy's plan to defer work and costs to the post-delivery period. Lagging construction progress as well as ongoing issues with key technologies further exacerbate an already compressed schedule and create further cost and schedule risks. With the shipbuilder embarking on one of the most complex phases of construction with the greatest likelihood for cost growth, cost increases beyond the current $12.9 billion cost cap appear likely. In response, the Navy is deferring some work until after ship delivery to create a funding reserve to pay for any additional cost growth stemming from remaining construction risks. This strategy will result in the need for additional funding later, which the Navy plans to request through its post-delivery and outfitting budget account. However, this approach obscures visibility into the true cost of the ship and results in delivering a ship that is less complete than initially planned. CVN 78 will deploy without demonstrating full operational capabilities because it cannot achieve certain key requirements according to its current test schedule. Key requirements—such as increasing aircraft launch and recovery rates—will likely not be met before the ship is deployment ready and could limit ship operations. Further, CVN 78 will not meet a requirement that allows for increases to the size of the crew over the service life of the ship. In fact, the ship may not even be able to accommodate the likely need for additional crew to operate the ship without operational tradeoffs. Since GAO's last report in September 2013, post-delivery plans to test CVN 78's capabilities have become more compressed, further increasing the likelihood that CVN 78 will not deploy as scheduled or will deploy without fully tested systems. The Navy is implementing steps to achieve the $11.5 billion congressional cost cap for the second ship, CVN 79, but these are largely based on ambitious efficiency gains and reducing a significant amount of construction, installation, and testing—work traditionally completed prior to ship delivery. Since GAO last reported in September 2013, the Navy extended CVN 79's construction preparation contract to allow additional time for the shipbuilder to reduce cost risks and incorporate lessons learned from construction of CVN 78.
At the same time, the Navy continues to revise its acquisition strategy for CVN 79 in an effort to ensure that costs do not exceed the cost cap by postponing installation of some systems until after ship delivery and deferring an estimated $200 million to $250 million in previously planned capability upgrades of the ship's combat systems to be completed well after the ship is operational. Further, if CVN 79 construction costs should grow above the legislated cost cap, the Navy may choose to use funding intended for work to complete the ship after delivery to cover construction cost increases. As with CVN 78, the Navy could choose to request additional funding through post-delivery budget accounts not included in calculating the ship's end cost. Navy officials view this as an approach to managing the cost cap. However, doing so impairs accountability for actual ship costs. Congress should consider revising the cost cap legislation to improve accountability of Ford-class construction costs by requiring that all work included in the initial ship cost estimate be counted against the cost cap. If warranted, the Navy would be required to seek statutory authority to increase the cap. GAO is not making new recommendations, but believes previous recommendations, including a re-examination of requirements and improvements to the test plan, remain valid. DOD agreed with much of the report, but disagreed with GAO's position on the cost caps. GAO believes that changes to the legislation are warranted to improve cost accountability.
NASA and its international partners—Japan, Canada, the European Space Agency, and Russia—are building the space station as a permanently orbiting laboratory to conduct materials and life sciences research, earth observation and commercial utilization, and related uses under nearly weightless conditions. Each partner is providing station hardware and crew members and is expected to share operating costs and use of the station. The NASA space station program manager is responsible for the cost, schedule, and technical performance of the total program. The Boeing Corporation, the prime contractor, is responsible for development, integration, and on-orbit performance of the station. By the end of 1997, the United States and its partners had produced well over 358,000 pounds of space flight hardware, of which the prime contractor was responsible for about 260,000 pounds. According to NASA, by the end of 1998, virtually all flight hardware for the first six flights will have been delivered to Russian or American launch sites. In June 1995, we reported that the U.S. funds required to design, launch, and operate the space station would be about $94 billion—over $48 billion to complete assembly and almost $46 billion to operate and conduct research. That total included $17.4 billion for station development activities, $13 billion for operations, and $50.5 billion for shuttle launch support during assembly and operations. Our report also noted that the program’s funding reserves were limited and that the launch and assembly schedule would be difficult to achieve. Since June 1995, total space station cost estimates have increased from $93.9 billion to $95.6 billion (see table 1). In particular, the development cost estimate has increased by more than 20 percent, in-house personnel requirements have increased dramatically, and eight shuttle flights have been added to the development program. However, the shuttle support cost, as of April 1998, is less than that of June 1995 because NASA is projecting a significant reduction in the average cost per flight. The higher development costs—$21.9 billion versus $17.4 billion—are attributable to schedule delays, additional prime contractor effort not covered by funding reserves, additional crew return vehicle costs, and costs incurred as a result of delays in the Russian-made Service Module. In June 1995, NASA expected to complete assembly in June 2002. Partially due to delays in the Russian program, the last flight in the assembly sequence is now scheduled for December 2003, a delay of 18 months that has increased development costs by more than $2 billion. Also, NASA has undertaken activities such as developing the Interim Control Module to mitigate delays in the delivery of the Service Module. These activities are estimated by NASA to cost more than $200 million. It should be noted that our estimate includes the cost of the Russian Space Agency contract, which NASA does not include in its portrayal of station development funding needs. The increased in-house personnel costs during development—$2.2 billion versus $0.9 billion—are attributable to a longer development program, higher estimated personnel levels, and a more inclusive estimating methodology. Our June 1995 estimate was based on a development program scheduled to end in June 2002 while our current estimate includes an additional 18 months of effort. In addition, our prior estimate was based on an average of 1,285 civil service staff annually. 
NASA’s budget now estimates that about 2,000 staff per year will be needed during development. The increased staffing levels are attributable largely to the inclusion of science and crew return vehicle personnel into the station budget, which in most cases were previously covered under the Science, Aeronautics and Technology budgets. Finally, our current estimate is based on an allocation of all research and program management costs to the station program, while the previous estimate did not include all components of that budget line. Regarding shuttle support, our 1995 estimate was based on 35 flights during development and 50 during operations. However, NASA now estimates 43 flights during development, including 2 additional flights to the Russian space station Mir, 1 flight to test the crew return vehicle, and flights required by changes to the assembly sequence. NASA continues to estimate that 50 flights will be needed during operations. However, NASA’s estimate of average cost per flight is now lower, resulting in a shuttle launch support cost of $17.7 billion during assembly, essentially the same cost as estimated in 1995, despite the increased number of flights. During operations, the estimated cost for shuttle support is now significantly less—$25.6 billion versus $32.7 billion—based on the same number of flights. NASA’s estimated reduction in the average cost per flight is based on its expectation that program efficiencies and other cost savings will be achieved and sustained throughout the operating life of the space station. If that expectation is not realized, the cost for shuttle support will increase. A number of potential program changes could significantly increase the current estimate. First, the development costs shown in table 1 would increase if the assembly complete milestone slips beyond December 2003. Second, it is likely that the program will ultimately require more shuttle flights than are included in our analysis. Finally, NASA is now considering modifying space shuttle Columbia to permit its use for some station missions. A recent independent assessment by NASA’s Cost Assessment and Validation Task Force suggests that the program’s schedule will likely experience further delays and require additional funding. We believe NASA and its partners face a formidable challenge in meeting the launch schedules necessary to complete assembly. Those schedules depend on the launch capacity in the United States and Russia and the program’s ability to meet all manufacturing, testing, and software and hardware integration deadlines. Through December 2003, over 90 launches by NASA and its international partners will be needed for assembly, science utilization, resupply, and crew return vehicle purposes. During this period, NASA’s shuttles are currently scheduled to be flown up to 9 times a year for both station and nonstation needs, and Russia will have to average 9 to 10 launches a year to accommodate its station commitment. While these rates have been achieved in the past, a January 1998 NASA study of personnel reductions at Kennedy Space Center concluded that, without additional processing efficiencies, the required shuttle flight rate may not be supportable. If NASA is unable to maintain the planned flight rate, the station assembly schedule could experience further slippage. Also, recent Russian annual flight rates to support the Mir space station have been significantly lower than the required rate to support space station assembly. 
The assembly schedule also assumes that further critical manufacturing delays will not occur. According to NASA’s Aerospace Safety Advisory Panel’s 1997 annual report, the program’s schedule is at risk due to software, hardware, and testing issues. The report states, in part, that the “. . . software development schedule is almost impossibly tight. If something else does not cause a further delay in (station) deployment, software development may very well do so.” Further, the report pointed out that the crew return vehicle development schedule is “extremely optimistic,” noting that any delays in the availability of the vehicle could constrain station operations. In addition, the panel stated that, while integrated testing is a “very positive step for safety,” there is no room in the current schedule for required changes that may be discovered during this testing. Delays in the development program would increase costs because, at a minimum, fixed costs such as salaries, contractor overhead, and sustaining engineering would continue for a longer period than planned. Assuming NASA would continue to spend at the rate assumed in its current estimate for fiscal year 2003, the program would incur additional costs of more than $100 million for every month of schedule slippage. The program could require more shuttle flights than are baselined in our estimate. For example, the baseline does not include additional flights that may be needed for crew return vehicle testing and launches and some resupply flights. While some of these possibilities are subject to program changes that have not been adopted, it appears that the costs associated with launching the crew return vehicle are not included. Depending on the ultimate life expectancy of that vehicle, two additional flights could be needed. On the basis of NASA’s estimate of average cost per flight for the shuttle, this could add about $1 billion to the total estimate. According to NASA, sustaining engineering costs associated with the crew return vehicle will have to be absorbed by the program’s operations budget. Also, NASA is reviewing alternatives for making Columbia capable of supporting the station. A modified Columbia could be used as a backup (in the event one of the other orbiters is out of service) or as a delivery vehicle for cargo. Between November 1997 and April 1998, an independent cost assessment and validation team examined the program’s past and projected performance and made quantitative determinations regarding the potential for additional cost and schedule growth. Reflecting many of the same areas we identified, the team cited complex assembly requirements and potential schedule problems associated with remaining hardware and software development and concluded that the program could require an additional $130 million to $250 million in annual funding. The team also indicated that the program could experience 1 to 3 years of schedule growth beyond the currently anticipated completion date of December 2003. The estimate we derived in 1995 and our latest estimate include those costs related to the space station’s development, assembly, and operations. They do not include potential costs that may be incurred to satisfy NASA’s space debris tracking requirement. Due to its large size and long operational lifetime, the space station will face a risk of being struck by orbital debris. NASA plans to provide shielding against smaller objects and maneuver the station to avoid collisions with large objects. 
The National Space Policy requires NASA to ensure the safety of all space flight missions involving the space station and shuttle, including protection against the threat of collisions from orbiting space debris. However, NASA has no surveillance capability and must rely on the Department of Defense (DOD) to perform this function. As mentioned previously, NASA updated its overall requirement for space debris tracking as it relates to supporting the space station to include the ability to track and catalog objects as small as 1 centimeter. NASA recognized that such a capability could require sensor facility upgrades and the addition of new sensors to DOD's surveillance network. However, DOD maintains that the upgrade is not feasible within current budget constraints. A NASA study suggested that developing a system to satisfy NASA's needs could cost about $1 billion. A DOD study suggested that the cost of a space-based system satisfying all DOD and NASA needs could exceed $5 billion and noted that the cost to maintain a system that provides 24-hour-a-day tracking of 1-centimeter-sized space debris could be "prohibitively expensive." More recently, the Senate Committee on Armed Services, in its report on the National Defense Authorization Act for Fiscal Year 1998, directed the Secretary of the Air Force to undertake a design study for a 1-centimeter debris tracking system. The study was to be coordinated with a number of national laboratories. The resulting report, which was transmitted to congressional committees on April 2, 1998, identified three possible designs that range in estimated cost from about $400 million to $2.5 billion. The sources of funding for the system are undetermined at this time. Also, while the more stringent requirement is related to the space station, all other space activities would benefit from the ability to track 1-centimeter-sized debris. Since debris tracking is a NASA-wide requirement, and the agency relies on DOD to provide the service, the two agencies will have to work together to determine how to provide the capability. We have previously expressed our concern with the adequacy of space station financial reserves. We continue to be concerned. The program has used, or identified specific uses for, a significant portion of its available reserves, with almost 6 years left before the last assembly flight is scheduled to be launched. In January 1995, the space station program had more than $3 billion in financial reserves to cover development contingencies. In March 1998, the financial reserves available to the program were down to about $2.1 billion, and NASA had identified over $1 billion in potential funding requirements against those reserves. In the past, reserves have been used to fund additional requirements, overruns, and other authorized changes. Some of the potential funding needs include those related to NASA's decision to add a third node to the station's design and unforeseen costs associated with the development of an Interim Control Module. We recognize that NASA identifies adequacy of reserves as one of the highest current program risks. We also note that the current reserve status could be affected by additional schedule slips, contract disputes, manufacturing problems, or the need for additional testing. Inadequate reserves hinder program managers' ability to cope with unanticipated problems.
If a problem could not be covered by available reserves, program managers could be faced with deferring or rephasing other activities, thus possibly delaying the space station's development schedule or increasing future costs. In the summer of 1997, after many months of estimating that the total cost growth at the completion of the contract would not exceed $278 million, Boeing more than doubled its estimate—to $600 million. Through September 1997, $398 million in cost growth had already accumulated. On September 30, 1997, Boeing formally asked NASA to consider rebaselining the program using a more "meaningful program baseline against which performance measurements (could) be taken." In October 1997, NASA granted approval to Boeing to begin tracking cost and schedule performance using a new performance measurement baseline. The revised baseline permitted Boeing to reset its budgeted cost of work scheduled and performed equal to the actual cost of work performed as of September 1997. According to Boeing, this change provides the program with the most accurate cost information and incorporates updated program schedules to reflect the most achievable recovery plans. For reporting purposes, the change had the effect of resetting cost and schedule variances to zero. We asked the program officials to provide us with an analysis depicting a crosswalk back to the original baseline. That analysis shows that, as of February 1998, the total variance was $448 million. Of that amount, about $50 million was incurred in the first 5 months of fiscal year 1998. While NASA approved the new baseline for reporting purposes, it continues to use Boeing's estimate of overrun at completion—$600 million—as the basis for calculating the contractor's incentive award fee. NASA's estimate of total cost growth at completion, which had been in general accord with Boeing's $600 million estimate, has been increased to $817 million and is the basis for its fiscal year 1999 budget request. This higher estimate is based on its assessment of trends and its belief that Boeing's cost control strategy will not be fully successful. Since our last cost estimate was completed in June 1995, U.S. life-cycle funding requirements for building and operating the International Space Station have increased—from $93.9 billion to $95.6 billion. Many of the reasons for this increase, such as schedule delays by Russia and prime contractor difficulties, were not foreseen by NASA in 1995. In light of our analysis and that by an independent team, additional costs could materialize. Potential program changes, such as additional schedule slippage and more shuttle flights, could increase our latest cost estimate. Also, NASA's updated requirement for tracking space debris may require DOD to upgrade its surveillance network. NASA's potential share of this cost has not yet been determined. When the station is fully assembled, funding requirements for operational activities, such as shuttle launches, the crew return vehicle, principal investigator work, and in-house personnel support, will need to be fully defined. During the station's projected 10-year utilization period, U.S. funding requirements are estimated to total over $42 billion, an average of about $4.2 billion per year. Therefore, station-related funding needs will continue to be a major portion of NASA's future budgets.
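The rebaselining mechanics and the crosswalk figures cited above reconcile as follows. In earned value terms, the cost variance is the budgeted cost of work performed (BCWP) less the actual cost of work performed (ACWP), so setting BCWP equal to ACWP, as the October 1997 rebaseline did, resets the variance to zero (the dollar figures in this report are the magnitudes of the unfavorable variance):

\[
\mathrm{CV} = \mathrm{BCWP} - \mathrm{ACWP} \quad\longrightarrow\quad \mathrm{CV} = 0 \ \text{ when } \ \mathrm{BCWP} = \mathrm{ACWP}.
\]

Against the original baseline, the variance figures sum as expected:

\[
\underbrace{\$398\text{ million}}_{\text{cost growth through September 1997}} + \underbrace{\$50\text{ million}}_{\text{first 5 months of fiscal year 1998}} = \$448\text{ million as of February 1998}.
\]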
In commenting on a draft of this report, NASA raised three major concerns: (1) our use of average cost per flight to estimate shuttle launch support costs, (2) the inclusion of certain program costs in the station development estimate, and (3) the inclusion of references to the requirement for improved orbital debris tracking capability. NASA also provided a number of technical and clarifying comments, which have been incorporated where appropriate. NASA believes that marginal cost, rather than average cost per flight, is a more accurate estimate of shuttle launch support costs. NASA defines marginal cost per flight as those costs incurred or avoided as a result of adding or deleting one flight to or from the shuttle manifest in a given fiscal year. Marginal cost does not include any fixed costs that NASA says are required to maintain the capability to launch the shuttle a specific number of times during a given year. Average cost per flight as defined by NASA is the total cost to operate the space shuttle on a recurring and sustained basis for a given fiscal year divided by the number of flights planned for that year. Its calculation of average cost per flight captures most costs in the shuttle operations budget line, as well as prorations of civil service personnel, space communications network costs, and recurring costs for shuttle improvements. We believe our use of average cost per flight is appropriate because more than 70 percent of shuttle flights during fiscal years 1999 through 2003 will be devoted to the space station. NASA expressed concern with our inclusion of certain costs in the development estimate, particularly the Russian Space Agency contract cost. We chose to include all costs that we believe directly support station development and construction activities to more completely portray that portion of the life-cycle cost estimate. However, we revised the report to recognize the way NASA treats those costs. NASA also expressed concern that our discussion of the costs associated with orbital debris tracking could be misunderstood. We believe our discussion is clear. We agree that debris tracking costs should not be considered part of the space station’s life-cycle cost estimate, and benefits would accrue to programs other than the space station. However, it is a potential cost that is related to space station support because the requirement to track and catalog 1-centimeter-sized debris was established to support the station. As stated in the report, since debris tracking is a NASA-wide responsibility and the agency relies on DOD to provide the service, the two agencies will have to work together to achieve the improved capability. We provide additional details on NASA’s comments in appendix I. To estimate station costs, identify program uncertainties, examine program reserves, and assess the prime contractor’s cost and schedule reporting system, we reviewed NASA’s program planning and budgeting documents, internal cost reports, independent program assessments, and contracts relating to space station development. We interviewed NASA officials in the Space Station Program Office, the Space Shuttle Program Office, the Office of Human Space Flight, the Office of Life and Microgravity Sciences and Applications, the Office of the Comptroller, and the X-38 development program. 
We also met with officials from NASA’s space station Cost Assessment and Validation Task Force to discuss the scope and results of their work, and the National Research Council to discuss ongoing work related to station disposal. To examine potential impacts of satisfying NASA’s debris tracking requirement, we discussed a recent Air Force study with cognizant officials and reviewed previous debris tracking studies. We used NASA budget data to depict certain costs and to derive other costs. We used cost reports and independent assessments to test the reliability of NASA’s estimates and to identify cost risks to the program. We did not, however, attempt to independently validate NASA’s budget data. We performed our work from December 1997 to April 1998 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from its issue date. At that time, we will send copies to appropriate congressional committees, the NASA Administrator, and the Director of the Office of Management and Budget. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix II. The following are GAO’s comments on the National Aeronautics and Space Administration’s (NASA) letter dated April 27, 1998. 1. According to NASA, shuttle support costs for the space station would be $3.1 billion during development and $5.5 billion during operations if marginal cost per flight is used to estimate those costs. However, we believe that it is more appropriate to use average cost per flight to estimate shuttle support. NASA defines marginal cost per flight as those costs incurred or avoided as a result of adding or deleting one flight to or from the shuttle manifest in a given fiscal year. Marginal cost does not include any fixed costs that NASA says are required to maintain the capability to launch the shuttle a specific number of times during a given year. According to NASA officials, eliminating or adding a single flight in a given year has no effect on these fixed costs. Marginal cost per flight includes costs of personnel and any consumable hardware and materials, such as propellant, that can be added or removed with only temporary adjustment in the flight rate. NASA defines average cost per flight as the total cost to operate the space shuttle on a recurring and sustained basis for a given fiscal year divided by the number of flights planned for that year. Its calculation of average cost per flight captures most costs in the shuttle operations budget line, as well as prorations of civil service personnel, space communications network costs, and recurring costs for shuttle improvements. The calculation does not include capital-type costs, such as those required to develop the system, and construct and modify government-owned facilities or nonrecurring costs associated with system improvements. During its assembly, station elements will be almost the exclusive payload on the shuttle, and there is no alternative means of transportation for the station. Also, during the operations period, the station will be a major user of the shuttle. Since the station will be the predominant user of the shuttle for many years, we believe the use of average cost per flight is more appropriate than the use of marginal cost per flight to estimate shuttle launch support costs. 2. 
The time frames for the cost estimates were clearly portrayed in the life-cycle cost table. We added a footnote in the Results in Brief section to cite those dates earlier in the report. 3. We changed the heading in the table from “development budget” to “development cost”. We chose to aggregate all costs related directly to space station development and construction. 4. We revised the report to refer to earth observation and commercial utilization and related uses. 5. We revised the report to read “. . . development, integration, and on-orbit performance.” 6. We recognize that we have included some costs in our development total that were not included in 1995, such as the Russian Space Agency contract and crew return vehicle development costs. In calculating the percentage increase, we excluded those costs from our total in order to make a proper comparison. Using NASA’s own figures, the increase is more than 22 percent—$17.4 billion vs. $21.3 billion. 7. We recognize that the NASA Administrator initiated the idea of conducting an independent cost review. However, we note that the Congress specifically requested such an analysis in Conference Report 105-297. The report specified a number of preconditions to the release of some space station funding. One of those requirements was “a detailed analysis by a third party of (space station) cost and schedule projections . . .” For brevity, we have deleted references to this sequence of events. 8. We agree that debris tracking costs should not be considered part of the space station’s life-cycle cost estimate. We believe we have made that clear by (1) excluding any reference to debris tracking from the life-cycle cost table and (2) stating that debris tracking is a NASA-wide responsibility. However, we believe it is important to identify this potential cost because NASA established the requirement to catalog and track objects as small as 1 centimeter, in part, to support the International Space Station, and funding to achieve that capability is not yet available. As stated in the report, since debris tracking is a NASA-wide responsibility and the agency relies on the Department of Defense to provide the service, the two agencies will have to work together to determine how to move ahead on this challenge. 9. We do not imply that the program has spent $2 billion of reserves. However, according to program documentation, the net unencumbered reserve posture, as of March 1998, was about $1.1 billion. This compared with a starting point of about $3.1 billion in January 1995. 10. We believe the sentence, as written, accurately reflects the status of cost variance under the prime contract. 11. We revised our terminology. 12. We changed the life-cycle cost table category to read “development cost from 1994 to assembly complete” and added language in the report narrative to recognize NASA’s position. We note that in testimony on April 23, 1998, the NASA Administrator pointed out the relevance of the activities under the Russian contract to the development and construction of the space station. 13. The shuttle was incapable of supporting space station assembly without incorporating certain enhancements. We believe these nonrecurring costs are completely relevant to the discussion of space station life-cycle cost estimates. 14. We changed the footnote to read “U.S. missions to . . . Mir.” 15. We changed the footnote to read “Russian Space Agency contract.” 16. We did not change the order of reasons for contract growth. 
See comment 12 for discussion of Russian Space Agency contract. 17. Our estimate of civil service personnel costs includes an allocation of all elements of the research and program management budget—personnel and related costs, travel, and research operations support—to the station program. According to a NASA official, the agency's estimate only allocates personnel and related costs to the station program. Since the station program benefits from all elements of the research and program management budget, we believe that it is appropriate to allocate all of those costs to the program. 18. We modified the report to incorporate this suggestion. 19. A crew return vehicle is required for space station operations. The X-38 program is focused on demonstrating a concept for station crew return. Therefore, we believe those costs are directly related to station development. 20. We changed the report to reflect NASA's current plans for modifying space shuttle Columbia. 21. We modified the report to read ". . . Over 90 launches by NASA and its international partners." 22. We disagree. We believe a "delay" in the seven person operational capability is a constraint to the station program. 23. See comment 20. 24. We revised the report to reflect information in the final Cost Assessment and Validation Task Force report. 25. See comment 8. 26. See comment 9. 27. We believe report language accurately reflects the rebaselining of the prime contract performance measurement reporting system. 28. We modified the report to incorporate NASA's suggestion. 29. We modified the report to incorporate NASA's suggestion. 30. We identified the independent cost team as NASA's Cost Assessment and Validation Task Force earlier in the report. 31. See comment 8. Vijay Barnabas
Pursuant to a congressional request, GAO reviewed issues associated with the National Aeronautics and Space Administration's (NASA) International Space Station program, focusing on: (1) estimates of the station's development, assembly, and operations costs and comparing this estimate with the estimate in GAO's June 1995 report; (2) program uncertainties that may affect those costs; (3) potential debris tracking costs; (4) the status of program reserves; and (5) recent actions to measure prime contractor performance based on rebaselined information. GAO noted that: (1) life-cycle cost is the sum total of direct, indirect, recurring, and nonrecurring costs of a system over its entire life through disposal; (2) overall, the estimated U.S. cost to develop, assemble, and operate the space station is about $96 billion, an increase of almost $2 billion over GAO's last estimate made in 1995; (3) development costs represent the largest increase--more than 20 percent; (4) the development increase is attributable to schedule slippages, prime contract growth, additional crew return vehicle costs, and the effects of delays in delivery of the Russian-made Service Module; (5) overall costs would have been significantly higher had there not been an offsetting reduction in shuttle support costs; (6) a number of potential program changes could significantly increase the updated cost estimate; (7) they include the potential for additional schedule slippage and the need for shuttle launches to test and deliver the crew return vehicle; (8) at the current estimated spending rate, the program would incur additional costs of more than $100 million for every month of schedule slippage; (9) in addition, NASA may have to incur costs related to protecting the station from space debris; (10) in August 1997, the agency updated its overall space debris tracking requirement; (11) the new requirement, as it relates to supporting the space station, includes the ability to track and catalog objects as small as 1 centimeter; (12) the adequacy of the space station program's funding reserves has been a concern of GAO's; (13) the program has used, or identified potential uses for, a significant portion of its available reserves, with almost 6 years left before the last assembly flight is scheduled to be launched; (14) in October 1997, NASA granted approval to Boeing Corporation to begin tracking cost and schedule performance using a new performance measurement baseline; (15) the purpose of the change was to incorporate updated program schedules to reflect the most achievable recovery plans; (16) for reporting purposes, the change had the effect of resetting cost and schedule variances to zero; (17) the original baseline shows that the February 1998 cost variance would have been about $50 million higher than the $398 million Boeing reported prior to the change; and (18) while NASA approved the new baseline for reporting purposes, it continues to use Boeing's estimate of overrun at completion--$600 million--as the basis for calculating the contractor's incentive award fee.
Interest in oil shale as a domestic energy source has waxed and waned since the early 1900s. More recently, the Energy Policy Act of 2005 directed BLM to lease its lands for oil shale research and development. In June 2005, BLM initiated a leasing program for research, development, and demonstration (RD&D) of oil shale recovery technologies. By early 2007, it had granted six small RD&D leases: five in the Piceance Basin of northwest Colorado and one in the Uintah Basin of northeast Utah. The leases are for a 10-year period, and if the technologies are proven commercially viable, the lessees can significantly expand the size of the leases for commercial production into adjacent areas known as preference right lease areas. The Energy Policy Act of 2005 also directed BLM to develop a programmatic environmental impact statement (PEIS) for a commercial oil shale leasing program. During the drafting of the PEIS, however, BLM realized that, without proven commercial technologies, it could not adequately assess the environmental impacts of oil shale development and dropped from consideration the decision to offer additional specific parcels for lease. Instead, the PEIS analyzed making lands available for potential leasing and allowing industry to express interest in lands to be leased. Environmental groups then filed lawsuits challenging various aspects of the PEIS and the RD&D program. Since then, BLM has initiated another round of oil shale RD&D leasing. Stakeholders in the future development of oil shale are numerous and include the federal government, state government agencies, the oil shale industry, academic institutions, environmental groups, and private citizens. Among federal agencies, BLM manages the land and the oil shale beneath it and develops regulations for its development. USGS describes the nature and extent of oil shale deposits and collects and disseminates information on the nation's water resources. DOE, through its various offices, national laboratories, and arrangements with universities, advances energy technologies, including oil shale technology. The Environmental Protection Agency (EPA) sets standards for pollutants that could be released by oil shale development and reviews environmental impact statements, such as the PEIS. Interior's Bureau of Reclamation (BOR) manages federally built water projects that store and distribute water in 17 western states and provides this water to users. BOR monitors the amount of water in storage and the amount of water flowing in the major streams and rivers, including the Colorado River, which flows through oil shale country and feeds these projects. BOR provides its monitoring data to federal and state agencies that are parties to three major federal, state, and international agreements that, together with other federal laws, court decisions, and agreements, govern how water within the Colorado River and its tributaries is to be shared with Mexico and among the states in which the river or its tributaries are located. The states of Colorado and Utah have regulatory responsibilities over various activities that occur during oil shale development, including activities that impact water. Through authority delegated by EPA under the Clean Water Act, Colorado and Utah regulate discharges into surface waters. Colorado and Utah also have authority over the use of most water resources within their respective state boundaries.
They have established extensive legal and administrative systems for the orderly use of water resources, granting water rights to individuals and groups. Water rights in these states are not automatically attached to the land upon which the water is located. Instead, companies or individuals must apply to the state for a water right and specify the amount of water to be used, its intended use, and the specific point from where the water will be diverted for use, such as a specific point on a river or stream. Utah approves the application for a water right through an administrative process, and Colorado approves the application for a water right through a court proceeding. The date of the application establishes its priority—earlier applicants have preferential entitlement to water over later applicants if water availability decreases during a drought. These earlier applicants are said to have senior water rights. When an applicant puts a water right to beneficial use, it is referred to as an absolute water right. Until the water is used, however, the applicant is said to have a conditional water right. Even if the applicant has not yet put the water to use, such as when the applicant is waiting for the construction of a reservoir, the date of the application still establishes priority. Water rights in both Colorado and Utah can be bought and sold, and strong demand for water in these western states facilitates such sales. A significant challenge to the development of oil shale lies in the current technology to economically extract oil from oil shale. To extract the oil, the rock needs to be heated to very high temperatures—ranging from about 650 to 1,000 degrees Fahrenheit—in a process known as retorting. Retorting can be accomplished primarily by two methods. One method involves mining the oil shale, bringing it to the surface, and heating it in a vessel known as a retort. Mining oil shale and retorting it has been demonstrated in the United States and is currently done to a limited extent in Estonia, China, and Brazil. However, a commercial mining operation with surface retorts has never been developed in the United States because the oil it produces competes directly with conventional crude oil, which historically has been less expensive to produce. The other method, known as an in-situ process, involves drilling holes into the oil shale, inserting heaters to heat the rock, and then collecting the oil as it is freed from the rock. Some in-situ technologies have been demonstrated on very small scales, but other technologies have yet to be proven, and none has been shown to be economically or environmentally viable. Nevertheless, according to some energy experts, the key to developing our country's oil shale is the development of an in-situ process because most of the richest oil shale is buried beneath hundreds to thousands of feet of rock, making mining difficult or impossible. Additional economic challenges include transporting the oil produced from oil shale to refineries, because pipelines and major highways are scarce in the remote areas where the oil shale is located, and supplying the power needed to heat the oil shale, because large-scale electric infrastructure is also lacking in those areas. In addition, average crude oil prices have been lower than the threshold necessary to make oil shale development profitable over time. Large-scale oil shale development also brings socioeconomic impacts.
There are obvious positive impacts such as the creation of jobs, increases in wealth, and tax and royalty payments to governments, but there are also negative impacts to local communities. Oil shale development can bring a sizeable influx of workers, who, along with their families, put additional stress on local infrastructure such as roads, housing, municipal water systems, and schools. Development from expansion of extractive industries, such as oil shale or oil and gas, has typically followed a "boom and bust" cycle in the West, making planning for growth difficult. Furthermore, traditional rural uses could be replaced by the industrial development of the landscape, and tourism that relies on natural resources, such as hunting, fishing, and wildlife viewing, could be negatively impacted. Developing oil shale resources also faces significant environmental challenges. For example, construction and mining activities can temporarily degrade air quality in local areas. There can also be long-term regional increases in air pollutants from oil shale processing, upgrading, pipelines, and the generation of additional electricity. Pollutants, such as dust, nitrogen oxides, and sulfur dioxide, can contribute to the formation of regional haze that can affect adjacent wilderness areas, national parks, and national monuments, which can have very strict air quality standards. Because oil shale operations clear large surface areas of topsoil and vegetation, some wildlife habitat will be lost. Important species likely to be negatively impacted by the loss of wildlife habitat include mule deer, elk, sage grouse, and raptors. Noise from oil shale operations, access roads, transmission lines, and pipelines can further disturb wildlife and fragment their habitat. Environmental impacts could be compounded by the impacts of coal mining, construction, and extensive oil and gas development in the area. Air quality and wildlife habitat appear to be particularly susceptible to the cumulative effect of these impacts, and according to some environmental experts, air quality impacts may be the limiting factor for the development of a large oil shale industry in the future. Lastly, the withdrawal of large quantities of surface water for oil shale operations could negatively impact aquatic life downstream of the oil shale development. My testimony today will discuss impacts to water resources in more detail. In our October report, we found that oil shale development could have significant impacts on the quantity and quality of surface and groundwater resources, but the magnitude of these impacts is unknown. For example, we found that it is not possible to quantify impacts on water resources with reasonable certainty because it is not yet possible to predict how large an oil shale industry may develop. The size of the industry would have a direct relationship to water impacts. We noted that, according to BLM, the level and degree of the potential impacts of oil shale development cannot be quantified because this would require making many speculative assumptions regarding the potential of the oil shale, unproven technologies, project size, and production levels. Hydrologists and engineers, while not able to quantify the impacts from oil shale development, have been able to determine the qualitative nature of its impacts because other types of mining, construction, and oil and gas development cause disturbances similar to impacts that would be expected from oil shale development.
According to these experts, in the absence of effective mitigation measures, impacts from oil shale development to water resources could result from disturbing the ground surface during the construction of roads and production facilities, withdrawing water from streams and aquifers for oil shale operations, underground mining and extraction, and discharging waste waters from oil shale operations. For example, we reported that oil shale operations need water for a number of activities, including mining, constructing facilities, drilling wells, generating electricity for operations, and reclamation of disturbed sites. Water for most of these activities is likely to come from nearby streams and rivers because it is more easily accessible and less costly to obtain than groundwater. Withdrawing water from streams and rivers would decrease flows downstream and could temporarily degrade downstream water quality by depositing sediment within the stream channels as flows decrease. The resulting decrease in water would also make the stream or river more susceptible to temperature changes—increases in the summer and decreases in the winter. Elevated temperatures could have adverse impacts on aquatic life, which needs specific temperatures for proper reproduction and development; they could also decrease dissolved oxygen, which aquatic animals need. We also reported that both underground mining and in-situ operations would permanently impact aquifers. For example, underground mining would permanently alter the properties of the zones that are mined, thereby affecting groundwater flow through these zones. The process of removing oil shale from underground mines would create large tunnels from which water would need to be removed during mining operations. The removal of this water through pumping would decrease water levels in shallow aquifers and decrease flows to streams and springs that are connected. When mining operations cease, the tunnels would most likely be filled with waste rock, which would have a higher degree of porosity and permeability than the original oil shale that was removed. Groundwater flow through this material would increase permanently, and the direction and pattern of flows could change permanently. Similarly, in-situ extraction would also permanently alter aquifers because it would heat the rock to temperatures that transform the solid organic compounds within the rock into liquid hydrocarbons and gas that would fracture the rock upon escape. The long-term effects of groundwater flows through these retorted zones are unknown. Some in-situ operations envision using a barrier to isolate thick zones of oil shale with intervening aquifers from any adjacent aquifers and pumping out all the groundwater from this isolated area before retorting. The discharge of waste waters from operations would also temporarily increase water flows in receiving streams. These discharges could also decrease the quality of downstream water if the discharged water is of lower quality, has a higher temperature, or contains less oxygen. Lower-quality water containing toxic substances could increase fish and invertebrate mortality. Also, increased flow into receiving streams could cause downstream erosion. However, if companies recycle waste water and water produced during operations, these discharges and their impacts could be substantially reduced.
Commercial oil shale development requires water for numerous activities throughout its life cycle; however, we found that estimates vary widely for the amount of water needed to produce oil shale. These variations stem primarily from the uncertainty associated with reclamation technologies for in-situ oil shale development and from the various ways to generate power for oil shale operations, which use different amounts of water. In our October report, we stated that water is needed for five distinct groups of activities that occur during the life cycle of oil shale development: (1) extraction and retorting, (2) upgrading of shale oil, (3) reclamation, (4) power generation, and (5) population growth associated with oil shale development. However, few of the studies that we examined included estimates for the amount of water used by each of these activities. Consequently, we calculated estimates of the minimum, maximum, and average amounts of water that could be needed for each of the five groups of activities that comprise the life cycle of oil shale development. Based on our calculations, we estimated that about 1 to 12 barrels of water could be needed for each barrel of oil produced from in-situ operations, with an average of about 5 barrels (see table 1), and that about 2 to 4 barrels of water could be needed for each barrel of oil produced from mining operations with a surface retort, with an average of about 3 barrels (see table 2). In October 2010, we reported that water is likely to be available for the initial development of an oil shale industry, but the eventual size of the industry may be limited by the availability of water and demands for water to meet other needs. Oil shale companies operating in Colorado and Utah will need to have water rights to develop oil shale, and representatives from all of the companies with whom we spoke were confident that they held at least enough water rights for their initial projects and will likely be able to purchase more rights in the future. According to a study of water rights ownership in the Colorado and White River Basins of Colorado by Western Resource Advocates, a nonprofit environmental law and policy organization, oil shale companies have significant water rights in the area. For example, the study found that Shell owns three conditional water rights for a combined diversion of about 600 cubic feet per second from the White River and one of its tributaries and has conditional rights for the combined storage of about 145,000 acre-feet in two proposed nearby reservoirs. In addition to exercising existing water rights and agreements, there are other options for companies to obtain more water rights in the future, according to state officials in Colorado and Utah. In Colorado, companies can apply for additional water rights in the Piceance Basin on the Yampa and White Rivers. For example, Shell recently applied—but subsequently withdrew the application—for conditional rights to divert up to 375 cubic feet per second from the Yampa River for storage in a proposed reservoir that would hold up to 45,000 acre-feet for future oil shale development. In Utah, however, officials with the State Engineer's office said that additional water rights are not available, but that if companies want additional rights, they could purchase them from other owners.
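To put these per-barrel estimates on the same footing as the acre-feet figures used in the water demand discussion that follows, the sketch below converts them into annual volumes for a hypothetical industry size. The 500,000 barrel-per-day figure is our own illustrative assumption, not one from the report; the conversion constants are standard definitions (42 gallons per barrel, 325,851 gallons per acre-foot).

```python
# Illustrative conversion of the per-barrel water estimates into annual
# volumes for a hypothetical industry size (not a figure from the report).

GALLONS_PER_BARREL = 42          # standard oil barrel
GALLONS_PER_ACRE_FOOT = 325_851  # standard U.S. definition

def annual_water_acre_feet(oil_barrels_per_day, water_barrels_per_oil_barrel):
    """Acre-feet of water per year for a given oil output and water intensity."""
    gallons_per_year = (oil_barrels_per_day
                        * water_barrels_per_oil_barrel
                        * GALLONS_PER_BARREL * 365)
    return gallons_per_year / GALLONS_PER_ACRE_FOOT

# Hypothetical 500,000 barrel-per-day in-situ industry at the average
# estimate of 5 barrels of water per barrel of oil:
print(round(annual_water_acre_feet(500_000, 5)))   # ~117,615 acre-feet/year
# Same hypothetical industry at the low and high in-situ estimates (1 and 12):
print(round(annual_water_acre_feet(500_000, 1)))   # ~23,523
print(round(annual_water_acre_feet(500_000, 12)))  # ~282,276
```

Across the 1-to-12-barrel range for in-situ operations, annual water demand for the same hypothetical industry varies by an order of magnitude, which underscores why the unsettled technology makes the water question difficult to pin down.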
Most of the water needed for oil shale development is likely to come first from surface flows, as groundwater is more costly to extract and generally of poorer quality in the Piceance and Uintah Basins. However, companies may use groundwater in the future should they experience difficulties in obtaining rights to surface water. Furthermore, water is likely to come initially from surface sources immediately adjacent to development, such as the White River and its tributaries that flow through the heart of oil shale country in Colorado and Utah, because the cost of pumping water over long distances and rugged terrain would be high, according to water experts. Developing a sizable oil shale industry may take many years—perhaps 15 or 20 years by some industry and government estimates—and such an industry may have to contend with increased demands for water to meet other needs. For example, substantial population growth and its correlative demand for water are expected in the oil shale regions of Colorado and Utah. State officials expect that the population within the region surrounding the Yampa, White, and Green Rivers in Colorado will triple between 2005 and 2050. These officials expect that this added population and corresponding economic growth will, by 2030, increase municipal and industrial demands for water, exclusive of oil shale development, by about 22,000 acre-feet per year, or a 76 percent increase from 2000. Similarly, in Utah, state officials expect the population of the Uintah Basin to more than double its 1998 size by 2050 and correlative municipal and industrial water demands to increase by 7,000 acre-feet per year, or about 30 percent from the mid-1990s. Municipal officials in two communities adjacent to proposed oil shale development in Colorado said that they were confident of meeting their future municipal and industrial demands from their existing senior water rights and as such will probably not be affected by the water needs of a future oil shale industry. However, large withdrawals could impact agricultural interests and other downstream water users in both states, as oil shale companies may purchase existing irrigation and agricultural rights for their oil shale operations. State water officials in Colorado told us that some holders of senior agricultural rights have already sold their rights to oil shale companies. A future oil shale industry may also need to contend with a general decrease in the physical supply of water regionwide due to climate change; Colorado's and Utah's obligations under interstate compacts that could further reduce the amount of water available for development; and limitations on withdrawals from the Colorado River system to meet the requirements to protect certain fish species under the Endangered Species Act. Oil shale companies own rights to a large amount of water in the oil shale regions of Colorado and Utah, but we concluded that there are physical and legal limits on how much water they can ultimately withdraw from the region's waterways, which will limit the eventual size of the overall industry. Physical limits are set by the amount of water that is present in the river, and the legal limit is the sum of the water that can be legally withdrawn from the river as specified in the water rights held by downstream users.
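The seniority principle behind these legal limits can be pictured as a simple allocation procedure: in a dry year, rights are filled in order of application date until the physically available supply runs out. The following is a minimal sketch of that prior appropriation logic; the right holders, dates, and quantities are invented, and it is not a model of either state's actual administrative process.

```python
from dataclasses import dataclass

@dataclass
class WaterRight:
    holder: str
    priority_year: int   # earlier application date = more senior right
    amount_cfs: float    # decreed diversion, cubic feet per second

def allocate(rights, available_cfs):
    """Fill rights in priority (seniority) order until supply runs out."""
    allocations = {}
    remaining = available_cfs
    for right in sorted(rights, key=lambda r: r.priority_year):
        granted = min(right.amount_cfs, max(remaining, 0.0))
        allocations[right.holder] = granted
        remaining -= granted
    return allocations

# Invented example: in a drought year, junior rights are cut first.
rights = [
    WaterRight("Irrigation district", 1952, 300.0),  # senior right
    WaterRight("Municipality",        1968, 150.0),
    WaterRight("Oil shale company",   2008, 600.0),  # junior conditional right
]
print(allocate(rights, available_cfs=500.0))
# {'Irrigation district': 300.0, 'Municipality': 150.0, 'Oil shale company': 50.0}
```

Run on these invented numbers, the senior irrigation and municipal rights are filled in full while the junior oil shale right is only partially served, which is one reason companies have sought to purchase senior agricultural rights.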
Our analysis of the development of an oil shale industry at Meeker, Colorado, based on the water available in the White River, suggests that the river holds much more water than would be needed to support any of the industry sizes we considered that rely on mining and surface retorting. However, if an industry that uses in-situ extraction develops, the amount of water physically available in the White River alone could limit the industry's size. Since 2006, the federal government has sponsored over $22 million of research on oil shale development, and of this amount about $5 million was spent on research related to the nexus between oil shale development and water. Even with this research, we reported that there is a lack of comprehensive data on the condition of surface water and groundwater and their interaction, which limits efforts to monitor and mitigate the future impacts of oil shale development. Currently DOE funds most of the research related to oil shale and water resources, including research on water rights, water needs, and the impacts of oil shale development on water quality. Interior also performs limited research on characterizing surface and groundwater resources in oil shale areas and is planning some limited monitoring of water resources. However, there is general agreement among those we contacted—including state personnel who regulate water resources, federal agency officials responsible for studying water, water researchers, and water experts—that this ongoing research is insufficient to monitor and then subsequently mitigate the potential impacts of oil shale development on water resources. Specifically, they identified the need for additional research in the following areas:

• Comprehensive baseline conditions for surface water and groundwater quality and quantity. Experts we spoke with said that more data are needed on the chemistry of surface water and groundwater, properties of aquifers, age of groundwater, flow rates and patterns of groundwater, and groundwater levels in wells.

• Groundwater movement and its interaction with surface water. Experts we spoke with said that additional research is needed to develop a better understanding of the interactions between groundwater and surface water and of groundwater movement for modeling possible transport of contaminants. In this context, more subsurface imaging and visualization are needed to build geologic and hydrologic models and to study how quickly groundwater migrates. Such tools will aid in monitoring and in providing data that do not currently exist.

In addition, we found that DOE and Interior officials seldom formally share the information on their water-related research with each other. USGS officials who conduct water-related research at Interior and DOE officials at the National Energy Technology Laboratory (NETL), which sponsors the majority of the water and oil shale research at DOE, stated they have not talked with each other about such research in almost 3 years. USGS staff noted that although DOE is currently sponsoring most of the water-related research, USGS researchers were unaware of most of these projects. In addition, staff at DOE's Los Alamos National Laboratory who are conducting some water-related research for DOE noted that various researchers are not always aware of studies conducted by others and stated that there needs to be a better mechanism for sharing this research.
Based on our review, we found there does not appear to be any formal mechanism for sharing water-related research activities and results among Interior, DOE, and state regulatory agencies in Colorado and Utah. The last general meeting to discuss oil shale research among these agencies was in October 2007, but there have been opportunities to informally share research at the annual Oil Shale Symposium, such as the one that was conducted at the Colorado School of Mines in October 2010. Of the various officials with the federal and state agencies, representatives from research organizations, and water experts we contacted, many noted that federal and state agencies could benefit from collaboration with each other on water-related research involving oil shale. Representatives from NETL stated that collaboration should occur at least every 6 months. As a result of our findings, we made three recommendations in our October 2010 report to the Secretary of the Interior. Specifically, we stated that to prepare for possible impacts from the future development of oil shale, the Secretary should direct the appropriate managers in the Bureau of Land Management and the U.S. Geological Survey to

• establish comprehensive baseline conditions for groundwater and surface water quality, including their chemistry, and quantity in the Piceance and Uintah Basins to aid in the future monitoring of impacts from oil shale development in the Green River Formation;

• model regional groundwater movement and the interaction between groundwater and surface water, in light of aquifer properties and the age of groundwater, so as to help in understanding the transport of possible contaminants derived from the development of oil shale; and

• coordinate with the Department of Energy and state agencies with regulatory authority over water resources in implementing these recommendations, and provide a mechanism for water-related research collaboration and sharing of results.

Interior generally concurred with our recommendations. In response to our first recommendation, Interior commented that there are ongoing USGS efforts to analyze existing water quality data in the Piceance Basin and to monitor surface water quality and quantity in both basins but that it also plans to conduct more comprehensive assessments in the future. With regard to our second recommendation, Interior stated that BLM and USGS are working on identifying shared needs for modeling. Interior underscored the importance of modeling prior to the approval of large-scale oil shale development and cited the importance of the industry's testing of various technologies on federal RD&D leases to determine if production can occur in commercial quantities and to develop an accurate determination of potential water uses for each technology. In support of our third recommendation to coordinate with DOE and state agencies with regulatory authority over water resources, Interior stated that BLM and USGS are working to improve such coordination and noted current ongoing efforts with state and local authorities. In conclusion, Mr. Chairman, attempts to commercially develop oil shale in the United States have spanned nearly a century. During this time, the industry has focused primarily on overcoming technological challenges and trying to develop a commercially viable operation.
However, there are a number of uncertainties associated with the impacts that a commercially viable oil shale industry could have on water availability and quality that should be an important focus for federal agencies and policymakers going forward. Chairman Lamborn, Ranking Member Holt, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Anu K. Mittal, Director, Natural Resources and Environment team, (202) 512-3841 or [email protected]. In addition to the individual named above, key contributors to this testimony were Dan Haas (Assistant Director), Quindi Franco, Alison O'Neill, Barbara Timmerman, and Lisa Vojta.
Oil shale deposits in Colorado, Utah, and Wyoming are estimated to contain up to 3 trillion barrels of oil--or an amount equal to the world's proven oil reserves. About 72 percent of this oil shale is located beneath federal lands managed by the Department of the Interior's Bureau of Land Management, making the federal government a key player in its potential development. Extracting this oil is expected to require substantial amounts of water and could impact groundwater and surface water. GAO's testimony is based on its October 2010 report on the impacts of oil shale development (GAO-11-35). This testimony summarizes (1) what is known about the potential impacts of oil shale development on surface water and groundwater, (2) what is known about the amount of water that may be needed for commercial oil shale development, (3) the extent to which water will likely be available for such development and its source, and (4) federal research efforts to address impacts to water resources from commercial oil shale development. For its October 2010 report, GAO reviewed studies and interviewed water experts, officials from federal and state agencies, and oil shale industry representatives. Oil shale development could have significant impacts on the quality and quantity of water resources, but the magnitude is unknown because technologies are not yet commercially proven, the size of a future industry is uncertain, and knowledge of current water conditions is limited. In the absence of effective mitigation measures, water resources could be impacted by disturbing the ground surface during the construction of roads and production facilities, withdrawing water from streams and aquifers for oil shale operations, underground mining and extraction, and discharging waste waters produced from or used in such operations. Commercial oil shale development requires water for numerous activities throughout its life cycle, but estimates vary widely for the amount of water needed to commercially produce oil shale, primarily because of the unproven nature of some technologies and because the various ways of generating power for operations use differing quantities of water. GAO's review of available studies indicated that the expected total water needs for the entire life cycle of oil shale production range from about 1 barrel (or 42 gallons) to 12 barrels of water per barrel of oil produced from in-situ (underground heating) operations, with an average of about 5 barrels, and from about 2 to 4 barrels of water per barrel of oil produced from mining operations with surface heating, with an average of about 3 barrels. GAO reported that water is likely to be available for the initial development of an oil shale industry but that the size of an industry in Colorado or Utah may eventually be limited by water availability. Water limitations may arise from increases in water demand from municipal and industrial users, the potential for reduced water supplies from a warming climate, the need to fulfill obligations under interstate water compacts, and limits on withdrawals from the Colorado River system to meet the requirements to protect threatened and endangered fish species. The federal government sponsors research on the impacts of oil shale on water resources through the Departments of Energy (DOE) and Interior.
Even with this research, nearly all of the officials and experts that GAO contacted said that there are insufficient data to understand baseline conditions of water resources in the oil shale regions of Colorado and Utah and that additional research is needed to understand the movement of groundwater and its interaction with surface water. Federal agency officials also told GAO that they seldom coordinate water-related oil shale research among themselves or with state agencies that regulate water. In its October report, GAO made three recommendations to the Secretary of the Interior to prepare for the possible impacts of oil shale development, including establishing comprehensive baseline conditions for water resources in the oil shale regions of Colorado and Utah, modeling regional groundwater movement, and coordinating with DOE and state agencies involved in water regulation on water-related research. The Department of the Interior generally concurred with the recommendations. GAO is making no new recommendations at this time.
As of December 2014, 39 states were using comprehensive, risk-based managed care in their Medicaid programs. States vary considerably in the extent to which they enroll beneficiaries in managed care versus delivering care through the more traditional fee-for-service (FFS) model. For example, as of July 2013—the most recent enrollment data available—rates of managed care enrollment among states using it ranged from 7 to 100 percent. (See fig. 1.) As is true with Medicaid FFS, states vary in terms of the populations and services included in their managed care programs. For example, some states carve out certain types of services from their managed care contracts, such as behavioral health care services or dental services, and provide those services separately, while other states include those services. States have the flexibility within federal parameters to determine whether enrollment in managed care will be mandatory (required for beneficiaries) or voluntary (beneficiaries have a choice between managed care and FFS). Further, states may have mandatory enrollment for some populations, but voluntary enrollment for others, and can also transition populations between voluntary and mandatory enrollment over time. Under contracts between states and MCOs, the state pays the MCO a set amount (or “rate”) per member (or beneficiary) per month to provide all covered services and, in turn, the MCO pays providers to deliver the services. In addition to covering medical services for beneficiaries, the payment rates are expected to cover an MCO’s administrative expenses and profit. Under such contracts, the MCO is at risk for any costs above the agreed upon rate. Rates must, by law, be actuarially sound, meaning that they must be appropriate for the populations to be covered and for the services furnished. Rates can vary by type of beneficiary to reflect estimated differences in utilization. For example, a state may have different rates for children, adults under age 65, and adults 65 years of age and older. Rates may also differ by geographic region within a state. While not applicable to MCOs operating in Medicaid, PPACA requires that private insurers operating in the large group insurance market, as well as the organizations and sponsors offering coverage through the Medicare Advantage (MA) and Medicare Part D programs, meet or exceed an 85 percent MLR standard. Furthermore, private insurers operating in the individual and small group markets must meet an 80 percent MLR minimum. To comply with these standards, under PPACA, insurers, MA organizations, and Part D sponsors with a relatively small enrollment have some flexibility in accounting for the disproportionate effect of random claims variability (where actual claims experience varies significantly from what is expected) on their ability to meet the MLR standard. While all insurers may experience some random claims variability, the effect of these deviations is greater for insurers with a small customer base. PPACA mandated a specific MLR formula for private insurers, and CMS rules implementing MLRs in Medicare established a specific formula for MA organizations and Part D sponsors. For example, the MLR for private insurers expresses the percentage of premiums collected (less state and federal taxes, and licensing and regulatory fees) that insurers spend on their beneficiaries’ medical claims and quality improvement activities. 
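As a rough illustration of the arithmetic just described, the sketch below implements the simplified form of the private-insurer formula: medical claims plus quality improvement spending, divided by premiums net of taxes and fees. It omits regulatory details such as the small-enrollment adjustments for random claims variability noted above, and the dollar figures are invented.

```python
def simplified_mlr(claims, quality_improvement, premiums, taxes_and_fees):
    """Share of net premium revenue spent on claims and quality activities."""
    return (claims + quality_improvement) / (premiums - taxes_and_fees)

# Invented figures: $100M in premiums, $4M in taxes/fees, $78M in claims,
# $3.5M in quality improvement activities.
base = simplified_mlr(claims=78.0, quality_improvement=0.0,
                      premiums=100.0, taxes_and_fees=4.0)
with_quality = simplified_mlr(claims=78.0, quality_improvement=3.5,
                              premiums=100.0, taxes_and_fees=4.0)
print(f"{base:.1%}")          # 81.2% -- below an 85 percent standard
print(f"{with_quality:.1%}")  # 84.9% -- counting quality spending raises the MLR
```

The second calculation previews a point made later about state Medicaid MLRs: whether quality improvement expenses count in the numerator can move a plan above or below a minimum, so MLRs computed under different methodologies are not directly comparable.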
In general, the greater the share of beneficiaries' premiums spent on medical claims and quality initiatives, the higher the MLR. (See fig. 2.) MLR requirements established under PPACA allow insurers to count as expenses quality improvement activities that are primarily designed to (1) improve health outcomes; (2) prevent hospital readmissions; (3) improve patient safety and reduce medical errors; or (4) implement, promote, and increase wellness and health activities. Insurers are also allowed to include certain other expenses, such as health information technology required to accomplish activities to improve health care quality. As such, insurers are able to include expenses for a variety of activities in the numerator of the MLR formula. Examples of such quality improvement activities include case management, care coordination, medication and care compliance initiatives, patient-centered education and counseling, activities to lower the risk of facility-acquired infections, and wellness assessments. Under these requirements, for each year that a private insurer does not meet the required MLR minimum, it must pay rebates to its policyholders. Likewise, MA organizations and Medicare Part D sponsors must pay CMS a remittance if they do not meet the required MLR minimum in a contract year. MA organizations and Medicare Part D plans are also subject to enrollment sanctions and contract termination after failing to meet the MLR requirement for three and five consecutive years, respectively. States are not required under federal policy to have contracted MCOs meet a minimum MLR standard. However, states may choose to establish their own MLR standards governing the proportion of capitation payments that MCOs must spend on providing medical services to beneficiaries, thus limiting the amount of payments allowed for MCO profit and administrative expenses. States may also choose to establish their own formula for calculating MLRs for contracted MCOs. When automatically assigning a beneficiary to a Medicaid managed care plan offered by an MCO, states may offer beneficiaries a certain amount of time (the length of which is at the discretion of the state) to choose a plan at the time of enrollment. If the beneficiary does not choose a plan within that time frame, the state automatically assigns—or defaults—the beneficiary to a plan. Alternatively, in some cases, states can automatically assign beneficiaries to a plan at the time of enrollment, providing them no initial period during which to choose among plan offerings. The beneficiary is then given a certain number of days after the assignment is made to opt out and choose another plan if they do not want to be enrolled in the one to which they were assigned. Current Medicaid policy requires states to consider certain factors—with some factors taking priority—in designing auto assignment methods, but also allows states discretion to consider other factors. States using a default enrollment process must give priority to maintaining existing provider-beneficiary relationships and relationships with providers that have traditionally served Medicaid beneficiaries. If that is not possible, states must equitably distribute beneficiaries among participating plans. However, states may also consider other factors, such as a beneficiary's geographic location or the enrollment preferences of their family members.
Federal spending for Medicaid managed care nationally increased significantly from federal fiscal years 2004 through 2014, representing over a third of total federal Medicaid spending in 2014. Total payments to MCOs and average per beneficiary payments showed considerable variation across selected states in state fiscal year 2014. Federal spending for Medicaid managed care increased significantly over the past decade—from $27 billion in fiscal year 2004 to $107 billion in fiscal year 2014—and represented a significantly larger portion of total federal Medicaid spending in 2014 than it did 10 years earlier. Specifically, managed care expenditures grew as a proportion of overall federal Medicaid spending from 13 percent in fiscal year 2004 to 38 percent in fiscal year 2014. (See fig. 3.) A number of factors have likely contributed to growth in federal expenditures, including states increasing the proportion of their population that they enroll in managed care. For example, in state fiscal year 2014, Florida expanded the populations for which managed care was mandatory, which increased enrollment from 1.4 million to just under 3 million beneficiaries, according to state officials. There was also significant growth from fiscal years 2013 through 2014, which suggests that the Medicaid expansion to low-income adults—and the increased availability of federal funds beginning in January 2014—also contributed to growth. CMS's Office of the Actuary reported in 2015 that Medicaid expenditures for and enrollment in managed care have grown in recent years and projected accelerated growth over the next 10 years. The office attributed this acceleration to many states continuing to enroll those newly eligible due to the Medicaid expansion in managed care and the expanded use of managed care to cover the aged and disabled and to cover long-term services and supports (LTSS). Federal expenditures for managed care varied widely by state—ranging from $5.8 million in North Dakota to $14.3 billion in California—in fiscal year 2014. (See appendix II for expenditures by state.) Also, in fiscal year 2014, federal spending for managed care as a percentage of total federal Medicaid spending varied considerably across the 39 states with managed care. For example, in 11 states, expenditures for managed care represented less than 25 percent of total federal Medicaid expenditures, while in 3 states such expenditures represented 75 percent or more of total federal Medicaid expenditures. (See fig. 4.) Consistent with the national trend, in seven of our eight selected states, the proportion of total federal Medicaid spending represented by managed care was significantly higher in fiscal year 2014 than in fiscal year 2004, with increases ranging from 17 to 59 percent. For one state—Arizona—the proportion of managed care expenditures as a percentage of total Medicaid expenditures declined from 82 percent in 2004 to 69 percent in 2014. However, state officials attributed the entire decline to a change in how behavioral health expenditures were reported by the state, with the 2004 data including behavioral health expenditures and the 2014 data excluding them. (See fig. 5.) Reflecting variation common in the Medicaid program generally, state payments to MCOs varied considerably across and within states. In state fiscal year 2014, total capitated payments to MCOs in the eight selected states ranged from $1.3 billion in Louisiana to $18.2 billion in California.
Payments to individual MCOs ranged from $17.3 million to $3.1 billion across states and varied widely within some states, with at least one MCO receiving payments above $1 billion in six of the eight states. (See table 1.) The average annual amount of payment per beneficiary also varied significantly across the selected states. Specifically, average capitated payments per beneficiary ranged from $2,784 in California to $5,180 in Pennsylvania for state fiscal year 2014. (See table 2.) A number of factors may have contributed to the variation in average per beneficiary cost.

• The populations the state enrolled in managed care: States varied in the populations they enrolled in managed care. For example, three of our selected states enrolled elderly or disabled beneficiaries qualifying for LTSS in their managed care programs, while the remaining five did not. In Arizona, the average annual payment per beneficiary for the population qualifying for LTSS was $37,700, compared to the average annual payment of $3,000 for all other populations.

• The services included in the capitation rate: Some of our selected states carved certain types of services out of their programs and provided them separately. For example, Arizona provided behavioral health care through separate programs for certain populations. In contrast, Tennessee included those services in its program.

• Geographic differences in costs and utilization of care: Our review of approved rates indicated that rates for similar populations could differ across states. Because rates reflect a state's assumptions about utilization and cost for a given population and are generally developed using cost data from previous years, the variation across states likely reflects some geographic differences in costs and utilization. For example, payment rates for children under the age of 1 ranged from $416 to $769 per beneficiary per month across four of our selected states that specified a rate for that age group. Similarly, in the four states with a separate rate for maternity care, rates ranged from about $4,960 in areas of one state to over $11,000 in certain areas of another state. Rates also varied regionally within several states. For example, one state approved rates at the county level, and its rates for children under the age of 1 ranged from $416 to $551 per beneficiary per month.

In past work, we found that service utilization in managed care varied by state and by population—including whether beneficiaries were enrolled for a full year or part of a year—and that MCO payments to providers for particular services can also vary considerably across states. Five of our eight selected states—Arizona, Florida, Louisiana, Michigan, and Washington—required MCOs to annually meet a minimum MLR percentage. The MLR minimums required in the five states generally ranged from 83 to 85 percent for most populations. The exception to this range was that Washington set a separate MLR minimum of 88 percent for its program covering beneficiaries who are blind or disabled. The required minimums in the five states were similar to the 85 percent federal MLR minimum mandated by PPACA for private, large group insurers, MA organizations, and Part D sponsors. (See table 3.) The methodologies used to calculate the MLRs differed across the five states with required MLR minimums.
These differences in methodology were most pronounced regarding whether the state counted MCO expenses for activities to improve health care quality as expenses that qualify toward meeting the state's required minimum. Three of the five states specifically allowed MCOs to include activities to improve health care quality, as PPACA allows for private insurers, MA organizations, and Part D plan sponsors. The remaining two states either accounted for more limited quality activities—for example, Arizona allowed for the inclusion of case management for its LTSS population—or did not account for them at all. All else being equal, states that allow MCOs to include the costs of quality activities would expect to see higher MLRs. We also found differences in how states defined medical expenditures for inclusion in the MLR calculation. For example, Florida allowed MCOs to count as medical expenses in the numerator funds provided to graduate medical education institutions to underwrite residency position costs and contributions to the state trust fund for the purpose of supporting Medicaid and indigent care. The remaining three selected states—California, Pennsylvania, and Tennessee—did not require MCOs to meet MLR minimums but did monitor MLRs. For example, Tennessee officials explained that the state has routine processes in place to monitor MLR performance. The state requires MCOs to submit annual MLR reports and, according to officials, will follow up with MCOs if it has concerns about reported MLRs. Officials from California told us the state uses MCO MLRs to observe trends for most populations in its managed care programs. The state Medicaid agency does not require MCOs to submit MLR-specific data but does calculate MLRs for MCOs using their reported financial information. Additionally, in 2014, for the adult expansion population only, California applied an MLR risk corridor of 85 to 95 percent to MCOs. While not an MLR minimum, this risk corridor represented the range of MLRs that the state maintains for the adult expansion population covered by MCOs. Data provided by the five selected states with required MLR minimums indicated that MLRs were above the required minimums for all MCOs in 2014. Among the three selected states without required minimums, the average reported MLRs fell generally within the same range as in the states with required minimums. (See table 4.) Furthermore, officials from the five states with required MLR minimums told us that their participating MCOs generally met the MLR minimums. A high percentage of MCOs meeting the MLR minimums may be expected; for example, we found in previous work that over three-quarters of private insurers met or exceeded the PPACA MLR minimum requirement in 2011 and 2012. If MCOs do not meet the minimum MLR requirements, there is a range of sanctions that our selected states might impose, but officials from the five states with required minimums confirmed that they had employed sanctions related to MLR requirements rarely, if at all, in the last three contract or fiscal years. Potential sanctions outlined in MCO contracts included requiring MCOs to submit corrective action plans, restricting an MCO's enrollment by freezing automatic assignment, or terminating an MCO from the managed care program. Two of the five states with MLR minimum requirements for Medicaid managed care—Louisiana and Washington—require MCOs to reimburse the state if the MLR minimum requirements are not met.
Officials from Louisiana—which requires MCOs to pay a rebate—were not aware of any occasion where the state sought a rebate from an MCO. Washington officials told us that one of its MCOs did not meet MLR minimums for the July 1, 2012, through December 31, 2013, contract, and as a result, was required to pay the state over $4 million. Information from two states indicated that they also monitor MCOs with MLRs that they consider to be high, because high MLRs could be an indication that rates are not adequate. Specifically, Florida indicated that the state will monitor the financial performance of MCOs with MLRs at or above 95 percent. In addition, although Tennessee does not have a required MLR minimum, officials indicated that they engage MCO representatives about MCO fiscal performance if MLRs are trending above 92 percent, as well as if they are trending below 85 percent. Officials told us the state also used the MLR as a measure to inform its rate setting process, which is done to determine whether the rates paid to MCOs are appropriate and sufficient. Interviews with state officials indicated that MLR standards are just one of several methods used by states in their effort to ensure that MCOs are using an appropriate amount of payments to provide medical care. Officials from seven of the eight states indicated that they also use the rate setting process, during which states review data on medical and administrative costs for prior years. In Tennessee, officials told us that the state surveys MCOs to obtain specific data regarding their administrative costs. Officials from several states with MLR minimums questioned the minimums' effectiveness and stated that they may not be applicable to all populations and programs. For example, officials from one state with a required MLR minimum explained that if an MCO disproportionately covers an inherently high-expenditure population (such as patients with human immunodeficiency virus), it will find it easier to meet the MLR minimum than another MCO with an inherently less expensive patient mix (such as children). Furthermore, officials from two other states with required minimums told us that the potential subjectivity in classifying certain expenses may dilute the usefulness of the MLR. CMS officials told us that MLR minimums are one measure of assessing MCO performance and that MLRs should be interpreted in the larger context. Officials noted that if a state were to set a 90 percent minimum and an MCO reports an MLR of 80 percent, it could be that the rates were set too high and the state overpaid. It could also mean that the rates were set appropriately, but the MCO performed very efficiently. The eight selected states varied in their methodologies for automatically assigning beneficiaries to plans offered by MCOs, but all of them first considered beneficiary factors, such as prior participation in a plan offered by a Medicaid MCO. Louisiana, for example, assessed four specific beneficiary factors to determine plan auto assignments; namely, whether the beneficiary had (1) family members who participated in a particular health plan; (2) a prior primary care provider who is participating in a Medicaid plan in the state; (3) prior claims history that could be used to identify a most frequently visited primary care provider; and (4) a Medicaid plan in which they were previously enrolled.
Tennessee's auto assignment method also initially considered beneficiary factors, for example, by re-enrolling beneficiaries who had lost Medicaid eligibility in the plan in which they were previously enrolled. Washington's and Michigan's processes prioritized automatically assigning beneficiaries to the same plan as family members. After considering beneficiary factors, four states—Arizona, California, Michigan, and Washington—also considered a variety of plan performance factors, such as performance on quality measures, in their auto assignment methodologies. Michigan and California assigned points to plans—that is, they gave preference to plans based on performance on multiple measures, such as the provision of well-child visits or comprehensive diabetes care. (A simplified sketch of this kind of points-based assignment process appears after the discussion of auto assignment rates below.) Michigan officials told us they change the performance measures considered on a quarterly basis to avoid a preference for plans that consistently do well in only a few measures. California also awarded points to account for plan improvement. In July 2014, Washington began considering plan performance on the completion of beneficiaries' initial health screens. According to state officials, including this measure in its auto assignment methods has been a useful tool in helping the state increase the initial health screening rate among beneficiaries. While not all of our selected states linked auto assignment to performance on quality measures, they all required MCOs to report on quality measures, including nationally recognized or other state-developed measures. Further, six of the eight selected states required MCOs to be accredited by the National Committee for Quality Assurance (NCQA) or another accrediting organization, a process that includes an independent review of the MCO and assessment of performance on quality. (See appendix III for more information on the selected states' methods for overseeing MCO quality.) Three of these states also considered administrative, cost, or other plan performance factors in their auto assignment methodologies. For example, Michigan assigned points based on administrative measures, such as performance on claims processing. Arizona's methodology factored in capitation rates and scores on the plan's contract proposal, with plans with the lowest awarded capitation rate and highest proposal score receiving an advantage in auto assignments. In addition to cost, California's auto assignment method included plan performance on two safety net measures, with plans earning points based on how they compare to other plans' scores in their geographic region. All eight of our selected states considered overall program goals in their auto assignment methods. (See figure 6 for an illustration of a state auto assignment method that considers beneficiary factors, plan performance, and overall program goals.) For example, states made auto assignment decisions based on such goals as ensuring plan capacity to serve additional beneficiaries or managing enrollment distributions across plans in certain geographic markets.

• Ensuring plan capacity: Florida, Michigan, and Washington considered plan capacity before auto assigning beneficiaries to plans. For example, beginning in July 2014, Washington required plans receiving auto assignments to demonstrate that they meet a certain capacity threshold to serve eligible beneficiaries in each of five critical provider types, including primary care and hospitals.
• Managing distribution across plans: Pennsylvania divided beneficiaries equally among plans in a certain geographic area, while Louisiana generally did not assign beneficiaries to plans enrolling 40 percent or more of total beneficiaries in the state. Arizona's auto assignment methodology had provisions to redistribute auto assignments in certain geographic areas where plans have enrollment of 45 percent or more of the total beneficiaries.

• Assisting plans entering the program or a new region: An Arizona official told us that the state may give preference during auto assignment to new plans entering the market in a particular region. Similarly, California's methodology included specific provisions for new plans, crediting those plans with average performance until the plans could produce performance data.

The rate of beneficiaries automatically assigned to plans, referred to as the auto assignment rate, varied considerably among states. Selected states' assignment rates ranged from 23 to 61 percent, with three states reporting rates of 30 percent or less and three other states reporting rates of 50 percent or more. Rates may vary by population, geographic area, and the method the state used to calculate the rate.

• Population: One state, Arizona, tracked auto assignment rates for its LTSS population and reported a rate about 26 percent lower for this population than for all other populations. An Arizona official noted that there is very little auto assignment among beneficiaries using LTSS because they are typically more engaged in their care and have more outside assistance when initially choosing a plan.

• Geographic region: Two states also provided information related to how auto assignments can vary by geographic region. For example, the percentage of total auto assignments for a particular plan in 22 Washington counties ranged from 10 percent to 98 percent. Florida officials also told us that rates vary by region, with Miami having a much lower auto assignment rate than other parts of the state.

• Calculation method: Variation in auto assignment rates among states was likely due, in part, to states not having a common method for calculating the rates. For example, Pennsylvania, a state with a lower auto assignment rate, excluded eligible beneficiaries who did not make a plan selection but were able to be assigned to the same plan as another active family member. In contrast, Louisiana, a state with a higher auto assignment rate, included such assignments in its calculation.

Differences in state enrollment policies, such as the length of time that beneficiaries have to choose a plan before auto assignment, may also contribute to the variation in auto assignment rates. Michigan, for example, reported giving beneficiaries 26 days to select a plan before being auto assigned by the state, while Washington, a state with a higher auto assignment rate, automatically assigned beneficiaries to a managed care plan at the time of enrollment but gave beneficiaries the option to change plans monthly. Other states may allow beneficiaries to select a plan at the time of enrollment before being auto assigned. For example, according to Louisiana officials, in February 2015, the state began requiring beneficiaries to choose a managed care plan at the time of enrollment, instead of giving beneficiaries 30 days to choose a plan, in an effort to phase out FFS claims processing by the state.
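A minimal sketch of the kind of points-based process described above (beneficiary factors first, then plan performance, subject to an enrollment cap) might look as follows. The factors, weights, and 40 percent cap are invented for illustration and do not reproduce any particular state's algorithm.

```python
def auto_assign(beneficiary, plans, enrollment, cap_share=0.40):
    """Assign a beneficiary who did not choose a plan within the window.

    Beneficiary factors take priority; otherwise the highest-scoring plan
    on performance points wins, skipping plans at or above the cap.
    """
    total = sum(enrollment.values()) or 1
    # Step 1: beneficiary factors, in priority order (invented hierarchy).
    for candidate in (beneficiary.get("prior_plan"),
                      beneficiary.get("family_plan")):
        if candidate in plans:
            return candidate
    # Step 2: plan performance points, subject to the distribution cap.
    eligible = [p for p in plans if enrollment.get(p, 0) / total < cap_share]
    return max(eligible or plans, key=lambda p: plans[p])

# Invented data: performance points per plan and current enrollment counts.
plans = {"Plan A": 87, "Plan B": 92, "Plan C": 78}
enrollment = {"Plan A": 3_000, "Plan B": 4_500, "Plan C": 2_500}
print(auto_assign({"prior_plan": None, "family_plan": None},
                  plans, enrollment))
# Plan B leads on points but sits at 45 percent of enrollment, over the
# invented 40 percent cap, so the highest-scoring eligible plan, Plan A, wins.
```

Variations on this skeleton can accommodate the other goals the states described, such as crediting a new plan with average performance until it can produce data of its own.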
Interviews with state officials indicated that states may adjust their auto assignment methods over time. Specifically, officials from three states told us about future plans to change their auto assignment methods. For example, Arizona reviewed its auto assignment percentages at least annually and indicated that the state may adjust its method to recognize plan performance on quality and administrative measures, such as those related to claims processing and grievances. Tennessee officials said the state plans to incorporate plan quality and cost performance into its auto assignment process. We provided a draft of this report to the Department of Health and Human Services for comment. The Department had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Carolyn L. Yocom at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Arizona Health Care Cost Containment System: Mandatory program for aged, disabled (children and adults), children, low-income adults, those dually eligible for Medicare and Medicaid (dual eligibles), and foster care children.
Arizona Long Term Care System: Mandatory program for aged, disabled (children and adults), and dual eligibles, all of whom are in need of a nursing home level of care.
Two-Plan model: Mandatory and voluntary program in select counties for disabled (children and adults), children, and foster care children.
Geographic Managed Care: Mandatory and voluntary program in select counties for aged, disabled (children and adults), children, low-income adults, certain dual eligibles, and foster care children.
County Organized Health Systems: Mandatory program in select counties for aged, disabled (children and adults), children, low-income adults, certain dual eligibles, foster care children, and American Indians/Alaskan Natives.
Managed Medical Assistance: Mandatory and voluntary statewide program for aged, disabled (children and adults), children, low-income adults, dual eligibles, foster care children, and American Indians/Alaskan Natives.
Bayou Health: Mandatory statewide program for disabled (children and adults), children, parents, breast and cervical cancer program population (under 65), low-income adults, foster care children, pregnant women, and aged, blind and disabled adults.
Comprehensive Health Care Program: Mandatory statewide program for aged, disabled (children and adults), children, low-income adults, and foster care children.
HealthChoices: Mandatory statewide program for aged, disabled (children and adults), children, low-income adults, certain dual eligibles, and foster care children.
TennCare: Mandatory statewide program for aged, disabled (children and adults), children, low-income adults, certain dual eligibles, and foster care children.
Apple Health: Mandatory statewide program for aged, disabled (children and adults), children, and low-income adults.
Our eight selected states (Arizona, California, Florida, Louisiana, Michigan, Pennsylvania, Tennessee, and Washington) varied in the methods used to oversee the quality of care provided by contracted managed care organizations (MCOs). See the table below and the discussion that follows for information on the types of quality measures, sanctions, incentives, and accreditation requirements states used and how those methods fit into each state’s broader quality framework. All eight selected states used a combination of Healthcare Effectiveness Data and Information Set (HEDIS) and non-HEDIS measures to assess the quality performance of their participating MCOs. HEDIS is a tool used by health plans to measure performance on various dimensions of care and service, including effectiveness of care, access and availability of care, experience of care, utilization and risk adjusted utilization, and relative resource use. In the 2015 HEDIS, 68 of the 83 measures are applicable to Medicaid. With regard to HEDIS measures, all selected states required their participating MCOs to report at least some HEDIS measures that are applicable to Medicaid. Four of the states required MCOs to report all of the Medicaid-applicable HEDIS measures. All selected states either required MCOs to report on specific non-HEDIS measures, or their contracts allowed states to develop non-HEDIS measures that MCOs may have to report. The non-HEDIS measures states required varied. For example, Arizona listed two non-HEDIS measures related to flu shots for adults above the age of 50, while Florida listed 10 non-HEDIS measures, including but not limited to, the provision of annual lipid profiles, the frequency of human immunodeficiency virus disease monitoring lab tests, and transportation timeliness. Non-HEDIS measures may capture issues similar to those in the HEDIS measures, but in a slightly different manner. For example, there is an adult flu shot HEDIS measure applicable to Medicaid that captures the provision of flu shots for those aged 18 to 64. However, there is no adult flu shot measure applicable to Medicaid for ages 65 and older, and no way within the existing HEDIS measure to distinguish older adults. As such, to capture older adults, Arizona uses two non-HEDIS measures in its Medicaid managed care program: flu shots for adults aged 50-64, and flu shots for adults aged 65 and older. Other types of non-HEDIS measures that states required MCOs to report include Children’s Health Insurance Program Reauthorization Act child and adult core set measures; over- and under-utilization monitoring measures; Agency for Healthcare Research and Quality prevention quality indicators; and other state-defined measures. All but one of our eight selected states set specific standards for performance on one or more quality measures that, if not met, could result in sanctions for MCOs. Sanctions could include requiring MCOs to take corrective actions, imposing financial penalties, or both. Six states—Arizona, California, Florida, Michigan, Pennsylvania, and Tennessee—specified in their contracts minimum requirements for performance measure outcomes that each MCO had to meet. These states specified that MCOs that do not meet the requirements may be subject to corrective action plans, financial sanctions, or other types of sanctions.
The seventh state that set standards for performance on quality measures, Louisiana, required MCOs to demonstrate improvement on performance measures and linked sanctions to failure to achieve that improvement. The remaining state, Washington, did not specify sanctions tied to performance measure outcomes, though the state could impose sanctions for failure to meet contract terms more generally. In imposing sanctions, states generally described a graduated, hierarchical approach, starting with corrective action plans and imposing more severe sanctions if the MCO did not come into compliance with the corrective action plan. Financial penalties may be imposed along with corrective action plans or as a more severe sanction after the corrective action plan. Financial penalties may be structured such that failure to meet a certain threshold percentage on a performance measure requires the MCO to pay a set amount for each percentage point of difference between the standard and the result it reported. For example, Tennessee set a 5 percent threshold for unanswered calls for its MCOs’ nurse triage and advice lines; the state charges MCOs $25,000 per month for each full percentage point above 5 percent. Some states linked failure to meet certain standards on performance measures to intermediate sanctions as outlined in federal regulations. These sanctions allow for appointing temporary management of the MCO; freezing new enrollments, including auto enrollment; allowing beneficiaries to terminate enrollment; and suspending payment for beneficiaries enrolled after the effective date of the sanction. While there was no consistent method of measuring quality across states, most of the selected states used financial incentives as rewards for MCOs meeting performance standards on certain quality measures. Specifically, six of the eight states established incentives for MCOs performing above a certain benchmark or for improving performance on selected measures, as shown in the following examples.
Pennsylvania allocated incentive dollars to each of nine performance measures, and MCOs earned incentives by meeting benchmark performance and improvement targets. In addition, if an MCO performed above the 50th percentile benchmark on diabetes bundle measures, the state awarded a diabetes bundle performance payout.
Tennessee focused its incentives on performance improvement by offering a bonus payment to MCOs for each HEDIS measure on which an MCO demonstrated significant improvement.
Two of the six states that offered incentives—Arizona and Florida—specified that the incentives may be competitive. Arizona assessed MCOs relative to minimum performance standards and awarded a bonus to one or more MCOs for their performance on certain quality measures. Florida’s managed care contract indicated that the state may decide to offer incentives to all high-performing MCOs or to make the high-performing MCOs compete for them; the state may also decide not to offer incentives to its MCOs in a given year. States offering incentives often financed them using capitation payment withholds, in which the state retains a relatively small percentage of the monthly or annual capitation payments (for example, 1 or 2 percent) and later uses it to reward MCOs that performed well on certain performance measures.
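The penalty and withhold mechanics described above reduce to simple arithmetic, as the short sketch below illustrates. Tennessee’s published figures (the 5 percent unanswered-call threshold and the $25,000-per-full-percentage-point monthly penalty) come from the contract provision described above; the capitation figures are assumptions for illustration only.

```python
# Illustrative arithmetic for two financial levers described above.
# Tennessee's figures are as reported; the capitation amounts are assumed.

PENALTY_PER_POINT = 25_000  # dollars per full percentage point, per month
THRESHOLD_PCT = 5.0         # allowed share of unanswered nurse-line calls

def monthly_call_penalty(unanswered_pct):
    """Charge $25,000 for each full percentage point above the threshold."""
    full_points_over = max(0, int(unanswered_pct - THRESHOLD_PCT))
    return full_points_over * PENALTY_PER_POINT

# An MCO with 8.4 percent unanswered calls is 3 full points over: $75,000.
assert monthly_call_penalty(8.4) == 75_000

def apply_withhold(capitation_payment, withhold_rate=0.02):
    """Retain a small share (for example, 1 or 2 percent) of a capitation
    payment; states later return it to MCOs that meet performance targets."""
    retained = capitation_payment * withhold_rate
    return capitation_payment - retained, retained

# A $1,000,000 monthly capitation payment with a 2 percent withhold pays
# about $980,000 up front and sets aside about $20,000 for later awards.
paid_now, at_risk = apply_withhold(1_000_000)
print(round(paid_now), round(at_risk))  # 980000 20000
```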
Six of the eight selected states required participating MCOs to be accredited by a nationally recognized organization that provides an independent assessment of the quality of care provided by the MCO. MCOs that are accredited by these organizations meet quality standards related to various aspects of care, such as consumer protection, case management, and quality improvement activities. The National Committee on Quality Assurance (NCQA) was the most commonly used accrediting organization in the selected states. All six states either named NCQA as a preferred accrediting body or allowed only NCQA accreditation. States also may have allowed MCOs to be accredited through another accrediting organization, such as URAC or the Accreditation Association for Ambulatory Health Care. Officials in some states cited, as one reason for requiring accreditation, that it facilitated comparisons among MCOs because of the consistency of data requirements. The two states that did not require accreditation explained that they were concerned about the financial burden on the MCOs associated with the accreditation process. All of the selected states had a written quality strategy for Medicaid managed care that they submitted to the Centers for Medicare & Medicaid Services (CMS) per a federal requirement to do so. The state quality strategy must include a discussion of performance measures, performance improvement projects, and state quality oversight plans. States are required to submit a revised strategy to CMS whenever significant changes are made. CMS reviews states’ quality strategy documents as submitted and does not require them to be updated within a specified time frame. CMS is proposing to change this; its proposed rule would require states to update their quality strategy documents at least once every 3 years. According to a CMS quality strategy tracking document, some states submit their quality strategy document to CMS annually, while others have not submitted one to CMS for 3 or more years. Among our eight selected states, five had submitted updated versions of their quality strategies to CMS between 2012 and 2014, with three of these five states submitting to CMS annually. The remaining three selected states had not submitted an updated version of their quality strategies to CMS in the last 4 to 8 years, according to CMS’s tracking document. Nationally, the tracking document indicates that 8 of the 39 states with comprehensive, risk-based managed care have not submitted updated quality strategy documents to CMS in the last 3 years. As required by federal law, all of our selected states completed an external quality review report in 2014. In an external quality review, an independent organization specializing in such reviews evaluates the quality, timeliness, and access to health care services provided by MCOs to their Medicaid beneficiaries. External quality review reports include discussions of MCOs’ strengths, areas for improvement, and recommendations, as shown in the examples below. An external quality review report for one of our selected states indicated that the strengths of the MCOs participating in the state’s managed care program were that they demonstrated high levels of compliance with contractual requirements and that they improved their performance on quality measures from previous years.
As an opportunity for improvement, this report also noted that MCOs could work to improve performance on certain HEDIS measures on which they were performing below the 50th percentile. Another state’s external quality review report recommended that MCOs identify barriers affecting children’s access to care after the performance measure assessment showed poor performance on well-child and dental visits. The report recommended increased transportation coordination and expanded office hours, as well as educational efforts to increase beneficiary awareness and understanding of available services. In addition to the contacts named above, Susan Barnidge, Assistant Director; George Bogart; Shamonda Braithwaite; Laura Sutton Elsberg; Giselle Hicks; Drew Long; and Vikki Porter made key contributions to this report.
The importance of managed care in Medicaid—under which states contract with managed care organizations (MCOs) to provide a specific set of services—has increased as states expand eligibility for Medicaid under the Patient Protection and Affordable Care Act (PPACA) and increasingly move populations with complex health needs into managed care. States have flexibility within broad federal parameters to design and implement their Medicaid programs, and therefore play a critical role in overseeing managed care.
GAO was asked to examine managed care expenditures and provide information on certain components of state oversight of Medicaid managed care. In this report, GAO analyzes (1) federal expenditures for Medicaid managed care and the range in selected states' payments made to MCOs; (2) selected states' MLR standards and how they compare with federal standards for other sources of health coverage; and (3) selected states' methods for automatically assigning Medicaid beneficiaries to MCO plans. GAO analyzed federal data on Medicaid expenditures for comprehensive risk-based managed care. GAO selected eight states because they used managed care for some portion of their Medicaid population and were geographically diverse. For these states, GAO reviewed state payment data and documentation, including contracts with MCOs, and interviewed state officials. GAO also reviewed federal laws to describe MLR minimums in Medicare and the private insurance market. The Department of Health and Human Services had no comments on this report.
Federal spending for Medicaid managed care increased significantly from fiscal year 2004 through fiscal year 2014 (from $27 billion to $107 billion), and represented 38 percent of total federal Medicaid spending in fiscal year 2014. Consistent with this national trend, managed care as a proportion of total federal Medicaid spending was higher in seven of eight selected states in fiscal year 2014 compared with fiscal year 2004. Total and average per beneficiary payments by states to MCOs varied considerably across the eight selected states in state fiscal year 2014. For example, total payments ranged from $1.3 billion in one state to $18.2 billion in another, and average payments per beneficiary ranged from about $2,800 to about $5,200.
While not required by federal policy to do so, five of the eight selected states required MCOs to annually meet minimum medical loss ratio (MLR) percentages—standards that ensure a certain proportion of payments are for medical care and, in effect, limit the amount that can be used for administrative cost and profit. These state minimums generally ranged from 83 to 85 percent, similar to the 85 percent minimums established in PPACA for other sources of health coverage. All MCOs in the five states had MLRs in state fiscal year 2014 that were above the state-required minimums.
GAO also found that all eight selected states focused on beneficiary factors, such as assigning a beneficiary to the same managed care plan in which a family member is enrolled, when the state selects a plan for the beneficiary in the absence of the beneficiary choosing a plan—referred to as auto assignment. States also considered plan performance, for example, on quality measures, and program goals, such as achieving a certain distribution of enrollment across plans. Auto assignments of beneficiaries ranged from 23 to 61 percent of managed care enrollees across the seven selected states that tracked such data.
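The MLR standards summarized above reduce to a ratio test. The sketch below shows the basic check under the simplifying assumption that the MLR is medical expenses divided by capitation revenue; actual contract definitions of what counts in the numerator and denominator vary by state, and the dollar figures here are assumptions for illustration.

```python
# Minimal sketch of a medical loss ratio (MLR) check. Assumes the simplest
# definition -- medical expenses divided by capitation (premium) revenue;
# actual state contract definitions of the numerator and denominator vary.

def meets_mlr_minimum(medical_expenses, capitation_revenue, minimum=0.85):
    """Return the MLR and whether it meets the state minimum (the selected
    states that set one generally used 83 to 85 percent)."""
    mlr = medical_expenses / capitation_revenue
    return mlr, mlr >= minimum

# An MCO paying $880 million in medical claims on $1 billion in capitation
# revenue has an MLR of 0.88, above an 85 percent minimum. (Figures assumed.)
mlr, compliant = meets_mlr_minimum(880_000_000, 1_000_000_000)
assert compliant and round(mlr, 2) == 0.88
```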
CDC is one of the major operating components of HHS, which acts as the federal government’s principal agency for protecting the health of all Americans. CDC serves as the national focal point for developing and applying disease prevention and control, environmental health, and health promotion and education activities designed to improve the health of Americans. CDC is also responsible for leading national efforts to detect, respond to, and prevent illnesses and injuries that result from the release of biological, chemical, or radiological agents. CDC was originally established in 1946 as the Communicable Disease Center with the mission to help state and local health officials in the fight against malaria, typhus, and other communicable diseases. Over the years, CDC’s mission and scope of work have continued to expand in concert with public health needs. Commensurate with its increased scope of work, CDC’s budget and staff have grown. In 1946, the agency had a budget of about $1 million and over 360 full-time equivalent (FTE) staff. In fiscal year 2003, CDC managed a budget of almost $7 billion and had over 8,800 FTEs. (See fig. 1.) To achieve its mission, CDC relies on an array of external partners, including public health associations, state and local public health agencies, schools and universities, nonprofit and volunteer organizations, international health organizations, and others. CDC collaborates with these partners to monitor the public’s health, detect and investigate disease outbreaks, conduct research to enhance prevention, develop and advocate public health policies, implement prevention strategies, promote healthy behaviors, foster safe and healthful environments, and provide training. CDC provides varying levels of support to its partners through funding, technical assistance, information sharing, and personnel. In fiscal year 2002, CDC awarded 69 percent of its total budget to partners through financial assistance, such as cooperative agreements and grants. The majority of these funds—about 75 percent—were disbursed to state health departments. The remaining 25 percent were disbursed to various other public and private entities. CDC’s workforce spans 170 occupations, including physicians, statisticians, epidemiologists, laboratory experts, behavioral scientists, and health communicators. Seventy-eight percent of CDC’s workforce consists of permanent civil service staff. U.S. Public Health Service Commissioned Corps employees account for 10 percent of the workforce, and temporary employees make up the remaining 12 percent. Most of CDC’s staff are dispersed across over 30 locations in Atlanta, Georgia. CDC also has more than 2,000 employees at other locations in the United States. (See fig. 2.) Additional CDC staff are deployed to more than 37 foreign countries, assigned to 47 state health departments, and dispersed to numerous local health agencies on both short- and long-term assignments. CDC’s organization consists of the Office of the Director (OD) and 11 centers. OD consists of the CDC Director’s office and 12 separate staff offices. (See fig. 3.) OD manages and directs the agency’s activities; provides overall direction to, and coordination of, its scientific and medical programs; and provides leadership, coordination, and assessment of administrative management activities.
The individual OD staff offices are responsible for managing crosscutting scientific functions, such as global health and minority health, as well as support functions, including financial management, grants management, human capital, and information technology. Each of CDC’s centers interacts with the agency’s external partners by providing various means of assistance, such as funding and training. Each center has an organizational structure that includes a director’s office, programmatic divisions, and, in most cases, branches. The centers also have their own budgets, which they administer. Eight of the centers have their own mission statements, and several have developed their own strategic plans. CDC also performs many of the administrative functions for the Agency for Toxic Substances and Disease Registry (ATSDR). The Director of CDC serves as the Administrator of ATSDR, which was established within the Public Health Service by the Comprehensive Environmental Response, Compensation, and Liability Act of 1980. ATSDR works to prevent exposures to hazardous wastes and environmental spills of hazardous substances. Headquartered in Atlanta, the agency has 10 regional offices and an office in Washington, D.C. It also has a multidisciplinary staff of about 400 employees. For many years, ATSDR has worked closely with CDC’s National Center for Environmental Health (NCEH), which is responsible for providing national leadership in preventing and controlling disease associated with environmental causes. To foster greater efficiency, NCEH and ATSDR signed a statement of intent in January 2003 to consolidate their administrative and management functions for financial savings. In August 2003, CDC’s OD announced HHS’s approval for a single director to lead both ATSDR and NCEH. Final approval of this consolidation effort was completed on December 16, 2003. The restructuring of the executive management team in CDC’s top office, despite certain merits, has shortcomings with respect to agency oversight. A positive OD change made in 2003 was the assignment of oversight authority for the agency’s operations units, such as financial management and information technology, to an OD official other than the agency’s Director. However, no OD official, other than the Director, has explicit responsibility for overseeing the centers’ programmatic work. Another positive change made in 2003 was to align OD management team positions with broad agency mission themes that cut across individual programs and organizational units. However, despite the intention for the themes to foster collaboration among CDC’s 11 centers and with its external partners, clear connections between the management team’s deputy positions, the mission themes, and agency mission activities have not been made. In January 2003, as part of the agency’s transformation efforts, CDC’s Director announced an OD management team consisting of five senior officials, including a chief operating officer (COO), two deputies, a senior advisor, and a Chief of Staff. A beneficial change in OD’s structure was the creation of a COO with clear oversight authority over the agency’s operations units, positioning OD to oversee these areas appropriately. However, no similar position or combination of positions has been established in OD to oversee the programs and activities of the centers, as no one below the Director on OD’s management team has direct line authority for the centers’ programmatic work.
This also holds true for the three officials added to the OD management team as of fall 2003—the Director of the CDC Washington Office, the Senior Advisor to the Director, and the Associate Director for Terrorism Preparedness and Response. (See fig. 4.) A look at the roles of OD’s management team highlights a structural weakness in oversight authority for the centers’ programmatic work.
COO. This official has oversight responsibility for the agency’s core business operations, including financial management, procurement and grants, human resources, and information technology, among others. CDC’s COO is consistent with a commonly agreed-upon governance principle that “a single point” within an agency should have the responsibility and authority for the agency’s management functions. It also parallels the experience of successful organizations that place this type of management position among the agency’s top leadership.
Deputy Director for Science and Public Health and Deputy Director for Public Health Service. These officials function largely as technical advisors, working with the centers on various issues but having no oversight responsibility for them. Five OD offices report directly to the Deputy Director for Science and Public Health. No offices report directly to the Deputy Director for Public Health Service.
The Senior Advisor for Strategy and Innovation. This advisor is responsible for the agency’s strategic planning efforts and, apart from the official’s own office staff, has no direct reports.
Chief of Staff. The Chief of Staff serves as a principal advisor and assistant to the Director and is responsible for OD’s day-to-day management. This responsibility includes routing to the appropriate OD or center official the agency’s incoming inquiries or requests from the Congress, the administration, and the public health community. Two OD offices report directly to the Chief of Staff—the Office of the Executive Secretariat and the Office of Program Planning and Evaluation.
Director, CDC Washington Office. This official manages the CDC Washington Office, which acts as a liaison between CDC and its Washington-based stakeholders, which include other agencies, associations, policymakers, and others interested in public health.
Senior Advisor to the Director. This advisor is responsible for providing research, analysis, outreach activities, and strategy formulation to meet the needs of the Director and, apart from the official’s own office staff, has no direct reports.
Associate Director for Terrorism Preparedness and Response. This official’s responsibilities include managing OD’s Office of Terrorism Preparedness and Emergency Response (OTPER) as well as CDC’s national bioterrorism program.
As of November 1, 2003, a total of 20 officials, including the 11 center directors, reported to the CDC Director. (See fig. 5.) Whether this structural arrangement can support effective oversight of the agency’s programmatic work is uncertain, given the growth in the demands on the CDC Director’s time along with the likely change in directors over time. Since the first West Nile virus outbreak in 1999, CDC has responded to a steady stream of high-profile public health emergencies, including the anthrax incidents and the more recent outbreak of SARS. (See fig. 6.) Responding to these events has required the focused attention of the CDC Director.
In addition, routine demands on the Director’s time—such as testifying before the Congress, coordinating with HHS officials, and meeting with other national and international public health officials—subtract from the time the Director has to oversee the centers, which perform the core of CDC’s mission work. The typical change in politically appointed agency heads every several years is another factor that makes center oversight solely by the Director a management vulnerability. CDC has had four directors, including the current one, since 1990. While there is nothing uncommon or irregular about such change, it is significant from a management perspective, as agency heads typically need time to acclimate to their new responsibilities and may not stay in office long enough to institutionalize management improvements. Despite the restructuring of OD to reflect agency mission themes, this effort falls short of its intention, owing to a lack of clarity and definition in the roles of the OD deputies. CDC’s Director established five mission themes, or goals—science, strategy, service, systems, and security. The intention was to acknowledge that shared goals cut across the agency’s diverse centers and that viewing the work in this way could foster collaboration. The new OD structure announced in January 2003 aligned executive management positions with each of the themes. (See table 1.) The distinction between the roles of the two deputy positions—Deputy Director for Science and Public Health and Deputy Director for Public Health Service—has not been clearly made. The role of the Deputy Director for Science and Public Health is to serve as OD’s contact point to the centers in areas including agency reports, guidelines and recommendations, and outbreak investigations. However, this deputy’s role is not distinct from that of the Deputy Director for Public Health Service, who serves as OD’s liaison to public health agencies and other external partners as well as OD’s contact point for certain scientific issues, including HIV policies, occupational safety and health policies, injury and violence prevention policies, and programs to address public health disparities. Addressing public health disparities, however, is the mission of CDC’s Office of Minority Health, which reports to the other deputy—the Deputy Director for Science and Public Health. Furthermore, some center officials said that regarding science-related issues involving CDC’s external partners, they were uncertain whether the primary point of contact should be the Deputy Director for Science and Public Health or the Deputy Director for Public Health Service. OD has implemented several changes in its approach to managing the agency’s response to public health emergencies, including the creation within OD of an emergency operations office that, during the SARS outbreak, successfully coordinated the response efforts of CDC’s various centers and staff offices. However, concerns remain about OD’s management of ongoing agency activities, as few systems are in place to provide top agency officials with essential oversight information or to foster collaboration among the centers. In recognition of past problems, OD initiated several structural and procedural changes that improved its ability to oversee the agency’s response to public health emergencies. 
Specifically, the 2001 anthrax incidents revealed weaknesses in the agency’s ability to coordinate internal response efforts and in its efforts to communicate with the nation’s public health agencies, medical communities, and other external partners—a problem that had also been identified during the response to the first West Nile virus outbreak in 1999. Agency officials and external partners recognized several problems that needed to be addressed:
A top OD official we spoke with noted that during the anthrax incidents, the agency leadership lacked formal protocols for making crisis management decisions. This official stated that over 100 staff attended internal information briefings; in this official’s view, the volume and diversity of information presented to agency management at these briefings resulted in “information overload” that impeded timely decision making.
An internal CDC document noted that as of October 2001, CDC was running four separate emergency operation centers, resulting in an uncoordinated command and control environment. Prior to September 11, 2001, CDC operated two loosely connected emergency operations centers—one in NCEH and one in ATSDR. After the terrorist attacks on September 11, 2001, CDC established two additional emergency operations centers in the National Center for Infectious Diseases and the Public Health Practice Program Office. The internal document asserted that after the subsequent anthrax incidents, CDC’s multiple emergency operation centers could not provide the agencywide coordinated effort needed to address a crisis.
A variety of external partners we spoke with criticized CDC’s response to the anthrax incidents for its failure to quickly communicate vital information to the public and to the health care workers responsible for diagnosing and treating suspected cases. Likewise, we recently reported that although CDC served as the focal point for communicating critical information during the response to the anthrax incidents, it experienced difficulty in managing the voluminous amounts of information coming into the agency and in communicating with public health officials, the media, and the public.
A top OD official contended that during the response to the anthrax incidents, the agency would have had difficulty responding to another public health emergency, since key personnel and resources drawn from the various centers and OD staff offices were consumed by this effort.
In response to these weaknesses, CDC instituted several organizational changes. In August 2002, CDC created OTPER within OD to be headed by the Associate Director for Terrorism Preparedness and Response, who reports to the CDC Director. The office is responsible for coordinating agencywide preparedness and response efforts among the agency’s centers and its partners. Agency officials told us that the elevation of this responsibility to OD was necessary because of unsuccessful past efforts to ensure coordination among the centers. This office also has responsibility for specific aspects of information systems, training, planning, communications, and preparedness activities designed to facilitate the agency’s emergency response effectiveness. In addition, it provides financial and technical assistance for terrorism preparedness to state, local, and U.S. territorial health departments. In fiscal year 2002, OTPER disbursed about $1 billion in financial assistance to these partners.
To improve the agency’s response effectiveness, OTPER developed management decision and information flow models, which outline who will be involved and how the emergency will be handled from strategic, operational, and tactical perspectives. According to the Associate Director for Terrorism Preparedness and Response, these models were used to manage the emergencies involving SARS, monkeypox, and potential terrorist acts associated with the war in Iraq. OTPER also drafted CDC’s national public health strategy for terrorism preparedness and response, including an internal management companion guide on implementation. CDC intends to distribute this document to the agency’s external partners. OTPER manages CDC’s recently constructed emergency operations center, where all aspects of the agency’s emergency response efforts are coordinated. This center is intended to provide a central command-and-control focal point and eliminate the need to coordinate efforts of multiple centers during emergencies. According to the Associate Director for Terrorism Preparedness and Response, the emergency operations center is operational around the clock and has a small number of dedicated staff. In times of emergency, subject matter and communication experts from the centers are temporarily detailed for 3 to 6 months as needed. For example, during the SARS response, individuals from the National Center for Infectious Diseases, the National Institute for Occupational Safety and Health, the Epidemiology Program Office, and the Global Health Office, among others, staffed the emergency operations center and returned to normal duties at predetermined intervals to mitigate any major impact on routine public health work. This logistical approach to staffing and resources was intended to enable CDC to respond to multiple public health emergencies, if needed. Within OD, the Office of Communication works with OTPER to facilitate external communications during public health emergencies. In August 2002, this office established an emergency communication system to enhance CDC’s ability to disseminate timely and reliable information. This system consists of 10 teams that include agency staff from various units who can be called on to act in concert during public health emergencies. Each team has a particular focus—such as media relations, telephone hotline information, Web site updates, and clinician communication. In June 2003, CDC named an Emergency Communication System Coordinator to provide day-to-day oversight of the teams. Despite improvements to crisis management, OD faces challenges in managing its nonemergency public health work. Typically, the attention of OD’s top officials has been focused on emergent public health issues, such as infectious disease outbreaks, leaving little time for focusing on nonemergency public health work and agency operations. OD has also operated in an environment that until recently had not significantly evolved from the time when the agency was smaller and its focus was narrower; outside of routine management meetings, OD’s communication with the centers was largely informal and relied substantially on personal relationships. As a result, the centers have operated with a high degree of independence and latitude in managing their operations. OD has few systems in place with which to track agency operations and programmatic activities.
As of summer 2002, OD management officials received only limited management information regularly—monthly reports on budget obligations, a weekly legislative report, a weekly media relations report, and a weekly summary workforce report. Over the past year, OD has taken steps to obtain additional management information and has begun to track some aspects of center operations. As of April 2003, a weekly summary report on congressional activities that supplements the weekly legislative report has been provided to OD management team officials. In fall 2003, OD began compiling a weekly list of selected CDC publications, correspondence, and activities. The COO began monitoring the centers’ travel and training expenditures on an ad hoc basis after conducting a benchmarking analysis on the centers’ fiscal year 2002 expenditures in these areas. Previously, scrutiny of these expenditures was at the discretion of center management. OD has not made similar efforts to monitor the agency’s programmatic work. Outside of routine management meetings with the centers, OD continues to lack formal reporting systems needed to track the status of the centers’ public health programs and develop strategies to mitigate adverse consequences in the event that some activities fall behind schedule. OD relies on its issues management process as one way to stay informed of the centers’ important but nonemergency issues. Historically, the center directors, accustomed to operating autonomously, had little precedent for raising issues for OD management input. In January 2003, OD instituted the issues management process, which, among other things, sought to encourage center officials to elevate significant matters that are not national emergencies but that warrant timely input from the agency’s senior managers. Under this process, a center official seeking management input on an issue of concern contacts OD’s Chief of Staff, who is responsible for coordinating agency input on the issue. The Chief of Staff identifies the appropriate senior officials for handling the concern and tracks actions taken until the matter is concluded. Emerging issues that centers have raised through this process include the agency’s HIV prevention initiatives, preparedness activities for the West Nile virus, and wild animal trade restrictions subsequent to the monkeypox outbreak. According to the Chief of Staff, the issues management process has provided an effective communication channel for the center directors, as it has enabled them to have regular contact with OD management and the CDC Director, as needed. As an effective OD oversight tool, however, the issues management process is incomplete. Under this process, OD has not established formal criteria—in the form of reporting requirements—that would instruct centers on what types of issues warrant management input and the time frames for reporting them. Instead, OD relies largely on the center directors’ discretion to determine which nonemergency public health issues are made known to the agency’s top management. In this regard, the issues management process remains essentially a bottom-up approach to obtaining information on CDC center activities. Coupled with a lack of management reporting systems, this approach places OD in a reactive rather than leadership position with respect to the centers and the public health work they manage. 
While OD has taken steps to improve the centers’ ability to effectively collaborate during emergencies, more needs to be done for collaboration on nonemergency public health work. The centers have historically not coordinated well on nonemergency public health issues common to multiple centers—a situation we reported on in February 1999. OD officials have also acknowledged that the centers operate as “silos,” characterizing the isolated manner in which these separate but related organizational components operate. OD has taken several steps to foster center collaboration on nonemergency public health work. Conceptually, OD’s emphasis on the five themes—science, service, systems, strategy, and security—is part of an approach to integrate the agency’s public health work across the centers’ respective missions and functions. In August 2003, OD announced the establishment of two governing bodies that encourage center collaboration—the Executive Leadership Team and the Management Council. The Executive Leadership Team, which includes the OD management team and each of the centers’ directors, meets biweekly and seeks to ensure that coordination occurs across centers and that the centers’ interests are not omitted when key decisions are being considered by the agency’s top officials. The Management Council, which also meets biweekly, focuses on crosscutting issues involving agency operations, such as information technology. The council is chaired by OD’s COO and is composed of staff office officials and representatives from each of the centers. In providing recommendations to the Executive Leadership Team on agency operations issues, such as the development of performance metrics and the consolidation of the agency’s information technology infrastructure, the council has the opportunity to foster more consistent management practices across the agency. OD officials acknowledged that along with these efforts to promote collaboration, additional initiatives are needed to ensure that collaboration among the centers becomes a standard agency practice. Such efforts by leading organizations to institutionalize collaboration include, for example, the design of cross-functional, or “matrixed,” teams; pay and other incentive programs linked to achieving mission goals; and performance agreements for senior executives that specify fostering collaboration across organizational boundaries. In recent years, CDC’s OD has operated without an up-to-date agencywide planning strategy with which to set agency mission priorities and unify the work of its various centers. In June 2003, OD initiated an agencywide strategic planning process. Shortly before this, in April 2003, OD began developing a human capital plan for current and future staffing priorities, but the plan has been put on hold until the agencywide planning strategy has been established. CDC has a strategic plan that has not been updated since 1994. Consequently, this plan does not reflect the agency’s more recent challenges, such as preparing for terrorism-related events and implementing the civilian portion of the national smallpox vaccination campaign. In the absence of a current long-term strategy, OD has been establishing priorities within its diverse mission through the annual processes for developing the budget and updating goals for the agency’s annual performance report as required by the Government Performance and Results Act of 1993 (GPRA).
This method for setting priorities is not effective for long-term planning, as its focus is on funding existing activities one year at a time rather than examining agency goals and performance from a broader perspective. CDC’s need for a comprehensive strategic plan is substantial, as OD must set priorities based on disease prevention and control objectives inherent in the agency’s mission as well as any additional public health priorities of HHS and the Congress. For example, in addition to addressing public health program priorities, such as obesity and diabetes, CDC must also address administration management priorities as directed by HHS. Moreover, the agency must keep a mission focus when coordinating with its external partners—largely, state, local, and international public health agencies. Although CDC relies heavily on these and other external partners to achieve its mission, a mutual understanding of the agency’s priorities may be lacking. For example, some of the state and local public health officials we spoke with were unable to articulate the agency’s top priorities aside from bioterrorism preparedness. CDC officials we spoke with similarly acknowledged the need to better communicate priorities to external partners. Many of the centers have their own mission statements and a few also have strategic plans to address individual center goals and priorities—a reflection of the centers’ independent focus. In the absence of an agencywide plan, however, OD lacks an effective management tool to ensure that the agency’s priorities are being addressed without undue overlap or duplication. In July 2003, participants in preliminary strategic planning discussions acknowledged poor cooperation across centers and the need for improvement in collaboration. In June 2003, OD initiated an agencywide strategic planning process called the Futures Initiative, which is intended to involve all levels of staff and some of the agency’s partners in developing long-range goals and associated performance measures. The agency’s strategic planning efforts will be focused on 10 topics: the public health system, customers’ needs, research capacity, communication and information priorities, future resource needs, government partner relationships, measuring results, intra-agency coordination, programs and grants portfolio, and global health issues. In developing the strategy, OD intends to incorporate the agency’s mission and vision, the federal Healthy People 2010 goals, HHS’s strategic goals and objectives, and selected public health reports. However, at the time of our review, OD had not clearly linked the 10 topics and the agency’s five mission themes of science, strategy, service, systems, and security. To guide and manage the agency’s planning efforts, OD created a steering committee, which is led by the agency’s Director and consists of a small group of senior officials from OD and the centers. This committee makes recommendations to the Executive Leadership Team for decision making. Under the committee, four initial work groups, consisting of center representatives and some external partners, have been established to examine the following topics: customers and partners, health systems, health research, and global health. CDC’s overall strategic planning process has three phases. In the first phase, CDC will evaluate the agency’s overall direction and set priorities. 
In the second, it will examine the agency’s organizational structure and processes and their alignment with the strategic plan’s goals and begin implementation. The last phase will focus on measuring results and implementing the plan at all agency levels—both management and staff. OD plans to begin implementing the strategy in spring 2004. OD intends to communicate the results of the planning process internally to staff and externally to agency partners through CDC’s Web site and through a variety of meetings and other venues. According to the Senior Advisor for Strategy and Innovation, priority issues and programs identified through the strategic planning process will have goals, action plans, and outcome measures for tracking and accountability. This official also stated that the expected result is that the finished “strategy” will act as a framework for the individual centers to align with and will guide CDC’s priority setting, budget formulation, and annual development of GPRA goals. For OD to effectively lead the agency’s efforts in implementing its long-term strategy, it will be important to link the performance expectations of senior management to the agency’s organizational goals. OD has been operating without a comprehensive human capital plan with which to link workforce needs to agency priorities. The agency has several separate initiatives under way in response to administration directives regarding human capital management. However, in December 2002, HHS criticized these efforts as being overly focused on the centers and lacking an agencywide focus. In April 2003, OD began developing a comprehensive, long-term human capital plan. In July 2003, OD suspended the development of this plan until further progress could be made on the agency’s strategic planning process. As of November 2003, OD had neither established a date for resuming human capital planning nor determined how it would be coordinated with the agency’s strategic planning efforts. Furthermore, CDC is facing several human capital challenges that underscore the need for a strategy to address succession planning, which involves preparing for the loss of key staff and their associated skills. Leading organizations use succession planning and management as a tool that focuses on current and future workforce needs in order to meet their mission over the long term. Our analysis of CDC’s 2003 personnel data showed that—similar to the rest of the federal government—about 30 percent of the agency’s workforce is eligible to retire within the next 5 years. We also found that 33 percent of its senior managers and supervisors will be eligible for retirement within this time frame. Thus, within several years, the agency could potentially lose a key portion of its human capital that possesses both managerial and technical expertise. In addition, by the end of fiscal year 2005, CDC and other HHS agencies are expected to achieve a departmentwide 15 percent reduction in administrative management and support positions. HHS mandated that this reduction not result in the involuntary separation of employees and that affected resources be redirected to programmatic public health work. The implications for CDC are that within a 2-year time frame, CDC must redirect 573 administrative positions from support activities to frontline public health program activities. In some cases, this would involve redirecting administrative staff to program work.
However, this will pose a challenge for CDC, as the agency does not maintain a repository of its employees’ skills, which is important to ensure appropriate employee placement. HHS has also directed each of its agencies to assume no growth in the number of FTEs beginning with the fiscal year 2005 budget formulation process and to include a 5 percent FTE reduction option in their budget submissions. OD has taken modest steps toward succession planning. For example, CDC participates in HHS’s program to train and mentor emerging leaders. CDC’s Director has also emphasized the importance of identifying future leaders within the agency and has made this issue a standing agenda item in routine management meetings with center officials. To forecast workforce needs, in August 2002, the agency produced a report of attrition for its offices and centers. Currently, CDC’s managers can access the most recent attrition data by querying a Web-based personnel information system. However, OD is limited in its ability to conduct targeted succession planning or promote greater retention, as it does not track certain key personnel information. For example, although resignations in calendar year 2002 accounted for a higher percentage of the agency’s attrition than retirement (30 percent compared with 20 percent), CDC does not systematically document the reasons for resignations, either through standard “exit interviews” of employees who leave the agency or some other means. This lack of documentation limits OD’s ability to conduct comprehensive workforce planning, which includes strategies for retaining an organization’s workforce for meeting future needs. The considerable succession planning challenges that the agency faces argue for greater OD leadership over human capital planning. Such leadership would be consistent with the effective human capital planning actions of six federal agencies cited in our April 2003 report on this subject. The report noted, among other things, the importance of including human capital leaders in key agency decision making and the establishment and communication of a strategic vision by human capital leaders. Currently, CDC does not have, as envisioned in these reported best practices, a top-level leadership position focused on CDC’s human capital efforts. To better position CDC as it grows and evolves, OD has embarked on a number of changes to improve the agency’s management and planning efforts. While some of these changes have improved the agency’s ability to respond to recent public health emergencies, OD continues to face challenges in overseeing its ongoing, nonemergency public health work. First, a weakness in oversight of the centers exists, as only the CDC Director has line authority over them, and it is uncertain whether this arrangement provides for sufficient top management oversight of the centers’ programs and activities. In addition, the roles of OD’s two deputy directors lack the clarity needed for those seeking the appropriate OD points of contact. Second, OD lacks sufficiently systematic information to track agency operations or the centers’ core public health programs—placing agency management in a reactive rather than leadership position. Despite efforts made to encourage a better information flow between OD and the centers, the reporting of important but nonemergency issues remains largely at the discretion of the centers. 
Furthermore, efforts to foster collaboration among centers for routine public health work have been made, but little has been done to institutionalize such collaboration and avoid undue overlap or duplication. Third, OD is taking steps to manage the agency strategically, but key planning tools are not fully in place. A recently announced strategic planning process is intended to identify and communicate the agency’s optimal structure, processes, and performance measures. A human capital plan was initiated in April 2003, but this effort has been postponed while the strategic planning process gets under way. As of November 2003, no time frames had been established for resuming the development of the human capital plan or coordinating it with the strategic planning process. The newness of the agency’s strategic planning process and stalled workforce planning efforts argue for greater leadership from OD to continue and coordinate both efforts. To improve OD’s management of CDC’s nonemergency mission priorities, we recommend that the CDC Director take the following three actions:
realign and clarify oversight responsibility for the centers’ programmatic work at a level below the Director, including clarifying the roles of OD’s deputy directors;
ensure that reporting requirements and tracking systems are developed for OD to routinely monitor the centers’ operations and programmatic activities; and
develop incentives to foster center collaboration as a standard agency practice.
We also recommend that the CDC Director take the following two actions:
ensure that the agency’s new strategic planning process will involve CDC employees and external partners to identify agencywide priorities, align resources with these priorities, and facilitate the coordination of the centers’ mission-related activities; and
ensure that the agency’s human capital planning efforts receive appropriate leadership attention, including resuming human capital planning, linking these efforts to the agency’s strategic plan, and linking senior executives’ performance contracts with the strategic plan.
In its written response to a draft of this report, CDC stated that it is committed to continuing the positive changes we highlighted in the report and agreed that challenges remain—especially for ensuring program accountability. CDC acknowledged that continued oversight from OD is critical to ensure high-quality management practices and scientific excellence. The agency further emphasized that it is in the early stages of a multiyear process of change. CDC stated that ensuring program accountability is a significant challenge that it takes most seriously as steward of the public’s trust and funding. The agency agreed to evaluate our recommendation to realign and clarify oversight for the centers’ programmatic work at a level below the Director in light of the management changes the agency has already undertaken. CDC also stated that it is working to institute formal reporting requirements and tracking systems that monitor center activities with special emphasis on program outputs, outcomes, and impacts. In addition, CDC stated that it continues to seek ways to strengthen center collaboration. The agency also agreed with our recommendation regarding its strategic planning process and provided information on how it has involved both internal employees and external partners.
CDC concurred that human capital planning is critically important and stated that it will link human capital planning and deployment to its strategic plan, and appropriately connect the performance contracts of its senior executives with the developing strategic plan. CDC also provided technical comments, which we incorporated as appropriate. CDC’s written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of HHS. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101 or Bonnie Anderson at (404) 679-1900. Hannah Fein, Cywandra King, and Julianna Williams also made key contributions to this report. To assess the Centers for Disease Control and Prevention’s (CDC) executive management structure, we analyzed past and current organizational structures and reporting arrangements. We interviewed the agency’s Director about the basis of the management reorganization and the roles of the officials in the Office of the Director’s (OD) management team. We also interviewed the consultant who worked with agency management to help develop the new OD structure. To identify changes resulting from the reorganization, we spoke with past and current OD executive management officials to discuss their roles and responsibilities, and we reviewed the position descriptions for these officials. To ascertain the centers’ understanding of the roles of the OD management team, we interviewed management officials from the following six centers: National Center for Chronic Disease Prevention and Health Promotion; National Center for Environmental Health; National Center for Health Statistics; National Center for Infectious Diseases; National Center for HIV, STD, and TB Prevention; and Public Health Practice Program Office. We also interviewed management officials at the Agency for Toxic Substances and Disease Registry, which functions similarly to CDC’s centers. To assess the demands on the Director’s time, we identified high-profile public health events and emergencies since the first West Nile outbreak in 1999. We also analyzed the Director’s calendar for the 7-month period covering January 1, 2003, through July 27, 2003. To evaluate OD’s approach to managing the agency’s response to public health emergencies, we looked at CDC’s emergency infrastructure and communication processes. To identify changes CDC implemented to improve its performance in this area, we interviewed senior management officials within OD, including the Associate Director for Terrorism Preparedness and Response. We reviewed documentation that included the agency’s decision models, its national public health strategy for terrorism preparedness and response, and information about the Office of Terrorism Preparedness and Emergency Response. We also reviewed documentation about the agency’s past emergency operations centers as well as the recently constructed operations center, including how it is staffed during times of emergency. To learn about CDC’s emergency communication system, we interviewed the Director of the Office of Communication and reviewed pertinent documentation on the various communication teams. We also spoke with some of CDC’s partners to obtain their views on how well the agency communicates during public health emergencies. 
To assess OD’s approach to managing routine agency operations, we met with OD executive management officials to determine the frequency and types of communications among them. We also met with management officials in six of the centers to discuss the frequency and type of communications between them and OD. To identify the type of management information OD received, we obtained copies of periodic management reports. We also obtained a list of all management meetings, including purpose, attendees, and frequency. We observed several management meetings, including an OD planning meeting, a senior staff meeting, and an issue briefing. We also attended agencywide staff meetings. In addition, we spoke with senior officials of the following OD staff offices: CDC Washington Office; Office of Communication; Financial Management Office; Procurement and Grants Office; Human Resources Management Office; Management Analysis and Services Office; and Office of Program, Planning, and Evaluation. We discussed with these officials the functions of their offices. We met with the Chief of Staff to discuss the issues management process, which the agency uses to manage issues requiring OD’s attention, and its use by agency officials. We obtained documentation of the corresponding issues tracking system as well as a list of issues that have been or are going through the process. To discuss how well the centers collaborate with one another, we met with management officials within OD to obtain their views and to identify steps taken by OD to improve the level of cooperation. We also obtained the views of some of the agency’s partners, who interact with multiple centers. To determine how CDC collaborates with its partners, we interviewed over 30 officials of state and local health departments, health-care-related associations, nonprofit organizations, private industry, schools of public health, and others, such as past CDC directors. We also interviewed the Deputy Director of Public Health Service to discuss how this official interacts with the agency’s partners. In addition, we reviewed relevant documentation, including an internal assessment of CDC’s customer service practices. To identify OD’s approach for setting the agency’s priorities, we interviewed senior management officials within OD and reviewed relevant documentation, including the agency’s 1994 strategic plan. In addition, we spoke with some of the agency’s partners to determine how CDC communicates its priorities to them. To learn about CDC’s recently implemented strategic planning approach, we interviewed CDC’s Senior Advisor for Strategy and Innovation and reviewed extensive documentation regarding this effort. We also attended agency meetings, which introduced the strategic planning process to both CDC staff and some of its advisors. We interviewed officials in CDC’s human resource office to discuss the agency’s workforce planning efforts. We also reviewed relevant documentation, including internal workforce planning reports, reports to the Department of Health and Human Services (HHS), feedback from HHS, and analyses performed by the agency’s contractor for the development of a human capital plan. We obtained and analyzed agency data on overall attrition and retirement eligibility. We also calculated retirement eligibility specifically for management-level staff. We discussed the limitations of the data with the appropriate CDC official and determined that the data were suitable for our use. 
Furthermore, we analyzed HHS directives that will potentially affect the size and composition of CDC’s workforce and discussed their implications with OD management officials.
The scope of work at the Centers for Disease Control and Prevention (CDC) has evolved since 1946 from a focus on communicable diseases, like malaria, to a wide and complex range of public health responsibilities. The agency's Office of the Director (OD) faces considerable management challenges to ensure that during public health crises the agency's nonemergency but important public health work continues apace. In 2002, the agency's OD began taking steps aimed at organizational change. GAO has observed elsewhere that major change management initiatives can take at least 5 to 7 years. In this report, GAO examined the extent to which organizational changes have helped balance OD's oversight of CDC's emergent and ongoing public health responsibilities. Specifically, GAO examined OD's (1) executive management structure, (2) approach to overseeing the agency's work, and (3) approach to setting the agency's priorities. The management team in CDC's top office--OD--is undergoing a structural change designed to provide a new approach to managing the agency's public health work. Through this effort, CDC has taken steps that have merit. For example, OD established a Chief Operating Officer position with clear oversight authority for the agency's operations units, such as financial management and information technology. However, a significant oversight weakness remains: there is no position or combination of positions on OD's management team below the Director's level to oversee the programs and activities of 11 centers that perform the bulk of the agency's public health work. Only CDC's Director has line authority for the centers, and the extraordinary demands on the Director's time associated with public health emergencies and other external events make the practicality of this oversight arrangement uncertain. Another of OD's structural initiatives was to align OD management team positions with broad mission "themes," or goals, that cut across the centers' institutional boundaries. The intent was to foster among the 11 independent centers a more integrated approach to performing the agency's mission. This purpose may be difficult to realize, however, as connections between certain themes and associated OD positions are not sufficiently clear. OD has made improvements in its ability to oversee the agency's response to public health emergencies--including the creation of an emergency preparedness and response office and the development of an emergency communication system--but concerns remain about OD's oversight of nonemergency public health work. OD's efforts to monitor the activities of the centers are not sufficiently systematic. For example, few formal systems are in place to track the status of centers' operations and programmatic activities. Although OD has a process for center officials to elevate important issues of concern, the information flow under this process is largely center-driven, as the subjects discussed are typically raised at the discretion of the center officials. Similarly, OD's efforts to foster coordination among the centers fall short of institutionalizing collaboration as standard agency practice. The planning tools that OD needs to set agency priorities and address human capital challenges are under development. In recent years, OD has operated without an up-to-date agencywide planning strategy with which to set mission priorities and unify the work of CDC's various centers. In June 2003, OD initiated an agencywide strategic planning process. 
In a separate planning effort initiated in April 2003, CDC began working on a human capital plan for meeting the agency's current and future staffing needs. This effort has been suspended while the strategic planning process gets under way, and no time frames have been established for resuming its development. At the same time, agency attrition and future limits on workforce growth suggest that agency leadership may be needed to ensure that workforce planning occurs expeditiously.
ANCSA created 12 regional ANCs, each representing a region of Alaska, and a 13th corporation for Alaska Natives living outside Alaska. There are also 182 village, urban, and group corporations located within the 12 regions. In most cases, the regional corporations received a mixture of surface and subsurface rights to land while the village, urban, or group corporations received only surface rights. Some village corporations opted out of the ANCSA settlement to receive surface and subsurface rights to their former reservation lands and relinquished all ANCSA benefits, including claims to additional land, monetary payments, or shares of stock in a regional corporation. Additionally, in some cases, village corporations merged with each other or with the regional corporation. The legislative history of ANCSA is focused on economic development for the benefit of Alaska Natives. Each eligible Alaska Native is generally entitled to membership both in the corporation established for his or her village and in the regional corporation in which the village is located. As shareholders, Alaska Natives are entitled to a voice in the management of and a share in the lands, assets, and income as decided by the board of directors of the corporations, which own and manage the land and money. ANCSA implemented restrictions that generally allow original shareholders to transfer shares only under certain circumstances, such as divorce or through a gift or a will. Additionally, four of the 30 corporations we reviewed have chosen to issue new stock to descendants of the original shareholders or those who did not have the opportunity to enroll as a shareholder originally. ANCs vary widely in number of shareholders and profitability. Table 2 illustrates some examples. For ANC firms in the 8(a) program, SBA has specific oversight responsibility for accepting the firm into the 8(a) program, which includes ensuring that the ANC does not have more than one 8(a) firm in the same primary line of business, defined by a North American Industry Classification System (NAICS) code; verifying each firm's size status to ensure that it qualifies as small under the NAICS code assigned to the procurement; and annually reviewing 8(a) firms to track their progress in the 8(a) program. There is a 9-year limit to participation in the 8(a) program, and firms—including ANC firms—are required to obtain a certain percentage of non-8(a) revenue during the last 5 years to demonstrate their progress in developing a viable business that is not reliant on the 8(a) program. SBA's district offices are responsible for tracking the business mix of 8(a) and non-8(a) revenue on an annual basis. If a firm does not meet its required business mix during one of the last 5 years, SBA invokes a plan of remedial action for the next year, in which the firm reports to SBA its progress toward compliance with the required business mix. Until the required mix is demonstrated, the firm will not be eligible for sole-source 8(a) contracts; a simplified sketch of this check follows below. Currently, there are over 9,400 firms in the 8(a) program. From fiscal year 2000 to 2004, the federal government obligated a total of about $4.6 billion to ANC firms, of which $2.9 billion, or 63 percent, went through the 8(a) program. About 13 percent of total 8(a) dollars were obligated to ANC firms in fiscal year 2004. Further, from fiscal year 2000 to 2004, sole-source awards accounted for 77 percent of ANC 8(a) contracts for the six agencies in our trend analysis.
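To make the business-mix check concrete, the following minimal Python sketch evaluates one year's revenue mix. Because this report describes the requirement only as "a certain percentage" of non-8(a) revenue, the year-by-year targets in NON_8A_TARGETS are hypothetical placeholders rather than SBA's actual schedule, and the function name and inputs are ours.

```python
# Illustrative check of the 8(a) business-mix requirement described above.
# NON_8A_TARGETS is a hypothetical schedule; the actual required
# percentages are set by SBA regulation and are not given in this report.
NON_8A_TARGETS = {5: 0.15, 6: 0.25, 7: 0.35, 8: 0.45, 9: 0.55}

def business_mix_status(program_year, revenue_8a, revenue_non_8a):
    """Return (meets_mix, sole_source_eligible) for one program year."""
    total = revenue_8a + revenue_non_8a
    if program_year not in NON_8A_TARGETS or total == 0:
        return True, True  # no mix target applies before year 5
    meets_mix = (revenue_non_8a / total) >= NON_8A_TARGETS[program_year]
    # A firm that misses its target enters a remedial action plan and is
    # ineligible for sole-source 8(a) awards until the mix is demonstrated.
    return meets_mix, meets_mix

# Example: in year 6, $8 million of 8(a) revenue against $2 million of
# non-8(a) revenue (20 percent) would fall short of a 25 percent target.
print(business_mix_status(6, revenue_8a=8.0, revenue_non_8a=2.0))  # (False, False)
```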
The sole-source 8(a) contracts that we reviewed demonstrate the wide diversity of services provided by ANC firms worldwide. Our analysis, based on FPDS data, shows that federal dollars obligated to ANC firms through the 8(a) program grew from $265 million in fiscal year 2000 to $1.1 billion in 2004, with a noticeable increase in 2003. Over the 5-year period, about 63 percent of the government's obligations to ANC firms went through the 8(a) program. Figure 2 shows the breakdown between 8(a) and non-8(a) dollars obligated to ANC firms. We also analyzed the percentage of total 8(a) dollars obligated to ANC firms from fiscal years 2000 to 2004. Total obligations to all 8(a) firms grew from about $5.8 billion in fiscal year 2000 to about $8.4 billion in fiscal year 2004. The percentage obligated to 8(a) ANC firms grew from about 5 percent to about 13 percent during this time period. Whereas obligations to 8(a) ANC firms decreased only slightly between fiscal years 2003 and 2004, dollars obligated to other 8(a) firms decreased by almost $2 billion during that same time frame. SBA officials could not explain the decrease. Figure 3 depicts this trend. For the six agencies included in our 8(a) trend analysis, sole-source obligations to ANC firms increased from about $180 million in fiscal year 2000 to almost $876 million in fiscal year 2004. Over the 5-year period, sole-source obligations represented about 77 percent of these agencies' total obligations to 8(a) ANC firms. Figure 4 depicts the trend in 8(a) sole-source obligations to ANC firms for the six agencies. In recent years, ANC firms have performed a wide variety of services for the federal government, spanning 18 broad industries, across the United States and overseas. The services included facilities support services; construction; professional, scientific, and technical services; information technology services; and manufacturing. Our review of selected large sole-source 8(a) contracts further demonstrates the wide diversity of services provided by ANC firms, as shown in table 3. (Among the contractors listed in table 3, not reproduced here, are Chugach Management Services, Inc.; ASRC Airfield & Range Services, Inc.; Ahtna Technical Services, Inc.; Field Support Services, Inc.; KUK/KBRS Global, a joint venture between Kuk Construction LLC and Kellogg Brown & Root Services, Inc.; Bowhead Information Technology Services, Inc.; and Bowhead Support Services, a division of Bowhead Transportation Company, Inc.) In general, acquisition officials at the agencies we reviewed told us that the option of using ANC firms under the 8(a) program allows them to quickly, easily, and legally award contracts for any value. They also pointed out that awarding 8(a) contracts to ANC firms helps agencies meet their small business goals. Our review of 16 large sole-source contracts found that contracting officials had not always complied with requirements to notify SBA when modifying contracts, such as increasing the scope of work or the dollar value, and to monitor the percentage of the work performed by 8(a) firms versus their subcontractors. Agency officials told us that awarding sole-source contracts to 8(a) ANC firms is an easy and expedient method of meeting time-sensitive requirements. Some examples follow. An Army contracting official told us that his agency's limited contracting staff was the primary reason his office awarded an 8(a) sole-source contract to an ANC firm for base operations support.
The official added that this contract had been competitively awarded three times previously to large businesses, but in 1999 his office decided it did not have the staff to administer another full and open competition. Another Army official commented that she had to fill an urgent requirement for water and fuel tanks in support of the war in Iraq. Rather than directly award to a large manufacturer, which would require a justification and approval process for a sole-source award, the contract was awarded sole-source to an 8(a) ANC firm as a quicker acquisition strategy given the time-sensitive nature of the requirement. An e-mail in the contract file from a NASA official remarked that a sole-source award to an ANC firm would save much time as opposed to having to work through a competitive process, since the office was running short on available staff. Another NASA official stated that the additional resources needed to run a competitive procurement would likely negate any monetary savings that might be gained through competition. Another contracting official told us that it was the "unofficial" policy in his organization that for urgent requirements over the competitive limits for other 8(a) firms, an ANC firm is sought out. He described contracting with ANC firms as an "open checkbook" since sole-source awards can be made for any dollar amount. We found one example, however, where the process of awarding to an 8(a) ANC firm was not particularly expedient. An ANC firm proposed a price for a State Department construction contract that was almost twice as much as the government's original cost estimate. The State Department negotiated extensively for over a month, requesting four different price proposals from the contractor. At one point, the contracting office considered terminating the solicitation and awarding competitively to a prequalified firm, but due to time constraints the department decided to accept the ANC firm's final proposal, which was still slightly over the government's estimate. In another example from our file review, the Interior Department's GovWorks awarded a sole-source 8(a) contract on behalf of the Department of Defense's (DOD) Counter Intelligence Field Activity (CIFA) to an ANC firm. The contract was primarily to consolidate and co-locate the space available for contractor personnel, but also included some work to oversee construction and facilities program management. This contractor, which specialized in information technology services, told us it had been approached by CIFA for this project because it had successfully obtained space for another government agency. When awarding the contract, GovWorks did not consider any alternatives other than sole-source contracting with the ANC firm because CIFA had requested that firm. Contractor officials told us that the cost of the office space was incidental to a larger project for CIFA, yet we found that over 80 percent of the contract price was for the space. Furthermore, although SBA's Alaska district office had accepted the contract under the 8(a) program, a subsequent size determination found that at the time of award, the contractor did not qualify as small under the size standard for the contract. We also found an example where an agency could have competed the contract had there been adequate acquisition planning, but chose to award sole-source to an ANC firm because it was the easier method.
The Immigration and Naturalization Service awarded a facilities operation and maintenance contract for a federal detention facility. A contracting official who reviewed the presolicitation and pre-award packages told us that this was a recurring requirement and the contracting officer should have known well in advance that the existing contract was expiring. With sufficient acquisition planning, the agency could have awarded an 8(a) competitive contract, according to this official. However, he was advised by the contracting officer that awarding to an ANC firm was the quickest and easiest method and avoided competition. We reviewed the contract file and did not find a formal acquisition plan that addressed the strategy used. We reported in 2003 that the lack of adequate advance planning by the Immigration and Naturalization Service for several detention center contracts limited opportunities for competition. The Small Business Reauthorization Act of 1997 directed the President to establish a goal of not less than 23 percent of the federal government's prime contracting dollars to be awarded to small businesses each fiscal year. As part of this goal, Congress has directed that 5 percent of prime contract dollars be awarded to small, disadvantaged businesses. SBA is charged with working with federal agencies to ensure that agency goals, in the aggregate, meet or exceed these goals. Several contracting officers told us that they had turned to 8(a) ANC contracts as a way to help their agencies meet small business goals. ANC firms in the 8(a) program are deemed by legislation to be socially and economically disadvantaged. Because contract awards can be categorized by agencies to allow them to take credit in more than one small business category, awards to 8(a) ANC firms can be applied to the agencies' overall small business goal as well as to their small, disadvantaged business goal. One Energy contracting official told us that there is tremendous pressure to award contracts to small businesses, so she turns to 8(a) ANC firms whenever possible. A NASA official told us that his contracting office had been aggressive in promoting socioeconomic development with small disadvantaged businesses and had particularly wanted to award a contract to benefit the Native American community. Although several small businesses expressed interest in NASA's requirement for technical and fabrication support services, rather than compete the procurement, NASA opted for a sole-source award with an 8(a) ANC firm. SBA regulation requires that, where the contract execution function is delegated to the agencies, they must report to SBA all 8(a) contract awards, modifications, and options. Further, the MOUs between SBA and the agencies require the agencies to provide SBA with copies of all 8(a) contracts, including modifications, within 15 days of the date of award. However, we found that contracting officers were not consistently following this requirement. While some had notified SBA when incorporating additional services into the contract or when modifying the contract ceiling amount, others had not. One contracting official told us that SBA has "stepped aside" when it comes to overseeing 8(a) contracts and that it would not occur to her to coordinate a contract modification, such as a scope change, with SBA.
We also found the following example where the contracting officer was under the impression that the scope of work could be expanded to include any additional lines of business not in the original contract because it was a sole-source 8(a) ANC contract. The Department of Energy awarded an $8.5 million sole-source contract to an ANC firm for administrative and general management consulting services, but one year later broadened the scope of work to include 10 additional lines of business related to facilities management support and engineering services. The additional work almost tripled the cost of the contract, raising it to $25 million. None of these changes were coordinated with SBA, despite the fact that SBA's letter to the Department of Energy approving the procurement clearly stated that if the statement of work was changed, SBA would have to re-determine the appropriateness of the NAICS code and the acceptability of the offer under the 8(a) program. The contracting official acknowledged that the scope change should have been coordinated with SBA, but her understanding was that because it was an ANC firm, anything could be added to the contract regardless of the dollar amount. By adding lines of business to the contract, the contracting officer may have improperly expanded the work beyond the contract's scope. Moreover, by not notifying SBA, the agency had no assurance that this ANC firm qualified as small under the contract's additional lines of business. We found that SBA's letters to the agencies approving 8(a) procurements did not always reiterate the notification requirement. Of the 16 contract files we reviewed, we found only five cases where the letter requested that all contract modifications be coordinated with SBA. Four of these specifically requested the agency to forward a copy of any scope changes. SBA officials could not explain why the acceptance letters were inconsistent. SBA officials in Alaska recently revised their approval letter template, which now requests copies of contract modifications if additional work is being added to the original contract or an option year is being exercised. The "limitations on subcontracting" clause in the Federal Acquisition Regulation requires that, for 8(a) service contracts with subcontracting activity, the 8(a) firm must incur at least 50 percent of the personnel costs with its own employees (for general construction contracts, the firm must incur at least 15 percent of the personnel costs). The purpose of this provision, which limits the amount of work that can be performed by the subcontractor, is to ensure that small businesses do not pass along the benefits of their contracts to their subcontractors. For the 16 files we reviewed, we found almost no evidence that the agencies are effectively monitoring compliance with this requirement, particularly where 8(a) ANC firms have partnered with large firms. As a result, there is an increased risk that an inappropriate share of the work is being done by large businesses rather than by the ANC firms. The procuring agency and the 8(a) firm both play a role in ensuring compliance with the limitations on subcontracting clause. The MOUs between SBA and the procuring agencies state that the agencies are responsible for the monitoring. SBA's regulation requires the 8(a) firms to certify in their offers to the appropriate SBA district office that they will meet the applicable percentage of work requirement for each contract when subcontracting.
In general, the contracting officers we spoke with were confused about whose responsibility it is to monitor compliance with the subcontracting limitations. Some thought it was SBA's responsibility; one asserted that the contractor was responsible for self-monitoring; and others acknowledged that it was their responsibility but were not monitoring it formally. For the contracts in our file review, SBA's letters to agencies approving the 8(a) procurements were not consistent in reminding contracting officers to include the limitations on subcontracting clause in the contract. Six of the letters did not include this language. We brought this discrepancy to the attention of SBA officials, who stated that all approval letters should contain this requirement as standard language. In addition, we found that two of the awarded contracts did not contain the limitation on subcontracting clause, as required. The responsible contracting officials told us the clause should have been included and was omitted as a result of an oversight. We also found that contracting officers were unclear about how to monitor the subcontracting requirements under indefinite quantity contracts, under which agencies place task or delivery orders. SBA's 8(a) regulation states that for indefinite quantity service or supply contracts, the participant must demonstrate semi-annually whether it has performed 50 percent of personnel costs with its own employees for the combined total of all task or delivery orders at the end of each 6-month period. This does not mean that the 50-percent minimum requirement applies to work performed under each individual task order or that a contractor must meet the requirement cumulatively for all work performed under all task orders at any given point in time; a sketch illustrating the correct application follows below. We found contracting officers who misinterpreted the regulation to mean that the contractor must perform the required percentage over the life of the entire contract. As a result, one contracting officer decided it was too difficult and thus did not monitor the subcontracting effort. In one example from our file review, the Energy Department awarded a sole-source indefinite quantity contract for a construction project to an 8(a) ANC firm primarily because this firm had a previous business relationship with the large incumbent contractor and planned to use the incumbent as a subcontractor for the new contract. The contracting officer believed that the limitations on subcontracting must be demonstrated by the end of the entire contract period. We reviewed an invoice that showed that cumulatively for all tasks to date, the subcontract labor costs made up 90 percent of the total labor, indicating that the 6-month task order review requirement warranted attention. An agency contracting official told us that it is not uncommon for large businesses to approach him wanting to know how to "partner" with an ANC firm. Furthermore, representatives from one ANC firm told us that an agency had awarded it a "pass-through" contract, or one where the subcontractor performs most of the work, to take advantage of the 8(a) ANC firm's ability to obtain sole-source contracts. The agency wanted to contract with a particular business, but could not award a sole-source contract directly to that business. The agency awarded the contract to the ANC firm and required it, through a directed subcontracting plan, to subcontract with the desired business.
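The semi-annual test described above can be sketched in Python as follows. This is an illustrative reading of the rule, not SBA's or any agency's tooling: the grouping of invoice data into 6-month periods, the tuple format, and the function name are our assumptions, and the 0.50 threshold would be 0.15 for general construction contracts.

```python
# Sketch of the semi-annual limitations-on-subcontracting test for an
# indefinite quantity service contract: the 50-percent test applies to
# the combined personnel costs of all task orders in each 6-month
# period -- not to each task order alone, and not cumulatively over
# the life of the contract.
from collections import defaultdict

SERVICE_THRESHOLD = 0.50  # general construction contracts would use 0.15

def semiannual_compliance(task_order_costs):
    """task_order_costs: (period, own_labor_cost, subcontract_labor_cost)
    tuples, where period names the 6-month window the costs fall in."""
    by_period = defaultdict(lambda: [0.0, 0.0])
    for period, own, sub in task_order_costs:
        by_period[period][0] += own
        by_period[period][1] += sub
    return {period: (own / (own + sub)) >= SERVICE_THRESHOLD
            for period, (own, sub) in by_period.items() if own + sub > 0}

# The Energy invoice discussed above showed subcontract labor at
# 90 percent of total labor, which a check like this would flag.
print(semiannual_compliance([("2005-H1", 10.0, 90.0)]))  # {'2005-H1': False}
```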
When asked what recourse contracting officers would take if they found an 8(a) firm to be out of compliance with the limitations on subcontracting, some agency officials responded that they had no plan in place. In fact, one contracting officer commented that he would be "laughed out of the office" if he brought up the compliance issue as a reason for terminating the contract. Several contracting officials told us that they review the cost proposals to assess how much work was planned to be subcontracted out, but they do not follow up during contract performance to ensure that the prime contractor complies with the plan. In one case, we found that an 8(a) ANC firm's technical proposal to the Department of Transportation for an information technology consolidation project included an intention to subcontract with a large firm, yet did not clearly delineate the breakout of work between the firms. From reviewing the agency's evaluation of the proposal, we did not find any evidence that contracting officials questioned the relationship or the division of labor prior to contract award. Later, however, the contracting officer modified the contract to require the 8(a) firm to provide semi-annual subcontracting reports that would detail the subcontracting percentage for the previous 6 months. ANCs use the 8(a) program as one of many tools to generate revenue with the goal of providing benefits to their shareholders. ANCs participating in the 8(a) program have various business strategies to maximize revenue. For example, some own multiple 8(a) subsidiaries, either in niche markets or diversified industries. Others recruit outside expertise to manage their 8(a) operations. Additionally, many form partnerships—with other ANCs or other businesses—and holding companies for increased efficiencies. For ANCs participating in the 8(a) program, federal contracts awarded through the program are one of a number of revenue sources, along with timber, tourism, real estate, and market investments. Corporations consolidate their income to fund operations at the parent level, to invest in subsidiary operations, and to provide benefits to shareholders. Figure 5 shows a sample ANC's revenue sources. Some corporations rely on federal contracting with 8(a) subsidiaries as a primary revenue source, while others do not. For example, of the five corporations whose subsidiaries comprised 76 percent of the government's 8(a) ANC dollars from fiscal years 2000 to 2004, three depend almost exclusively on current, exited, or planned participants in the 8(a) program for their revenues. However, for the other two corporations, 8(a) subsidiaries are only one investment in a diversified portfolio that includes energy services, telecommunications, and oil-field and mining support. We also interviewed four corporations that do not participate in the 8(a) program, relying instead on telecommunications, real estate, tourism, natural resources, and other investments. The ANCs we reviewed do not separately track the shareholder benefits generated specifically from 8(a) activity. Thus, an explicit link between the revenues generated from the 8(a) program and benefits provided to shareholders is not documented. However, ANCs do track benefits generated from their consolidated revenue sources. Benefits vary among corporations, but include dividend payments, scholarships, internships, burial assistance, land gifting or leasing, shareholder hire, cultural programs, and support of the subsistence lifestyle.
For more information on benefits, see appendix X. We found that sizable 8(a) revenues do not guarantee a higher level of shareholder benefits, as two of the five ANCs that account for most of the 8(a) ANC dollars obligated from fiscal years 2000 to 2004 demonstrate. One corporation, which provides sizable benefits, credits the 8(a) program with its continued existence, its return to profitability after declaring bankruptcy, and its ability to provide monetary benefits. In the early 1990s, the corporation was required to pay off its debts before paying any dividends. Its board and management attribute its return to profitability to its heavy participation in the 8(a) program. By 2004, the ANC paid out dividend amounts that were among the highest of all regional corporations. An original shareholder owning 100 shares, for example, received $3,500 in dividends in 2004. The ANC also provided a number of other benefits to its shareholders, their spouses and descendants, such as scholarships and a business assistance program. In contrast, another ANC with a high level of activity in the 8(a) program is currently unable to provide a comparable level of monetary benefits. This corporation encountered a few years of heavy losses due to lawsuits and management malfeasance. Since being in financial recovery for the past 5 years, it has not been allowed to issue dividends to shareholders. However, it provides other benefits, such as scholarships and protection of land and subsistence rights for its shareholders. We also found that a high level of benefits can exist even if an ANC is not participating in the 8(a) program at all. For example, at the time of our review, one regional corporation received all of its revenues from its diverse non-8(a) investments, including real estate, natural resources, telecommunications, tourism, golf resorts, casino gaming, construction, and oil-field services. From 2000 to 2004, this corporation provided dividend payments that were substantially higher than any others we reviewed and also provided a number of additional types of benefits to its shareholders. To generate revenue, many ANCs own multiple businesses in the 8(a) program, taking advantage of their special ability to do so. Many of the subsidiaries have offices that are located outside of Alaska, which is not prohibited by statute or regulation. As Figure 6 demonstrates, the number of 8(a) ANC subsidiaries has increased markedly. As of December 2005, 49 ANCs owned a total of 154 8(a) firms and 30 ANCs owned more than one 8(a) firm. See appendix IX for a listing of these 49 ANCs. The corporation owning the most subsidiaries had a total of 14 active or graduated 8(a) subsidiaries. The five corporations that represented the largest volume of 8(a) ANC dollars from 2000 to 2004 owned a total of 45 active and exited 8(a) subsidiaries, or 24 percent of the total. Regional corporations have been more active than the village and urban corporations in forming multiple subsidiaries. SBA’s 8(a) regulation requires that the subsidiaries of each ANC be certified in the 8(a) program under a different primary NAICS code, representing different lines of business. However, the 8(a) businesses can pursue work in an unlimited number of secondary NAICS codes, regardless of their primary line of work declared at the time they apply to the 8(a) program. 
This means that an 8(a) subsidiary of an ANC may pursue government contracts under any of its primary or secondary NAICS codes, including those that overlap with the secondary NAICS code of another 8(a) subsidiary owned by the same parent corporation. ANCs use their ability to own multiple businesses in the 8(a) program, as allowed by law, in different ways. The following table summarizes some of the practices we identified in our interviews with ANCs and our review of their documentation. According to SBA data, 36 ANC firms exited the 8(a) program from 1998 through 2005. Eleven subsidiaries exited because they completed their 9-year term in the program. The remaining 25 subsidiaries exited the program before completing the full 9-year term. Of these, seven graduated early from the program after exceeding SBA’s size standards for revenue or number of employees. Though no longer 8(a) participants, these subsidiaries are obligated to continue to perform work on previously awarded 8(a) contracts, including any priced options that may be exercised. Another subsidiary lost its 8(a) status after failing to file paperwork with SBA. Other subsidiaries dissolved, became inactive, or were sold to other businesses. We found a variety of other strategies that ANCs use to generate revenue, as discussed below. Although all of the ANCs that we reviewed retained a board composed entirely of Alaska Natives, several have recruited outside executives who are not Alaska Natives to manage the parent corporation or their 8(a) operations. Some corporations recruited these executives for their specific experience in the 8(a) program, which they gained working on other government contracts or in operations at other 8(a) ANC subsidiaries. Some corporation executives stated that this managerial expertise was a key factor to success in the 8(a) program. For example, representatives from one corporation told us that its 8(a) subsidiary suffered after its executive left to work at another ANC. Some of these managers command salaries significantly higher than those of the executives at the parent corporation. For example, in 2004, a corporation paid one of its chief executive officers for 8(a) operations almost $1 million — more than three times as much as the highest-paid executive of the parent corporation. Additionally, a few ANCs hire outside marketing firms to assist them with securing contracts. One such firm provides services such as locating potential contracts for its ANC client, interviewing potential partners on the project, meeting with contracting agencies, and following up with the contracting officer after award. Another business strategy is to create partnerships with individuals or other businesses to gain access to capital, experience, or expertise. For example, one corporation entered into a partnership by sharing subsidiary ownership with another ANC when it did not have the necessary capital to create a new subsidiary. The other corporation benefited from the partnership because it was new to the 8(a) program and needed the other corporation’s experience. In addition to ownership arrangements, many ANCs pursue other types of partnerships, such as joint ventures and mentor-protégé relationships, as a business strategy to better position themselves for federal contract opportunities through the 8(a) program. Joint venture agreements. A “joint venture” is an agreement between an 8(a) participant and one or more businesses to work together on a specific 8(a) contract. 
With SBA’s approval, an 8(a) subsidiary may enter into an unlimited number of joint venture agreements. Of the 26 corporations we interviewed that were participating in the 8(a) program, 22 owned subsidiaries that participated in a total of 57 joint venture agreements. In 2001, a joint venture between two ANCs was awarded a $2.1 billion contract by the National Imagery and Mapping Agency. Mentor-protégé agreements. SBA established the mentor-protégé program to encourage relationships between 8(a) businesses and other firms that act as mentors to provide technical, financial, and other assistance to their protégés. An 8(a) subsidiary may be a protégé to only one mentor at a time. Of the ANCs that we interviewed that were participating in the 8(a) program, 19 owned a total of 24 subsidiaries participating in mentor- protégé agreements. ANCs create holding companies – non-8(a) subsidiaries that provide shared administrative services to other subsidiaries, for a fee – which also aid their participation in the 8(a) program. Of the 30 corporations we interviewed, 11 had formed holding companies. Two corporations had established three separate holding companies. Figure 7 shows a sample ANC with a holding company for subsidiaries in and outside of the 8(a) program. SBA requests that ANCs seek approval before forming a holding company, which must be wholly-owned by the parent ANC for the subsidiaries to be eligible for the 8(a) program. During the course of our review, we found one holding company that was 80-percent owned by the parent ANC and 20-percent owned by two holding company executives. SBA’s records, however, showed the company as 100-percent owned by the parent ANC. A representative of the holding company told us that the ownership arrangement was changed after SBA’s initial approval of the holding company. The company did not notify SBA of the change because the holding company is not itself a participant in the 8(a) program and it wholly owns all of its subsidiaries, thereby maintaining compliance with the minimum 51-percent ownership requirement. SBA points to the statute and its regulations, which show that ANC 8(a) participants must be majority-owned by an ANC or a wholly-owned entity of an ANC. Therefore, subsidiaries under a partially-owned holding company are no longer eligible to participate in the 8(a) program. Since this situation came to light, the ANC and the holding company executives rescinded the 20-percent ownership arrangement to maintain compliance with SBA requirements. Further, the SBA Alaska district office revised its template letter approving a change in ownership to clarify the restrictions on ownership of a holding company. ANC executives told us the benefits of holding companies included: Greater efficiencies. The holding companies can provide accounting, human resources, legal, marketing, or other services, allowing the ANC to operate more efficiently. Since subsidiaries underneath the holding company do not need to perform these functions, they may employ fewer administrative staff and instead employ only technical staff. A lean staff is especially important since subsidiaries can become ineligible for the 8(a) program when they exceed a certain number of employees. Consistent policies and procedures. Some corporations established holding companies to facilitate consistent policies, procedures, and corporate governance across the subsidiaries. Easier administration. 
Corporation officials cited several administrative benefits to establishing holding companies, including the following examples: The holding company's smaller board allowed for faster decisions than assembling the parent corporation's entire board. Only one entity—the holding company—would be audited by the Defense Contract Audit Agency as opposed to each of the individual subsidiaries. The holding company saved time on security clearances. For example, for a contract involving classified work, the holding company management and board of directors already had security clearances, saving the time required to perform background checks on the corporation-level management and board of directors. Coordination among subsidiaries. One corporation official told us that the holding company helps prevent competition among its subsidiaries for the same contracting opportunities. Legal protection. Representatives from two corporations stated that the holding company separates the parent company from most liability that a subsidiary may incur. For example, if the subsidiary went bankrupt, the parent corporation generally could not be held legally or financially responsible. SBA has not tailored its policies and practices to account for ANCs' unique status in the 8(a) program and growth in federal contracting, even though officials recognize that ANCs enter into more complex business relationships than other 8(a) participants. SBA officials told us that they have faced a challenge in overseeing the activity of the 8(a) ANC firms because ANCs' charter under ANCSA is not always consistent with the business development intent of the 8(a) program. The officials noted that the goal of ANCs—economic development for Alaska Natives from a community standpoint—can be in conflict with the primary purpose of the 8(a) program, which is business development for individual small, disadvantaged businesses. However, the officials agreed that improvements are needed in their oversight and said they are considering various actions in this regard. They told us that they are planning to revise their regulations and policies to address ANCs' unique status in the 8(a) program. Moreover, they are now in the process of implementing a new, automated data collection tool to more readily collect information on 8(a) firms. It is expected to be operational during fiscal year 2007. SBA's oversight has fallen short in that it does not track the business industries in which ANC subsidiaries have 8(a) contracts to ensure that no more than one subsidiary of the same ANC generates the majority of its revenue under the same primary NAICS code; consistently determine whether other small businesses are losing contracting opportunities when large, sole-source contracts are awarded to 8(a) ANC firms; adhere to a legislative and regulatory requirement to ascertain whether 8(a) ANC firms, when entering the 8(a) program or for each contract award, have, or are likely to have, a substantial unfair competitive advantage within an industry; ensure that partnerships between 8(a) ANC firms and large firms are functioning in the way they were intended under the 8(a) program; and maintain information on ANC 8(a) activity. SBA officials from the Alaska district office reported to headquarters in the most recent quality service review that the make-up of their 8(a) portfolio is challenging and requires more contracting knowledge and business savvy than usual because the majority of the firms they oversee are owned by ANCs and tribal entities.
The officials commented that these firms tend to pursue complex business relationships and tend to be awarded large and often complex contracts. We found that the district office officials were having difficulty managing their large volume and the unique type of work in their 8(a) portfolio. When we began our review, SBA headquarters officials responsible for overseeing the 8(a) program did not seem aware of the growth in the ANC 8(a) portfolio and had not taken steps to address the increased volume of work in their Alaska office. As discussed above, ANCs can create multiple 8(a) subsidiaries that can be based across the United States. SBA's Alaska district office, which is responsible for overseeing most 8(a) ANC contracting activity, does not track the business industries in which the subsidiaries win 8(a) contracts under secondary NAICS codes. Thus, SBA is not ensuring that a firm's secondary NAICS codes do not, in effect, become the primary business line by generating the majority of revenue. This situation could allow an ANC to have more than one 8(a) subsidiary perform most of its work under the same primary NAICS code, which SBA regulation does not allow. Appendix XI shows an example of an ANC with subsidiaries marketing their ability to perform work in a number of different industries. Headquarters officials told us that they do not monitor the industries from which 8(a) participants receive revenue because they do not want to stifle the growth of these companies. However, the officials acknowledged that they would be concerned if a subsidiary's primary industry revenue source changed without SBA being notified. They have not developed a plan to increase monitoring of ANCs' secondary NAICS codes, even though many of these firms take advantage of their ability to obtain contracts under secondary lines of business. We found cases where SBA did not take action when incumbent small businesses lost contract opportunities after an 8(a) ANC firm was awarded a large sole-source contract. For example: The Department of Transportation awarded an information technology contract to an 8(a) ANC firm in an effort to support transition to a single integrated infrastructure. According to the department's acquisition plan, the goal is to create a more mission-effective, secure, and cost-effective computing environment that will provide common services. Previously, this service was being provided under separate contracts with eight small businesses. The consolidation project will likely discontinue the work performed by these small businesses and replace it with the single infrastructure managed by the 8(a) ANC firm. One of the incumbent small businesses protested the award to our agency. In its submission to our bid protest office, SBA acknowledged that it had not conducted the required adverse impact analysis, but asserted that it had viewed the requirement as "new" and therefore had incorrectly concluded it was not required to perform the analysis. SBA also noted that the 8(a) regulation provides that, even where there is a presumption of adverse impact, SBA "may"—rather than "shall"—determine whether adverse impact exists. SBA interprets this to mean that it has the discretion to accept a contract into the 8(a) program even where one of the contractors meets the presumption of adverse impact. The scope of an Air Force base contract with an ANC firm has been expanded as additional base civil engineering services, previously provided by small businesses, have been absorbed into the contract.
Since the initial contract award, the estimated contract value has increased by $46 million to nearly $600 million. The contracting official coordinated these changes with SBA via e-mail. Rather than disapproving the request or evaluating the impact on other small businesses, SBA only expressed concern that the contracting officers were absorbing work into the contract that was well within the capability of other 8(a) contractors, indicating that it was "troubled" over the loss of a prime contracting opportunity for other small businesses. The contracting officer told us that the Air Force has now decided to stop adding services to the contract and will maintain the other existing small business contracts. When a procuring agency is interested in offering a requirement to a specific participant in the 8(a) program for a sole-source contract, the agency is required to send SBA an offering letter with information on the description of the work, the NAICS code, anticipated dollar value of the requirement, and the names and addresses of any small business contractors that have performed on the requirement during the previous 24 months, among other things. At the time that SBA accepts a procurement for award into the 8(a) program, it is required to consider whether individual small businesses, a group of small businesses in a geographical area, or other business programs will be adversely impacted. Adverse impact is determined to be present where, among other things, a small business has been performing the requirement outside the 8(a) program and this work represents 25 percent or more of its revenue; a simplified sketch of this test follows below. In almost all cases for the 16 large sole-source contracts we reviewed, SBA's letters to the agencies approving the procurements contained boilerplate language: "a determination has been made that acceptance of this procurement will cause no adverse impact on another small business concern." The language in the acceptance letters suggests that SBA conducted a formal adverse impact study, yet this was not the case for any of the contracts we reviewed. The letters do not clarify whether the determination was made based on a formal adverse impact study or whether no determination was required because the requirement was new or previously had been performed by a large business. SBA officials told us that the language is intended to encompass all situations where there is no adverse impact. SBA officials stated that it is difficult for them to ensure that other small businesses are not negatively affected because they are relying on the procuring agency to provide the procurement history, and, in their view, procuring agencies are not always forthcoming. During our review, the Alaska district office revised its standard letter to agencies to state that the adverse impact determination was made based on the procurement history the agency provided to SBA in its letter offering the procurement to the 8(a) program. The letter also now states that the determination that acceptance of the procurement will cause no adverse impact on another small business was made on the basis of the agency's identifying the requirement as new or not identifying an incumbent contractor.
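The presumption described above amounts to a simple screen over the procurement history supplied in the agency's offering letter. The Python sketch below is illustrative only: it models just the revenue test stated in this report (the regulation's "among other things" conditions are omitted), and the field names are hypothetical.

```python
# Illustrative screen for the adverse impact presumption described above:
# impact is presumed where an incumbent small business performed the
# requirement outside the 8(a) program and that work is 25 percent or
# more of its revenue. Field names are hypothetical.
IMPACT_THRESHOLD = 0.25

def presumed_adverse_impact(incumbents):
    """incumbents: dicts drawn from the agency's offering letter."""
    flagged = []
    for firm in incumbents:
        if not firm["is_small"] or firm["total_revenue"] == 0:
            continue
        if firm["requirement_revenue"] / firm["total_revenue"] >= IMPACT_THRESHOLD:
            flagged.append(firm["name"])
    return flagged

print(presumed_adverse_impact([
    {"name": "Incumbent A", "is_small": True,
     "requirement_revenue": 3.0, "total_revenue": 10.0},  # 30 percent: flagged
]))  # ['Incumbent A']
```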
The statute provides that, in determining the size of a small business concern owned by a socially and economically disadvantaged Indian tribe (or wholly owned business entity of such tribe), each firm's size shall be independently determined without regard to its affiliation with the tribe, any entity of the tribal government, or any other business enterprise owned by the tribe, unless the Administrator determines that one or more such tribally owned business concerns have obtained, or are likely to obtain, a substantial unfair competitive advantage within an industry category. SBA has incorporated this language into its 8(a) regulation, but is not making the determinations that these business concerns have obtained, or are likely to obtain, a substantial unfair competitive advantage. In fact, the agency has no procedure in place to make these determinations. Officials told us that the language in the statute is confusing and that they are not sure how to implement it. They had not taken steps to obtain clarification and make any needed revisions to the 8(a) regulation or their standard operating procedures. SBA officials noted that the amount of participation by ANCs in the federal contracting market is so minimal when compared to all other businesses that they do not expect an ANC would have a substantial unfair competitive advantage in one industry. SBA is required to approve partnerships between 8(a) and other firms, such as mentor-protégé and joint venture arrangements, to ensure the agreements are fair and equitable and will be of substantial benefit to the 8(a) concern. Where SBA concludes that an 8(a) concern brings very little to the joint venture relationship in terms of resources and expertise other than its 8(a) status, SBA regulations state that SBA will not approve the joint venture agreement. SBA officials told us that they work closely with the partnership firms to ensure that the 8(a) company has control in the joint venture and will be gaining from the relationship. Further, SBA's regulations state that SBA will not approve a mentor-protégé relationship that it determines is merely a vehicle to enable a non-8(a) participant to receive 8(a) contracts. We found indications that oversight of these partnership relationships, particularly in the context of ANCs' unique provisions and large businesses that want to take advantage of those provisions, may not be adequate. For example, representatives from an ANC firm told us that its mentor firm exploited it for its 8(a) status. In pursuit of a particular contract, the Alaska-based subsidiary invested in an office and staff in Arkansas on the advice of its mentor. When the contract was not won, the mentor deserted the protégé, and the subsidiary was left to search for federal work on its own in Arkansas. ANC firms in the 8(a) program provide information to SBA on their partnership arrangements as part of the annual review process, and SBA is reliant on this information to assess the partnerships' success. Therefore, SBA may not obtain all necessary information to determine if the partnership is working as intended, even though SBA has primary responsibility to monitor these arrangements. We found examples where the procuring agency had concerns about a partnership situation, but did not report its concerns to SBA, nor did SBA ever inquire whether the partnership was working as intended.
A State Department program official told us that his office had good intentions when it identified a joint venture between an 8(a) ANC firm and a large firm for a sole-source 8(a) award of an international construction services contract. In line with the business development aspect of SBA's mentor-protégé program, the State Department official had envisioned that the ANC firm would gain construction experience from the globally recognized larger partner and then compete on its own for other construction work at the State Department. However, the official, who was also the contracting officer's representative, expressed concern that all the actual construction work was being subcontracted out and the joint venture was only doing construction management, which was not the intent when the requirement was offered to the 8(a) program. Moreover, in an e-mail to the contracting officer, this official suggested that the contractor had some performance problems and may have been circumventing the prices negotiated in the contract by using subcontracts for all the work. The program official never made these concerns known to SBA, nor did SBA ever inquire whether the partnership was working as intended. According to State Department officials, the contracting officer looked into the matter and found the concerns were unfounded. In another example at the State Department, officials had some concerns that the 8(a) ANC firm was a front company for the large business in a joint venture for another construction project. In response to the concerns, representatives from the joint venture presented information to State officials on the role of the ANC firm, stating that it was involved with management from top to bottom and that the large firm would provide construction expertise where needed. We found no evidence that State officials contacted SBA about this issue at the time. SBA recognizes that the mentor-protégé aspect of the 8(a) program can be an important component of the overall business development of small businesses. However, officials believe that joint ventures between mentors and their protégés may be inappropriate for 8(a) sole-source contracts above the competitive thresholds set for other 8(a) firms. SBA cites complaints that non-8(a) firms have received substantial benefits through the performance of large sole-source 8(a) contracts as joint venture partners with tribally owned and 8(a) ANC firms. Further, where the joint venture involves a large business mentor, SBA recognizes a perception that large businesses may be unduly benefiting from the 8(a) program. SBA lacks adequate data regarding the 8(a) program in general and does not collect any information on ANCs' 8(a) activity. SBA could not provide us with reliable data for ANC revenues in the 8(a) program, even though all program participants are required to report this information annually. An SBA official explained that the district offices stopped using the database that collects this information and that the agency therefore had no recent data on 8(a) participants' revenues. Overall, data on ANC 8(a) contracting activity were not readily available. For example, there is no mechanism in place for agencies to code 8(a) awards to ANCs in FPDS.
The complex nature of some ANCs' 8(a) business practices, combined with the tension between ANCSA's goal of economic development for Alaska Natives and the 8(a) program's goal of developing individual small businesses, creates the need for SBA to tailor its regulations and policies as well as to provide greater oversight in practice. Furthermore, since agencies can contract directly with ANC firms, they too have responsibility to ensure that these firms are operating in the program as intended. Without this level of oversight, there is clearly the potential for unintended consequences or abuse. We recommend that the Administrator of SBA take the following five actions when revising relevant regulations and policies:

1. Ascertain and then clearly articulate in regulation how SBA will comply with existing law to determine whether and when one or more ANC firms are obtaining, or are likely to obtain, a substantial unfair competitive advantage in an industry.
2. In regulation, specifically address SBA's role in monitoring ownership of ANC holding companies that manage 8(a) operations to ensure that the companies are wholly owned by the ANC and that any changes in ownership are reported to SBA.
3. Collect information on ANCs' 8(a) participation as part of required overall 8(a) monitoring, to include tracking the primary revenue generators for 8(a) ANC firms to ensure that multiple subsidiaries under one ANC are not generating their revenue in the same primary industry.
4. Revisit the regulation that requires agencies to notify SBA of all contract modifications and consider establishing thresholds for notification, such as when new NAICS codes are added to the contract or there is a certain percentage increase in the dollar value of the contract. Once notification criteria are determined, provide guidance to the agencies on when to notify SBA of contract modifications and scope changes.
5. Consistently determine whether other small businesses are losing contracting opportunities when awarding contracts through the 8(a) program to ANC firms.

We also recommend that the Administrator of SBA take the following five actions to improve practices pertaining to SBA's oversight:

1. Standardize approval letters for each 8(a) procurement to clearly assign accountability for monitoring of subcontracting and for notifying SBA of contract modifications.
2. Tailor wording in approval letters to explain the basis for adverse impact determinations.
3. Clarify MOUs with procuring agencies to state that it is the agency contracting officer's responsibility to monitor compliance with the limitation on subcontracting clause.
4. Evaluate staffing levels and training needed to effectively oversee ANC participation in the 8(a) program and take steps to allocate appropriate resources to the Alaska district office.
5. Provide more training to agencies on the 8(a) program, specifically including a component on ANC 8(a) participants.

To ensure that agencies are properly overseeing ANC 8(a) contracts, we recommend that the Secretaries of the Departments of Defense, Energy, Homeland Security, the Interior, State, and Transportation and the Administrator of NASA take the following action: Work with SBA to develop guidance to agency contracting officers on how to comply with requirements of the 8(a) program, such as limitations on subcontracting and notifying SBA of contract modifications, particularly when contracting with 8(a) ANC firms.
We provided a draft of this report to the departments of Defense, Energy, Homeland Security, Interior, State, and Transportation and to NASA and SBA. We received written comments from SBA, Homeland Security, the Interior, NASA, State, and Energy. We received official oral comments from Defense and Transportation. We also received written comments from the Native American Contractors Association. The written comments we received are included as appendixes II through VIII. In its written comments, SBA took issue with several aspects of the report. Its letter did not indicate whether it plans to implement the recommendations we made, but in a subsequent e-mail the agency expressed disagreement with several of them. SBA's comments and our views on them follow. The agency referred to the concerns we raise as "subjective" and stated that our analysis relies "far too heavily on isolated individual anecdotes" to support findings and recommendations pertaining to 8(a) ANC activity. We strongly disagree with this characterization. Our findings are supported by the facts we gathered and our analysis of regulations, policies, contract files, ANC annual reports, FPDS and agency data, and other relevant documentation, as well as interviews with agency contracting officers and acquisition officials, SBA officials in headquarters and the Alaska district office, and representatives of 30 ANCs. The findings we developed and the shortcomings in oversight we found directly support the 10 recommendations we make to SBA. Further, it is an undisputed fact that there has been significant growth in federal dollars awarded to 8(a) ANC firms in recent years, as recognized by SBA in its comment letter. Clearly, 6 of the 7 procuring agencies in our review, which account for most of the government's 8(a) dollars to ANC firms, agree that there is a need for them to work with SBA to develop guidance for contracting officers in light of the unique procurement advantages Congress has provided 8(a) ANC firms. SBA believes that our report should cite federal dollars to women-owned and other small business categories and the government's achievement of small business goals in general. That information is not relevant to this report. Our review focused specifically on ANC activity in the 8(a) program, as set forth in appendix I, which outlines our scope and methodology. SBA states that it has recently taken a number of steps to improve oversight of the 8(a) program, including taking into consideration special provisions afforded to 8(a) ANC firms, Native Hawaiian Organizations, and Indian tribes. It is unclear what steps SBA is referring to. While we note in our report that SBA officials told us they were planning to revise regulations and policies, we were not provided with any evidence that this or any other planned action had been taken, despite our requests for the information. SBA states that it is "conjecture" to make recommendations pertaining to data on 8(a) ANC activity until the lack of data explaining the economic activities of 8(a) participants, including ANC firms, is resolved. Our recommendation on data collection is intended to address this very gap. It is directed at SBA because that agency is responsible for managing the 8(a) program. We found that SBA lacked adequate data on the 8(a) program in general and was not collecting any information on ANC firms' activity specifically.
SBA pointed out that the statutory language refers to "substantial" unfair competitive advantage, a change we have made to the report. SBA found our focus on this issue unreasonable, stating that all 8(a) participants have been accorded a competitive advantage. During our review, it was clear that SBA had no policy or procedure in place for making unfair competitive advantage determinations. We do not understand how SBA can ignore the fact that Congress has directed it to make these determinations specifically for ANC firms in the 8(a) program. SBA refers to the tone of our report as "unsettling" and suggests that it could lead readers to conclude that we have concerns with the fact that agencies can count 8(a) ANC contracts toward their federal small business goals. We express no concerns of the kind. Rather, our concerns, as reflected in the recommendations to SBA, pertain to the level of oversight it is exercising over 8(a) ANC activity. In an e-mail sent after the comment letter, SBA expressed disagreement with several of the recommendations but did not address the others. It stated that its annual reviews track ownership changes and the business mix of all 8(a) participants and that its regulations require contracting officers to report contract modifications. These comments are not responsive to our recommendations. Our recommendations specifically discuss monitoring ownership of ANC holding companies, tracking primary revenue generators across 8(a) ANC subsidiaries, and establishing thresholds for notification of 8(a) contract modifications. SBA disagreed with the recommendation on determining whether other small businesses are losing contracting opportunities, stating that it already does so for all 8(a) sole-source offerings. As illustrated by the examples in our report, this is not the case. SBA's written comments are included as appendix II. The Department of Homeland Security agreed with the recommendation affecting it and indicated it would partner with SBA to ensure that the department's contracting officers have a thorough understanding of all contracting regulations on awarding contracts under SBA's 8(a) program. Homeland Security requested that we reflect that the department has only been in existence since 2003 and that FPDS data would not be available for the 5-year period. We agreed and added this point to our explanation of why we did not include the department in our trend analysis. In addition, the department stated that, in providing us a list of contracts awarded to firms with the DUNS numbers we supplied, its officials had not represented that the list included all contracts awarded to ANC firms. Homeland Security attempted to reconcile the missing contracts we identified with its list of contracts awarded to ANCs; however, we still determined that the agency's data were inadequate to include in our trend analysis. Homeland Security's written comments are included as appendix III. The Department of the Interior agreed with the recommendation affecting it and proposed that an interagency work group be established and headed by the SBA to develop guidance for contracting officers. The department also provided specific comments on the contract awarded to an ANC firm on behalf of DOD's Counter Intelligence Field Activity (CIFA).
The Interior Department said that the referenced contract was not awarded to the ANC firm "because CIFA…had requested that firm." The evidence we gathered from the contract file, as well as interviews with the contracting officer and the ANC firm, supports the facts as we have stated them. CIFA, through a preauthorization letter, had arranged with the ANC firm to provide a variety of urgently needed services and requested that GovWorks award the contract to that firm. Interior's written comments are included as appendix IV. NASA agreed with the recommendation affecting it and indicated that it will work with the SBA to develop guidance and to provide whatever assistance SBA may need to address the recommendations directed to it. NASA's written comments are included as appendix V. The State Department agreed with the recommendation affecting it, stating that it will work with the SBA to develop standardized guidance to contracting officers on monitoring limitations on subcontracting and SBA notification of contract modifications. The State Department noted that the contract negotiations involving an 8(a) ANC joint venture took place in a compressed acquisition cycle and that SBA was in direct contact with the venturing parties at the time they were negotiating the contract. State concludes that because of SBA's "simultaneous interaction" with the venturing parties and with State's contracting officer, a formal request for SBA intervention would have been superfluous. However, our discussion focuses on the concerns about the extent of work being performed by the 8(a) ANC firm versus that of its joint venturing partner. These issues were raised within the State Department several months after the contract was awarded, and SBA was not notified at that time. The department also suggested some technical changes, which we incorporated as appropriate. The department's written comments are included as appendix VI. The Department of Energy did not comment on the recommendation. It stated that our report gives the impression that agencies rely "significantly" on the ANC program to achieve small business goals. Our report does not state or imply that. Rather, we note that contracting officers have turned to 8(a) ANC firms as a way to help them meet their goals. The department also pointed to a perceived inconsistency in the report dealing with the "limitations on subcontracting" clause as it pertains to construction contracts. We disagree; the section in the report on this matter clearly establishes that the limitation for construction contracts is different from that for other services. Energy's written response is included as appendix VII. In official oral comments, DOD agreed with the recommendation, stating that the development of additional guidance by the department to ensure the effective oversight of 8(a) ANC contracts is necessary and that the department will work closely with SBA to develop this guidance. DOD added that, prior to commencement of these efforts, it is imperative that SBA undertake the actions we recommended for revising its relevant regulations and policies and improving its oversight practices concerning 8(a) ANC contracts, as these changes will form the basis of the new or expanded DOD-specific guidance. In official oral comments, the Department of Transportation agreed with the recommendation. Transportation also provided some technical comments that we incorporated as appropriate. We also received written comments from the Native American Contractors Association.
The association believes that we should more fully acknowledge the legal and policy basis of 8(a) program rules for Native Entities. We believe the report thoroughly explains the legislative basis for 8(a) ANC firms' procurement provisions and that it sets forth the rules for ANC firms as compared to those for other 8(a) firms. The association also raised several broader issues affecting the entire federal procurement system that it believes we should have addressed, such as contract bundling, the acquisition workforce, improper counting toward small business goals, and modifications to contract scope. While these are areas that we have reported on in the past, the focus of this audit was on 8(a) ANC contracting. Contrary to the association's assertion, we do place certain findings—particularly with regard to the limitations on subcontracting and notification to SBA of contract modifications—in the context of the 8(a) program in general. For example, our recommendations to SBA on these issues are not limited solely to 8(a) ANC contracting activity. In technical comments provided separately, the association suggested that, for context, we include reference to total federal procurement spending on goods and services. We have added this information as a note to figure 3. The association's comments are included as appendix VIII. We are sending copies of this report to the Secretaries of Defense, Energy, Homeland Security, the Interior, State, and Transportation; the Administrators of SBA and NASA; the Director, Office of Management and Budget; the Native American Contractors Association; and other interested congressional committees. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix XII for a list of major contributors to this report. We conducted our work at the Small Business Administration (SBA), including its national headquarters and its district office in Anchorage, Alaska; the Departments of Defense, Energy, Homeland Security, the Interior, State, and Transportation; and the National Aeronautics and Space Administration (NASA). We traveled to Alaska and met with representatives of 30 Alaska Native corporations (ANC). We also met with representatives of the Native American Contractors Association in Washington, D.C., and interviewed officials from a number of small businesses as well as representatives from an 8(a) association. We reviewed relevant legislation, including the Alaska Native Claims Settlement Act (ANCSA), for background on the ANC corporate structure, and the Small Business Act and other relevant legislation, to understand the pertinent procurement advantages that ANC firms receive in the 8(a) program. To identify overall trends in the government's contracting with ANCs, we obtained data from the Federal Procurement Data System (FPDS) for fiscal years 2000 through 2004. To gather data on federal 8(a) contracting with ANCs, we identified each ANC firm's Data Universal Numbering System (DUNS) number and used this information to obtain data from FPDS and agencies.
To assess the reliability of the procurement data used in our 5-year trend analysis, we (1) compared FPDS and agency data to verify the accuracy of the data; (2) reviewed related documentation, including contract files; and (3) worked closely with agency officials to identify and resolve any data problems. When we found discrepancies, we brought them to the agency's attention and worked with officials to correct them before conducting our analyses. We determined that the data were sufficiently reliable for the purposes of our report. We had planned to include Homeland Security in our trend analysis, but did not do so for two reasons. First, since FPDS only includes Homeland Security contract data for part of fiscal year 2003 and beyond, we were unable to confirm the reliability of the data for the purposes of our 5-year trend analysis. Second, we found that the data from Homeland Security were inconsistent and therefore questioned the reliability of the data overall. For example, the data provided did not include contracts awarded by Immigration and Customs Enforcement and contained other data errors, such as contracts recorded with an incorrect dollar value or recorded as sole-source when awarded competitively. To assess the trends in government 8(a) sole-source contracting with ANCs from fiscal years 2000 to 2004, we reviewed data from the six federal agencies that, according to FPDS, account for about 85 percent of total federal dollars obligated to ANCs via the 8(a) program. These agencies were the departments of Defense, Energy, the Interior, State, and Transportation and NASA, which obligated about $2.5 billion in sole-source contracts to ANCs for fiscal years 2000 through 2004. To understand the facts and circumstances surrounding specific contract awards, we reviewed contract files, interviewed agency contracting officers, and reviewed any relevant bid protests for 16 large-dollar-value, sole-source 8(a) contracts at seven agencies. Whereas we included six agencies in our 8(a) sole-source trend analysis, we added the Department of Homeland Security to our contract file review. To identify two sole-source contracts awarded by Homeland Security, we began reviewing the contracts with the largest dollar awards from the data provided, but had to exclude a number of the largest contracts from our file review due to errors in the data. We brought significant data errors to the attention of Homeland Security officials, and the department stated that it had initiated corrective action. For the seven agencies, we selected contracts based on high ultimate award values and high dollars obligated to date that represented a variety of contractors and services. We made the initial contract selections based on the available data at that time. To assess how ANCs use the 8(a) program, we reviewed documentation and spoke with representatives from 30 Alaska Native corporations—all 13 regional and 17 selected village or urban corporations—and some of their 8(a) subsidiaries. In selecting corporations to interview, we considered diversity in geography, financial strategy and profitability, and participation in the 8(a) program. Tables 5 and 6 show the corporations included in our review. Additionally, we visited seven villages with populations that had a high percentage of Alaska Natives to understand the lifestyle and livelihood of the Alaska Native people. We selected these villages based on diversity in geography, population, average per capita income, and shareholder culture and history.
We also attended a shareholders' annual meeting in one of these villages to observe communication and relations between shareholders and corporate management. Table 7 shows the villages we visited. To understand the structure, shareholder population, and involvement in the 8(a) program of each corporation, we examined annual reports and other documentation from our selected 30 corporations and spoke with Alaska Native shareholders. We also interviewed ANC executives on corporate governance, strategies for participation in the 8(a) program, and benefits provided to shareholders. Additionally, we met with executives at selected subsidiaries participating in the 8(a) program to understand their structure, business strategies, and relationship to their parent corporations. To determine whether SBA's oversight of ANCs in the 8(a) program is adequate, we reviewed relevant regulations and operating procedures to understand the requirements for oversight of the 8(a) program and of ANC 8(a) activity. We interviewed SBA officials at the Alaska district office and reviewed relevant files to understand the staff's oversight role and workload priorities. Finally, we analyzed documents from and spoke with SBA headquarters officials in the Washington, D.C., office to understand their oversight of district offices and the 8(a) program and whether the officials have assessed and addressed the impact of increased ANC activity on the 8(a) program. Through our examination of documentation provided by the 13 regional and 17 village or urban Alaska Native corporations (ANC) included in our review, as well as interviews with corporation representatives and shareholders, we gained an understanding of how the corporations communicate with and obtain input from their shareholders and of the benefits they provide. The ANCs communicated with their shareholders through surveys, Web sites, newsletters, annual reports, local media, shareholder committees, and annual and other periodic meetings. Some had "open door" policies, which gave shareholders the opportunity to voice their opinions to management at any time. Additionally, corporations took steps to reach out to shareholders both out of state and in the villages. For example, one corporation's officials conducted the annual meeting via Web cast and noted that Internet attendance was beginning to outpace in-person attendance. Another corporation rotated its annual meeting among Anchorage, Seattle, and its regional hub. Additionally, several of the regional corporations regularly traveled to their villages to seek input. Steps taken by one to facilitate village outreach included moving the location of its annual meeting from the regional hub to the villages; holding the meeting in the native language; and investing in a boat to facilitate transport to the region's villages. Shareholder preferences for benefits differed among corporations. For example, one corporation stated that its shareholders prioritized protection of their land and the subsistence lifestyle. Shareholders of other corporations placed a greater value on dividends, scholarships, training, and job opportunities. Other examples of benefits tailored to shareholder needs included investing in an insurance company when other insurance companies were reluctant to insure shareholders' homes and subsidizing heating oil for residents of a small, remote community north of the Arctic Circle, absorbing a loss of $2.75 to $3.00 per gallon. Some regional corporations stated that they required sizable revenues to provide benefits to a large shareholder base.
Of the corporations we reviewed, the 13 regional corporations had approximately 102,000 shareholders, and the 17 village and urban corporations had about 17,000 shareholders. Overall, the corporations we reviewed saw a 31 percent increase in their number of shareholders since incorporation. The number of shareholders at two regional corporations more than doubled since incorporation. The 30 ANCs included in our review reported providing three categories of benefits: dividends, other direct benefits, and indirect benefits, as described below. Dividends. In 2004, the 30 corporations paid a total of $121.6 million in dividends. Eleven corporations issued no dividends. Of the corporations that issued dividends, payments ranged from $1.71 per share to $171.00 per share. In a given year, a shareholder may have received a dividend from his or her village corporation and an additional dividend from his or her regional corporation. Corporate officials noted that dividend payments, no matter how small, meant much to their shareholders in many rural villages where basic necessities were expensive—for example, milk cost $12 per gallon and fuel cost $5 per gallon. Original shareholders received 100 shares upon incorporation. One village corporation's 137 shareholders owned from as few as 1 share to as many as 200 shares, with an average of about 50 shares. A third of the ANCs created permanent funds to build up a reserve for future dividends. Two corporations told us that these funds allowed them to issue dividends even in years when they were unprofitable. Half of the ANCs established policies specifying an amount or percentage of net income to be distributed as shareholder dividends. For example, one corporation's board required an increase in its annual dividend amount by 10 percent over the previous year. Another corporation annually distributed 66 percent of its average net income for the prior 5 years to shareholders. The result of this policy, coupled with some unprofitable years, was that in 2004 this ANC paid 100 percent of its income in dividends to shareholders. Shareholder hiring preference and job opportunities. All of the corporations we interviewed reported a hiring preference for shareholders. Some corporations extended this preference to shareholders' families, other Alaska Natives, and/or other Native Americans. Other employment assistance programs. In addition to offering a shareholder hire preference, corporations made efforts to encourage other shareholder employment. Nine of the 30 corporations offered a management training program. Some corporations had agreements with partner companies encouraging shareholder hire. One corporation had a preference to conduct business with shareholder-owned businesses. Another corporation's employment assistance programs included mentoring; one-on-one counseling; business and career fairs; a survey of shareholders over 18 seeking employment; and tracking of shareholder employment status and interests in a database. Benefits for elder shareholders. Twelve of the 30 corporations we interviewed reported issuing benefits for elder shareholders. Some corporations paid additional regular dividends to elders, while others made one-time financial payments. Two corporations provided in-kind benefits for elders, such as a lunch program or a bus service. Scholarships. Almost all corporations offered scholarships for shareholders. Mentor and internship programs. One corporation established a mentor program that allows its youth to participate in the corporation.
Corporate officials told us that they instituted mentoring and internship programs to encourage the future involvement of shareholders in management and leadership roles. Burial assistance. Twenty-two of the 30 corporations reported providing some kind of assistance to the family of a deceased shareholder. Forms of burial assistance included cash, life insurance payments, and in-kind donations. Land leasing, gifting, or other use. Most of the village and urban corporations we interviewed leased, gifted, or made other use for shareholders of the land conveyed to the village corporation under the Alaska Native Claims Settlement Act. For example, one corporation gifted five acres to any shareholder who requested it. Community infrastructure. Several corporations invested in the infrastructure of their villages. For example, after the Department of the Interior's Bureau of Indian Affairs ceased barge service to its remote village, one corporation established a transportation company that became the only mechanism to bring goods to the community. Other projects included remodeling the community washateria and administering and subsidizing a village's cable and Internet utilities. Support of the subsistence lifestyle. Corporations took steps to protect and maintain the subsistence lifestyle of their shareholders. One corporation built subsistence leave into its personnel policy. Another corporation leased its land for "fish camps," or plots along a river for shareholders to catch and smoke fish in the summertime. Cultural preservation. Twenty-four of the 30 corporations we interviewed invested in cultural and heritage programs, which included museums, culture camps, or native language preservation. Establishment and support of affiliated foundations or nonprofit organizations. Twenty-one of the 30 corporations established affiliated foundations or nonprofit organizations. Donations to other nonprofit organizations. Almost all of the corporations donated to various nonprofit organizations. For example, one corporation donated to organizations that advocate for Alaska Natives, such as the Alaska Federation of Natives, Alaska Native Arts Foundation, Alaska Native Justice Center, and Get Out the Native Vote. Support to other corporations. Some regional corporations provided various kinds of assistance to the village corporations in their regions. For example, one regional corporation was trying to develop 8(a) partnerships with its village corporations to help them enter the 8(a) program with lower start-up and administrative costs. Other regional corporations provided recordkeeping, natural resources, and regulatory and community planning services for their village corporations. One Alaska Native corporation that we reviewed owned seven subsidiaries participating in the 8(a) program, with six of them marketing their abilities to perform work in the same line of business. In addition to the individual named above, Michele Mackin, Assistant Director; Theresa Chen; David E. Cooper; Barry DeWeese; Art James, Jr.; Julia Kennon; Jeff Malcolm; Meaghan Marshall; Sylvia Schatz; Robert Tagorda; and Tatiana Winger made key contributions to this report.
Alaska Native corporations (ANC) were created to settle land claims with Alaska Natives and foster economic development. In 1986, legislation was passed that allowed ANCs to participate in the Small Business Administration's (SBA) 8(a) program. Since then, Congress has extended special procurement advantages to 8(a) ANC firms, such as the ability to win sole-source contracts for any dollar amount. This report identifies (1) trends in the government's 8(a) contracting with ANC firms, (2) the reasons agencies have awarded 8(a) sole-source contracts to ANC firms and the facts and circumstances behind some of these contracts, and (3) how ANCs are using the 8(a) program. GAO also evaluated SBA's oversight of 8(a) ANC firms. While representing a small amount of total federal procurement spending, 8(a) obligations to firms owned by ANCs increased from $265 million in fiscal year 2000 to $1.1 billion in 2004. In fiscal year 2004, obligations to ANC firms represented 13 percent of total 8(a) dollars. Sole-source awards represented about 77 percent of 8(a) ANC obligations for the six procuring agencies that accounted for the vast majority of total ANC obligations over the 5-year period. These sole-source contracts can represent a broad range of services, as illustrated in GAO's contract file sample, which included contracts for construction in Brazil, training of security guards in Iraq, and information technology services in Washington, D.C. In general, acquisition officials at the agencies reviewed told GAO that the option of using ANC firms under the 8(a) program allows them to quickly, easily, and legally award contracts for any value. They also noted that these contracts help them meet small business goals. In reviewing selected large, sole-source 8(a) contracts awarded to ANC firms, GAO found that contracting officials had not always complied with certain requirements, such as notifying SBA of contract modifications and monitoring the percentage of work that is subcontracted. ANCs use the 8(a) program to generate revenue with the goal of providing benefits to their shareholders. These benefits take many forms, including dividend payments, scholarships, internships, and support for elder shareholders. A detailed discussion of the benefits provided by the ANCs is included as appendix X of the report. Some ANCs are heavily reliant on the 8(a) program for revenues, while others approach the program as one of many revenue-generating opportunities. GAO found that some ANCs have increasingly made use of the congressionally authorized advantages afforded to them. One of the key practices is the creation of multiple 8(a) subsidiaries, sometimes in highly diversified lines of business. From fiscal year 1988 to 2005, ANC 8(a) subsidiaries increased from one subsidiary owned by one ANC to 154 subsidiaries owned by 49 ANCs. SBA, which is responsible for implementing the 8(a) program, has not tailored its policies and practices to account for ANCs' unique status and growth in the 8(a) program, even though SBA officials recognize that ANCs enter into more complex business relationships than other 8(a) participants.
Areas where SBA's oversight has fallen short include determining whether more than one subsidiary of the same ANC is generating a majority of its revenue in the same primary industry; consistently determining whether awards to 8(a) ANC firms have resulted in other small businesses losing contract opportunities; and ensuring that the partnerships between 8(a) ANC firms and large firms are functioning in the way they were intended. During our review, SBA officials agreed that improvements are needed and said they are planning to revise their regulations and policies.
The foster care system has grown dramatically in the past two decades, with the number of children in foster care nearly doubling since the mid-1980s. Concerns about children's long stays in foster care culminated in the passage of the Adoption and Safe Families Act (ASFA) in 1997, which emphasized the child welfare system's goals of safety, permanency, and child and family well-being. The Administration for Children and Families (ACF) at the Department of Health and Human Services (HHS) is responsible for the administration and oversight of federal funding to states for child welfare services under Titles IV-B and IV-E of the Social Security Act. In 2000, ACF established a new federal review system to monitor state compliance with federal child welfare laws. One component of this system is the Child and Family Services Review (CFSR), which assesses state performance in achieving the three goals ASFA emphasized. The CFSR process includes a self-assessment by the state, an analysis of state performance in meeting national standards established by HHS, and an on-site review by a joint team of federal and state officials. Two titles under the Social Security Act provide federal funding targeted specifically to foster care and related child welfare services. Title IV-E provides an open-ended individual entitlement for foster care maintenance payments to cover a portion of the food, housing, and incidental expenses for all foster children whose parents meet certain federal eligibility criteria. Title IV-E also provides payments to adoptive parents of eligible foster children with special needs. Special needs are characteristics that can make it more difficult for a child to be adopted and may include emotional, physical, or mental disabilities, emotional disturbance, age, being a member of a sibling group, or being a member of a minority race. Title IV-B provides limited funding for child welfare services to foster children, as well as children remaining in their homes. In federal fiscal year 2001, total Title IV-E spending was $5.6 billion and total Title IV-B spending was $576 million. Two key provisions introduced by ASFA were intended to help states move into safe, permanent placements those foster children who are unable to safely return home in a reasonable amount of time. Under the fast track provision, states are not required to pursue efforts to prevent removal from home or to return a child home if a parent has (1) lost parental rights to that child's sibling; (2) committed specific types of felonies, including murder or voluntary manslaughter of the child's sibling; or (3) subjected the child to aggravated circumstances, such as abandonment, torture, chronic abuse, or sexual abuse. In these egregious situations, the courts may determine that services to preserve or reunite the family—that is, the "reasonable efforts" requirement established in the Adoption Assistance and Child Welfare Act of 1980 (Public Law 96-272)—are not required. Once the court makes such a determination, the state must begin within 30 days to find the child an alternative permanent family or other permanent arrangement. In addition, the Abandoned Infants Assistance Act of 1988, as amended in 1996, requires states to expedite the termination of parental rights for abandoned infants in order to receive priority for certain federal funds.
The second provision requires states to file a petition for termination of parental rights (TPR) with the courts if (1) an infant has been abandoned; (2) the parent committed any of the felonies listed in the fast track provision; or (3) the child has been in foster care for 15 of the most recent 22 months. States may exempt children from this requirement if the child is placed with a relative; the state has not provided services needed to make the home safe for the child's return; or the state documents a compelling reason that filing a TPR is not in the child's best interest. ASFA also contained other provisions to help states focus on the length of time children were remaining in care. For example, ASFA requires states to hold a permanency planning hearing for each child in foster care at least every 12 months, during which the court determines the future plans for a child—for example, whether the state should continue to pursue reunification with the child's family or begin to pursue some other permanency goal. Prior to ASFA, these permanency hearings had been required to occur by the 18th month of a child's stay in care. For those children who will not be reunified with their families, ASFA also requires that the permanency plan document the steps taken to place the child and finalize the adoption or legal guardianship. At a minimum, the permanency plan must document any child-specific recruitment efforts taken to find an adoptive family or legal guardian for a child. In addition, ASFA includes three provisions that are specific to interjurisdictional adoption issues. These provisions (1) require assurances in state plans that a state will not delay or deny the placement of a child for adoption when an approved family is available in a different state or locality, (2) require assurances that the state will develop plans for the effective use of cross-jurisdictional resources to facilitate permanent placements of waiting children, and (3) make ineligible for certain federal funds any state that is found to deny or delay the placement of a child for adoption when an approved family is available in another jurisdiction. ASFA also authorized a new funding source dedicated to adoption-related activities. Prior to ASFA, the Congress established the family preservation and family support program under subpart 2 of Title IV-B of the Social Security Act, authorizing funds to provide two categories of services: family preservation and community-based family support services. ASFA reauthorized the program, renaming it Promoting Safe and Stable Families (PSSF) and adding two new funding categories: adoption promotion and support services and time-limited family reunification services. HHS program instructions specify that states must have a strong rationale for spending less than 20 percent of their PSSF funds on each of the four defined categories. The Congress authorized $305 million for the PSSF program in fiscal year 2001. A research firm found that state expenditures of federal PSSF funds on adoption promotion and support activities increased from $50 million in fiscal year 1999 to $79 million in fiscal year 2001, representing a 58 percent increase. In January 2002, the PSSF program was reauthorized at $305 million for each of fiscal years 2002 through 2006, along with an additional $200 million in discretionary grant funds for each of those years. ASFA created the adoption incentive payment program, which financially rewards states for increasing numbers of finalized adoptions. The states have the flexibility to use the incentive payment funds for any child welfare-related initiative. To benefit from the incentive payment, states must exceed an adoption baseline established for their particular state. The baseline for the initial award year—fiscal year 1998—was each state's average number of finalized adoptions in fiscal years 1995, 1996, and 1997. After fiscal year 1998, a state's baseline is the largest number of finalized adoptions in any previous fiscal year, beginning with fiscal year 1997. States receive a fixed payment of $4,000 for each foster child who is adopted over the baseline and an extra $2,000 for each adopted child characterized as having a special need. States have earned a total of more than $127 million in incentive payments for adoptions finalized in fiscal years 1998, 1999, and 2000.
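The baseline and payment rules just described amount to a simple calculation. The sketch below is purely illustrative and is not drawn from HHS systems or from this report's data; the function name and the sample figures are hypothetical, and the special-needs term follows the report's summary of the payment rules rather than any official formula.

```python
def adoption_incentive_payment(adoptions_by_year, award_year,
                               special_needs_adopted):
    """Illustrative sketch of the ASFA adoption incentive calculation.

    adoptions_by_year     -- dict mapping fiscal year to finalized adoptions
    award_year            -- fiscal year for which the payment is computed
    special_needs_adopted -- adopted children characterized as having a
                             special need (per the report's description)
    """
    if award_year == 1998:
        # Initial baseline: the state's average number of finalized
        # adoptions in fiscal years 1995, 1996, and 1997.
        baseline = sum(adoptions_by_year[y] for y in (1995, 1996, 1997)) / 3
    else:
        # Later baselines: the largest number of finalized adoptions in
        # any previous fiscal year, beginning with fiscal year 1997.
        baseline = max(adoptions_by_year[y] for y in range(1997, award_year))
    over_baseline = max(0, adoptions_by_year[award_year] - baseline)
    # $4,000 for each foster child adopted over the baseline, plus an
    # extra $2,000 for each adopted child with a special need.
    return 4000 * over_baseline + 2000 * special_needs_adopted


# Hypothetical state: 450 adoptions in fiscal year 2000 against a prior
# peak of 400, with 30 of the adopted children having special needs.
# Payment: 4,000 x 50 + 2,000 x 30 = $260,000.
payment = adoption_incentive_payment(
    {1997: 400, 1998: 360, 1999: 380, 2000: 450},
    award_year=2000, special_needs_adopted=30)
```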
ASFA expanded the use of federal child welfare demonstration waivers that allow states to test innovative foster care and adoption practices. In 1994, the Congress gave HHS the authority to establish up to 10 child welfare demonstrations that waive certain restrictions in Titles IV-B and IV-E of the Social Security Act and allow broader use of federal funds. ASFA authorized 10 additional waivers in each year between fiscal years 1998 and 2002 to ensure that more states had the opportunity to test innovations. States with an approved waiver must conduct a formal evaluation of the project's effectiveness and must demonstrate the waiver's cost neutrality—that is, a state cannot spend more in Title IV-B and IV-E funds than it would have without the waiver. Projects generally are to last no more than 5 years. Although funding for this program is scheduled to end in fiscal year 2002, the Congress expects to consider its reauthorization later this year. HHS compiles data on children in foster care and children who have been adopted from state child welfare agencies in the Adoption and Foster Care Analysis and Reporting System (AFCARS). HHS is responsible for collecting and reporting the data and verifying their quality. States began submitting AFCARS data to HHS in 1995. Twice a year, states are required to submit data on the characteristics of children in foster care, foster parents, adopted children, and adoptive parents. Prior to AFCARS, child welfare data were collected in the Voluntary Cooperative Information System (VCIS), operated by what was then called the American Public Welfare Association. Since reporting to VCIS was not mandatory, the data in the system were incomplete. In addition, the data submitted were inconsistent because states used differing reporting periods and definitions for various data elements. The number of annual adoptions has increased since the implementation of ASFA; however, data limitations restrict comparative analysis of other outcomes and characteristics of children in foster care. Foster care adoptions grew from 31,004 in fiscal year 1997 to 48,680 in fiscal year 2000, representing an increase of 57 percent since ASFA was enacted. However, current data constraints make it difficult to determine what role ASFA played in this increase. The lack of reliable and comparable pre- and post-ASFA data at this time limits our ability to analyze how other foster care outcomes or children's characteristics have changed.
For example, reliable pre-ASFA child welfare data are available from the University of Chicago for a limited number of states, but they cannot be matched against the post-ASFA data available from HHS. Current data do, however, provide some information about the characteristics and experiences of foster children after ASFA. For example, children leaving care between 1998 and 2000 spent a median of approximately 1 year in care. Of these children, those who were adopted spent more time in foster care—a median of approximately 3 1/2 years. Children most frequently returned home after a stay in care, but about 33 percent of those children re-entered foster care within 3 years. Adoptions from state child welfare foster care programs have increased nationwide by 57 percent since ASFA was implemented, while changes in other outcomes are less clearly discernible. Adoptions began to increase prior to the enactment of federal child welfare reforms (see fig. 1). For example, between 1995 and 2000, annual adoptions of children in foster care increased by 89 percent from approximately 26,000 to nearly 49,000. Adoptions generally increased between 8 percent and 12 percent from year to year between 1995 and 2000, except in 1999, when they increased by 29 percent over 1998 adoptions. This increase in overall adoptions of children in foster care was accompanied by a parallel increase in the adoptions of children with special needs. In at least one of the 3 years following the implementation of ASFA, all states increased their adoptions over the average number of adoptions finalized between 1995 and 1997. This average represents the baseline established for each state for participation in the adoption incentive program. A comparison of the states' baselines with their average number of adoptions for the 3 years following ASFA shows that 10 states at least doubled the annual average number of adoptions between 1998 and 2000 (see table 1). The role ASFA played in the increase in adoptions after 1997, however, is unclear. Similarly, whether the number of foster children being adopted will continue to rise in the future is unknown. While ASFA may have contributed to the adoptions of these children, other factors may have also played a role. For example, HHS officials have noted that earlier state child welfare reform efforts may be linked to the observed increase in adoptions. To better understand why adoptions have increased and to evaluate ASFA's impact, HHS has asked the University of Chicago to use its data from several states to track groups of foster children over time to determine whether the percentage of children adopted from foster care has changed and whether adoptions occur more quickly now. Since it can take several years for foster children to be adopted, and ASFA has only been in existence for a few years, evidence of ASFA's effect may not be available for some time. ASFA's effect on other foster care outcomes, such as family reunifications, is also difficult to determine. Lack of comparable and reliable data on foster children, before and after ASFA, makes it difficult to know how ASFA has affected the child welfare system. While HHS officials report that some data are reliable and can provide a picture of children in foster care post-ASFA, they state that the child welfare data covering pre-ASFA periods are not reliable. According to HHS data specialists, the early data available suffered from problems such as low response rates and data inconsistencies.
Since 1998, however, HHS data specialists have observed marked improvements in the data submitted to HHS by states and attribute the changes to several factors. These factors include the provision of federal technical assistance to the states on data processing issues and the use of federal financial penalties and incentives. HHS data specialists also note that states are improving their data in response to the use of outcome measures in the Child and Family Services Reviews and the annual publication of child welfare outcomes for each state. According to HHS, these data improvements make it impossible to determine whether observed changes in outcomes from one year to the next are the result of changes in data quality or changes in state performance. HHS expects that the data will stabilize over time and can eventually be used as a reliable measure of state performance. Although HHS cannot provide reliable pre-ASFA data, research conducted at the University of Chicago provides reliable pre-ASFA information on some foster care outcomes for 12 states. However, the University of Chicago's pre-ASFA data cannot be compared with HHS's post-ASFA data. Unlike other child welfare data sources that collect periodic data on children in care, the University of Chicago's system follows all of the children entering foster care in an individual year and collects data on them until they leave care, in order to determine their final outcomes. This approach provides accurate information on the experiences of all foster children over time and does not overrepresent the experiences of certain children, such as those who stay in care for extended periods of time. However, the use of this different measurement technique prevents comparisons of the University of Chicago's pre-ASFA data with HHS's post-ASFA data. Although pre-ASFA data are limited and more time is needed to determine how ASFA has affected the child welfare system, current data do shed some light on the characteristics and experiences of the more than 741,000 children who exited foster care between 1998 and 2000. According to HHS data for this time period, children left foster care after a median length of stay of approximately 1 year. Prior to leaving foster care, children typically lived in one or two foster care placements, and a very small portion of them were abused or neglected by their foster care providers. Most foster children reunified with their families; however, approximately 33 percent of the children who went home to their families in 1998 subsequently returned to foster care within 3 years, for reasons such as additional abuse and neglect at home. A smaller percentage of children left foster care through adoption. The majority of children adopted from foster care were under age 12 and classified as having special needs. Limited evidence suggests that few adopted children returned to the child welfare system. About half of the children leaving foster care exit within 1 year; however, the data show slight changes in the length of children's stays during 1998-2000 (see table 2). In 1998, the median length of stay for children exiting care was 11 months—by 2000, it had risen to 12 months. In contrast, the median length of stay for adopted children dropped from 43 months in 1998 to 39 months in 2000 (see table 3). Determining whether these shifts represent real changes in the amount of time children spend in foster care or whether they simply reflect the recent improvements in HHS data is difficult.
Twenty-three states reported in our survey that, in fiscal year 2000, adopted children spent an average of 18 months living with the family that eventually adopted them prior to their adoption being finalized. The amount of time children spend in foster care varies from state to state. For example, in 2000 the median length of stay for children exiting care in Delaware was about 5 months, while in Illinois it was close to 4 years. Differences in state foster care stays may be linked to child welfare practices. For example, higher adoption rates can play a role in increasing the median length of stay figures, since adopted children stay in foster care for longer periods of time. Conversely, higher reunification rates can play a role in decreasing the median length of stay, since reunified children spend less time in foster care. In Delaware, most children who left care reunified with their families and only a small percentage were adopted. In contrast, Illinois had lower reunification rates and one of the highest yearly adoption rate averages in the country. Illinois officials explained that they work extensively with families to prevent the need for foster care and only bring children into care when these efforts have failed. Consequently, although data are not available, Illinois officials believe that children in their care are less likely to reunify with their families than foster children in states that may not work as extensively with families before removing children from their homes. Before they leave foster care, most children live in one or two different placements (see table 4). Many children have only one placement during their foster care stay, but a few experience five or more placements. Adopted children tend to experience a greater number of foster care placements (see table 5). Adopted children may have more foster care placements than other children, in part, because of their longer foster care stays. According to some researchers, children experience more placements the longer they are in foster care. During their foster care stays, a small percentage of children are abused or neglected by their caregivers. Our survey results indicate that the median percentage of children abused or neglected while in foster care during 1999 and 2000 was 0.60 percent and 0.49 percent, respectively. Maltreatment rates in foster care in 2000 ranged from a high of 2.74 percent in the District of Columbia to a low of 0.02 percent in Nebraska. On average, less than one-third of the children in foster care exit each year. Children exit foster care in a number of ways, including reunifying with their families, being adopted, being emancipated, or entering a guardianship arrangement (see table 6). Upon leaving foster care, most children returned home to the families they had been living with prior to entering foster care. However, some of these children re-entered foster care for various reasons, such as additional abuse and neglect by their families (see table 7). Although most children reunify with their families, the second most common way of exiting foster care is through adoption. The children adopted from foster care have a wide variety of characteristics, yet the data indicate some general themes. Most children adopted from foster care have at least one special need that may make placing a child with an adoptive family challenging.
On average, 85 percent of the children adopted in 1998, 1999, and 2000 were classified as having at least one special need that would qualify them for adoption subsidies under Title IV-E. Eighteen states reported in our survey that, on average, 32 percent of the children adopted from foster care in 2000 had three or more special needs. In addition, according to HHS data, children adopted from foster care are equally likely to be male or female, slightly more likely to be black (see table 8), and much more likely to be under age 12 (see table 9). The gender and race/ethnicity distributions of children in foster care are similar to those for children who are adopted. However, the age distribution differs between the two groups of children. For example, in 1999, approximately 46 percent of the children in foster care were 11 years or older.

As noted for other outcomes, the lack of reliable and national pre-ASFA data makes it difficult to determine whether the rate at which adoptions encountered problems has changed since ASFA was enacted. However, limited data suggest that problems occur in a small percentage of foster care adoptions. According to our survey, about 5 percent of adoptions planned in fiscal years 1999 and 2000 disrupted prior to being finalized. States also reported that approximately 1 percent of adoptions finalized in these years legally dissolved at a later date and that about 1 percent of the children who were adopted in these years subsequently re-entered foster care. However, little time has elapsed since these adoptions were finalized and some of these adoptions may legally dissolve at a later date. HHS data similarly indicate that about 1 percent of the children entering foster care each year have previously been adopted. States reported in our survey that adopted children return to foster care for different reasons, including abuse or neglect by their adoptive families, behavior problems that are too difficult for their adoptive families to handle, or a child's need for residential care.

While few states were able to provide data on the numbers of children affected by ASFA's fast track and 15 of 22 provisions, some reported on circumstances that make it difficult to use these provisions for more children. In addition, HHS collects very little data on the use of these provisions. Data from four states that provided fast track data in response to our survey indicate that they do not use this provision frequently. Officials at our site visits told us that they use the fast track provision for a small number of children, primarily those who have experienced serious abuse or whose parents had involuntarily lost parental rights for other siblings. However, they described several court-related issues that make it difficult to fast track more children, including court delays and a reluctance on the part of some judges to relieve the state from reunification efforts. Survey responses from the few states that provided data on the 15 of 22 provision indicate that these states do not file TPRs for many children who are in care for 15 months. Officials at the six states we visited believe that ASFA's 15 of 22 time standard has helped them make more timely permanency decisions, but reported that they exempt many children from this requirement for a number of reasons, including difficulties in finding adoptive parents.

Few states were able to provide data on their use of the fast track provision in response to our survey, and HHS does not collect these data from the states.
As a result, we do not have sufficient information to discuss the extent to which states are using this provision. As shown in table 10, the data from a handful of states suggest the infrequent use of fast track. In fiscal year 2000, for example, about 4,000 children entered the child welfare system in Maryland, but only 36 were fast tracked.

Child welfare officials in the six states we visited told us that they used ASFA's fast track provision for a relatively small number of cases. Three states indicated that they fast tracked abandoned infants, while four states reported using fast track for cases involving serious abuse, such as when a parent has murdered a sibling; however, some state officials also noted that few child welfare cases involve these circumstances. In addition, five states reported that they would fast track certain children whose parents had involuntarily lost parental rights to previous children if no indication exists that the parents have addressed the problem that led to the removal of the children.

Officials in five of the states we visited described several court-related issues that hindered the greater use of the fast track provision. However, because of the lack of data on states' use of fast track, we were unable to determine the extent of these problems. Officials in these states told us that some judges or other legal officials are at times reluctant to approve a state's fast track request. According to officials in Massachusetts, North Carolina, and Maryland, some judges believe that parents should always be given the opportunity to reunify with their children. According to child welfare staff for a county in North Carolina, the courts had recently denied the county's request to fast track several cases and ordered the county to provide services to the families involved. In one case, a judge approved a fast track request involving a child who had suffered from shaken baby syndrome, but refused a similar request for a sibling who was born a few months after the shaking episode. County staff stated that the judge's decision was based on the fact that the parents had not hurt the newborn and should be given an opportunity to demonstrate their ability to care for this child.

Three states we visited described other court problems related to the fast track provision. For example, state officials in North Carolina told us that delays in scheduling TPR trials in the state undermine the intent of fast tracking. They noted that the agency may save time by not providing services to a family, but the child may not be adopted more quickly if it takes 12 months to schedule the TPR trial. Officials in Massachusetts expressed similar concerns about court delays experienced in the state when parents appeal a court decision to terminate their parental rights. Finally, a Massachusetts official explained that the state is cautious about using the fast track provision due to concerns that not providing services to parents could undermine their TPR case. According to a Massachusetts official, the state obtained court approval to fast track a case, but subsequently lost the TPR trial in part because the judge found that the parents did not receive services to help them reunify with their child.

Other difficulties in using fast track to move children out of foster care more quickly are related to the specific categories of cases that are eligible to be fast tracked.
Officials in five states told us that they look at several factors when considering the use of fast track for a parent who has lost parental rights for other children. In some of these cases, a different birth father may be involved. Child welfare officials told us that they are obligated to work with the father to determine if he is willing and able to care for the child. According to Maryland officials, if the agency is providing services to the father to facilitate reunification, pursuing a fast track case for the mother will not help the child leave foster care more quickly. In addition, child welfare officials in Massachusetts and Illinois emphasized that a parent who has addressed the problems that led to a previous TPR should have an opportunity to demonstrate the ability to care for a subsequent child. For example, they would not necessarily fast track a substance-abusing mother who lost custody of a previous child if she has engaged in treatment and addressed her parenting issues.

Regarding the fast track category involving parents who have been convicted of certain felonies, child welfare officials in Massachusetts and Texas described this provision as impractical due to the time it takes to obtain a conviction. Massachusetts officials told us that, in most cases, the children are removed at the time the crime is committed and judges will not approve the fast track in these cases until the parent is actually convicted, which is usually at least a year after the actual crime. As a result, the state must provide services to reunify the family until further evidence of the parent's unfitness is documented. Finally, in Massachusetts, Texas, and Maryland, officials reported that it can be difficult to prove that a parent subjected a child to aggravated circumstances, such as torture or sexual abuse. According to these officials, the time and effort to go through additional court hearings to demonstrate the aggravated circumstances are not worthwhile; instead, the child welfare agency chooses to provide services to the family.

In response to our survey, three states provided information about why they did not fast track cases that fell into one of the fast track categories, citing reasons that were similar to those reported by our site visit states. For example, Minnesota estimated that in 25 percent of the cases, the state was working to reunify the children with the noncustodial parent. In an additional 25 percent of the cases, the court did not approve the state's request to fast track the case. In the remaining cases, Minnesota did not consider fast track to be in the child's best interests. A Minnesota official explained that in certain circumstances, the agency would try to reunify a family, even if the parents had subjected the child to aggravated circumstances or lost custody of a previous child. For example, if a parent assaulted a child resulting in a broken bone—which would be considered aggravated circumstances under Minnesota law—the agency might not consider a TPR to be in the child's best interests if the assault was a single incident for which the parent accepted responsibility and the child has otherwise had a positive relationship with his or her parent. In addition, the state might not fast track a child born to a mother who had lost custody of a previous child, if the TPR occurred years before and the mother's circumstances had since improved.

Most states do not collect data on their use of ASFA's 15 of 22 provision.
In response to our survey, only nine states were able to provide information on the number of children for whom the state filed a TPR due to the 15 of 22 provision or the number of children who were exempted from this provision. In addition, HHS does not systematically track these data. As part of its Child and Family Services Reviews (CFSR), HHS collects some limited information on the 15 of 22 provision. Specifically, HHS asks each state to discuss its compliance with the 15 of 22 provision and directly assesses compliance during its on-site review of a limited number of case records, if the case under review involves a child who has been in care for 15 months.

For most of the states that provided data on their use of the 15 of 22 provision in response to our survey, the number of children exempted from the provision greatly exceeded the number of children to whom it was applied (see table 11). For example, while Oklahoma filed over 1,000 TPRs primarily because the child had been in foster care for 15 of the most recent 22 months, it did not file a TPR for an additional 2,900 children. Similarly, in 1999, we reported on states' efforts to review all children who were already in foster care for 15 months when ASFA was enacted to determine if a TPR should be filed or to document an exemption if a TPR was not appropriate, as required by ASFA. The 12 states that had data reported exempting 60 percent of the children they reviewed.

Officials in all six site visit states told us that establishing specific timeframes for making permanency decisions about children in foster care has helped their child welfare agencies focus their priorities on finding permanent homes for children more quickly. Two of the states we visited—Texas and Massachusetts—created procedures prior to ASFA to review children who had been in care for a certain length of time and decide whether continued efforts to reunify a family were warranted. Other states had not established such timeframes for making permanency decisions before the 15 of 22 provision was enacted. The director of one state child welfare agency told us that, prior to ASFA, the agency would work with families for years before it would pursue adoption for a child in foster care. In response to ASFA's requirement to hold permanency hearings every 12 months for children in foster care, five of the states we visited emphasized that they now try to make decisions about a child's permanent placement by the time the child has been in care for 12 months. The director of one state child welfare agency noted that the 15 of 22 provision does not fit well with other child welfare timeframes—he stated that having more frequent permanency hearings would force states to make more timely decisions and would be less administratively awkward to implement.

Officials in Oregon, Maryland, and North Carolina stated that the pressure of these new timeframes has helped child welfare staff work more effectively with parents, informing parents up front about what actions they have to take in the next 12 to 15 months in order to reunify with their children. Conversely, private agency staff in three states expressed concern that pressure from these timeframes could push the child welfare agency and the courts to make decisions too quickly for some children.
In one state, for example, staff with a private agency that recruits adoptive families for the state were worried that making decisions so quickly may lead to more children re-entering foster care after being adopted or reunified with their families.

Child welfare officials in the six states we visited described several circumstances under which they would not file a TPR on a child who was in care for 15 of the most recent 22 months. In five of the six states, these officials told us that the provision is difficult to apply to children with special needs for whom adoption may not be a realistic option, such as adolescents and children with serious emotional or behavioral problems. Officials from Maryland and North Carolina reported that, in many cases, the child welfare agency exempts these children from the provision because either the agency or the courts do not consider it to be in their best interest to be legal orphans—that is, to have their relationship to their parents legally terminated, but have no identified family ready to adopt them. State officials in Oregon told us that state law requires that parental rights be terminated solely for the purpose of adoption, so as to avoid creating legal orphans. Officials in other states said that while the child welfare agency would like to pursue a TPR, some courts are not willing to do so unless a potential adoptive family has been identified for the child.

Officials in four states noted that many adolescents remain in long-term foster care. In some cases, they have strong ties to their families, even if they cannot live with them, and will not consent to an adoption. In other cases, the teenager is functioning well in a stable situation with a relative or foster family that is committed to the child but unwilling to adopt. For example, officials in a child welfare agency for a county in North Carolina told us about a potentially violent 16-year-old foster child who had been in a therapeutic foster home for 10 years. The family was committed to fostering the child, but did not want to adopt him because they did not have the financial resources to provide for his medical needs and because they did not want to be responsible for the results of his actions. Similarly, four states reported difficulties in recruiting adoptive families for children with severe behavioral or medical problems who will require long-term treatment in a residential facility. State officials in Massachusetts told us that some of these children have such severe problems that they are not ready to live in a family setting. Staff in a county child welfare office in North Carolina told us that mentally ill children whose parents voluntarily place them in state custody because they cannot afford the residential services their children require are generally exempted from the 15 of 22 provision.

In Illinois, child welfare staff told us that some parents need a little more than 15 months to address the problems that led to the removal of their children. If the child welfare agency is reasonably confident that the parents will be able to reunify with their children in a few months, the agency will not file a TPR for a child who has been in foster care for 15 months. Similarly, staff in a county child welfare office in North Carolina told us about two cases involving adolescent children in long-term foster care who became pregnant and had a child while in foster care. In these cases, the adolescent mothers remained in foster care with their children.
Staff explained that these young mothers needed more than 15 months to be able to parent their children independently, given their own troubled pasts. As long as the mothers were making reasonable progress in parenting their children, the state would not file a TPR on these infants even though they were in foster care for more than 15 months. One of the mothers had recently reunified with her child, now 2 years old, and was expected to regain legal custody of the child shortly.

Child welfare officials in four states observed that parents must have access to needed services, particularly substance abuse treatment, soon after a child enters care in order for the child welfare system to determine if reunification is a realistic goal by the time a child has been in care for 15 months. Officials in Texas, Oregon, and Maryland reported that the lack of appropriate substance abuse treatment programs that address the needs of parents makes it difficult to get parents in treatment and stable by the 15th month. Juvenile court judges in Massachusetts and Oregon told us that they would not necessarily pursue a TPR when a child has been in care for 15 of the most recent 22 months if parents are engaged in substance abuse treatment and showing progress toward reunification.

State officials in Massachusetts, North Carolina, and Maryland noted that delays in scheduling TPR trials and delays in hearing appeals of TPR decisions can undermine the use of the 15 of 22 provision to achieve permanency for children in a timely manner. For example, Massachusetts officials noted that appeals of TPR decisions face significant delays—simply scheduling the appeal trial can take a year.

In response to our survey, a few states provided explanations regarding why they did not file a TPR on children who had been in care for 15 of the most recent 22 months. The reasons reported by seven states were similar to those reported during our site visits, although they varied significantly among the seven states (see table 12). For example, the District of Columbia estimated that it did not file a TPR for about 60 percent of the children who were in care for 15 months because the state expected that these children would soon be reunified with their parents. In contrast, Rhode Island reported that 600 children were in care for 15 months without having a TPR filed and estimated that 67 percent of them were adolescents with permanent plans of either independent living or long-term foster care.

States reported in our survey that they most commonly used their adoption incentive payments and PSSF adoption promotion and support services funds to recruit adoptive parents and to provide post adoption services. For example, Arizona has used its incentive payments to fund performance-based contracts that reward agencies for finding adoptive families for groups of siblings, children aged 10 or older, and children from minority groups. Utah, on the other hand, has used its PSSF funds to sponsor a post adoption Web site for adoptive families. In addition to recruitment and post adoption services, we found that states have spent these ASFA funds on a variety of other child welfare activities, including hiring and training social workers. Our survey results on states' use of new adoption-related funds mirror findings from a recent study, which found that the top two uses of incentive payments were for the recruitment of adoptive families and the provision of post adoption services (see table 13).
For example, states are using ASFA's adoption-related funds to pursue a variety of activities to recruit adoptive parents. Child welfare officials in all of the states we visited reported that they are struggling to recruit adoptive families for older children and those with severe behavioral or medical problems. To meet this challenge, states are investing in activities designed to match specific foster children with adoptive families, as well as general campaigns to recruit adoptive families. Child-specific recruitment efforts include featuring children available for adoption on television, hosting matching parties for prospective adoptive parents to meet children available for adoption, and taking pictures and videos of foster children to show to prospective families. Massachusetts used its incentive payments to fund recruitment videos to feature the 20 children who had been waiting the longest for adoptive families, while Nebraska used its incentive funds to improve the profiles of waiting children on its state Web site. General recruitment efforts being funded by states include promoting adoption through National Adoption Month events, hiring additional recruiters, and partnering with religious groups. For example, Maryland has used its PSSF funds to partner with faith-based organizations to recruit adoptive families primarily for black children, while Colorado used its incentive payments to hire a public relations firm to develop a campaign to recruit minority parents. According to our survey results, 18 states are using PSSF funds to create or expand both child-specific recruitment efforts and general recruitment programs.

States are also investing adoption incentive payments and PSSF funds in services to help adoptive parents meet the challenges of caring for children who have experienced abuse and neglect. Adoptive parents sometimes have difficulties managing the emotional and behavioral problems of children from foster care. Some researchers believe that post adoption services may help stabilize these adoptive families. However, available research on post adoption services is largely descriptive, with little information on the effectiveness of such services. During our site visits, officials in Massachusetts and Illinois pointed out that the population of adopted children had increased significantly in recent years and that the availability of post adoption services was essential to ensure that these placements remain stable.

Approximately 60 percent of the states responding to our survey used their adoption incentive payments, their PSSF funds, or both for post adoption services. Our survey results show that 21 states used PSSF dollars to initiate or expand post adoption counseling and support groups. In addition, 20 states reported using PSSF dollars to create or expand services to preserve adoptions and help adoptive families maintain their new relationships. Thirteen states also reported that they are providing respite services with PSSF adoption promotion and support services dollars. In addition to these core post adoption services, some states noted both in our survey and in other reports that they are providing a range of other services to adoptive families, including information and referral networks, mentoring, and recreational opportunities. For example, California has used some of its adoption incentive funds to pay for therapeutic camps and tutoring sessions for adopted children.
In addition, Minnesota has used PSSF funds to teach adoptive parents how to care for children with fetal alcohol syndrome and children who find it difficult to become emotionally attached to caregivers.

Although the 46 states responding to our survey reported that they are most frequently using the money for the activities described above, over two-thirds of them also reported that they are investing some of these funds in other services. Many states are using PSSF funds to provide preadoptive counseling to help children and parents prepare for the emotional challenges of forming a new family. Similarly, some states are using incentive payments and PSSF funds to train foster families, adoptive families, and service providers. For example, Arkansas used incentive money to help families attend an adoptive parent conference, and Nevada used PSSF dollars to fund an adoption-training curriculum in Spanish. Likewise, Montana used incentive payments to provide adoption training to therapists who agree to provide services to children in foster care. Kentucky, on the other hand, has used incentive funds to train judges and attorneys on adoption matters.

In addition, we found that some states are taking advantage of the flexibility allowed in the use of adoption incentive payments to increase the number of people working on child welfare cases. During our site visit to Oregon, child welfare officials told us that the lack of legal resources has inhibited the state's ability to quickly pursue court cases against birth parents to terminate their parental rights and thereby free a child for adoption. To address this issue, Oregon has used its adoption incentive payments to contract for additional lawyers to litigate these cases. According to our survey results, 6 states have used the incentive payments to hire or contract additional legal staff and 13 states have used these funds to hire or contract additional social workers.

Noting that state adoption numbers may level off in the future, a recent report questioned the sustainability of investments made with adoption incentive payments. Similarly, three states we visited told us that they did not believe they would continue to increase adoption levels and would therefore not earn future incentive payments, and one of these states had limited its use of incentive funds to one-time, nonrecurring expenses.

States have been developing a range of practices to address long-standing barriers to achieving permanency for children in a timely manner—many of which have been the subject of our previous reports. Both independently and through demonstration waivers approved by HHS, states are using a variety of practices to address barriers relating to the courts, the recruitment of adoptive families for children with special needs, the placement of children in permanent homes in other jurisdictions, and the availability of needed services. For example, with a demonstration waiver, Maryland is testing whether the provision of comprehensive and coordinated drug treatment services to parents will improve their access to services and reduce the length of time their children spend in foster care. Because few of these practices have been rigorously evaluated, however, limited information is available on their effectiveness.

Our previous work, all the states we visited, and over half of our survey respondents identified problems with the court system as a barrier to moving children from foster care into safe and permanent homes.
For example, 29 states reported in our survey that the child welfare system did not have enough judges or court staff, 28 reported that not enough training was available for judges or other court personnel, and 23 reported the existence of judges who were not supportive of ASFA's goals. In 1999, we reported on systemic problems that hinder the ability of courts to produce timely decisions on child welfare cases that meet the needs of children. The barriers included inadequate numbers of judges and attorneys to handle large caseloads, the lack of cooperation between the courts and child welfare agencies, and insufficient training of judges and attorneys involved in child welfare cases.

During our visit to Massachusetts, state officials told us that the courts experienced significant delays in court hearings and appeals due to a lack of court resources. As an alternative to court proceedings, Massachusetts implemented a permanency mediation program—a formal dispute resolution process in which an independent third party facilitates permanency planning between family members and potential adoptive parents in a nonadversarial setting. Three other states we visited—Texas, Oregon, and Maryland—have implemented similar mediation programs. Massachusetts officials reported that, by avoiding trials to terminate parental rights, permanency mediation helps reduce court workloads and makes more effective use of limited court resources. In addition, they told us that the mediation process eliminates appeals because the birth parents and the adoptive parents reach a joint permanency decision that both parties can accept. For example, an open adoption between the birth and adoptive parents is a common outcome of permanency mediation, allowing the birth parents to continue some type of relationship with their child after adoption. A preliminary evaluation of the Massachusetts program suggested that cases involved in the mediation program needed less time and fewer court resources to reach an agreement than cases that go to trial. However, the evaluation did not directly compare outcomes, such as the length of time a child spent in foster care, for mediation and nonmediation cases.

To improve collaboration between child welfare and court staff, two states we visited developed ongoing committees to address barriers to achieving permanency for children in foster care. For example, Massachusetts created a committee composed of staff from the courts, the Attorney General's office, and the child welfare agency to identify and address court delays affecting child welfare cases. This committee has studied delays in the process for appealing child welfare decisions and has implemented several changes to streamline the process. Illinois has several ongoing committees composed of court and child welfare agency staff to address a variety of legal barriers that delay the placement of a child in a safe and permanent home.

Texas officials identified court barriers in rural areas that negatively affect both the timeliness and quality of child welfare proceedings—specifically, the lack of court time for child welfare cases and the lack of judges with training and experience in child welfare issues. In response to these barriers, Texas developed the visiting judge cluster court system, an approach in which a judge trained in child welfare issues is assigned to a cluster of rural counties. The judge travels from county to county presiding over all child welfare cases.
This approach can create more court time in rural areas and allows knowledgeable and experienced judges to make the best possible decisions for children in foster care. While Texas officials believe this approach has been helpful in moving children to permanency, no formal evaluation of the approach has been conducted.

Officials in five states we visited, along with the majority of the respondents to our state survey, reported that difficulties in recruiting families to adopt children with special needs are a major barrier to achieving permanent placements for these children. The National Center for Resource Family Support notes that the lack of foster and adoptive families to meet the needs of children in care is one of the biggest challenges facing child welfare agencies across the nation. In Texas and Illinois, social work staff and state officials noted that the children currently in foster care are older and have more severe problems, making it increasingly difficult to find adoptive homes for the children in care.

Our survey revealed that states relied on three main activities to recruit adoptive families for children who are waiting to be adopted: listing a child's profile on state and local Web sites, exploring adoption by adults significantly involved in the child's life, and featuring the child on local television news shows. Other recruitment efforts cited by the states we visited included profiling children in need of adoptive families in local newspapers, holding regular meetings during which social workers across the state exchange information on children in their communities who need an adoptive family and local families available to adopt, and holding adoption parties during which children available for adoption are introduced to families who are waiting to adopt a child. In Massachusetts, the child welfare agency established a successful collaboration with a local company that sponsored adoption fairs for children with special needs, donated space for meetings, and provided advice on effective marketing techniques. In Illinois and Maryland, staff use databases to match children who have a goal of adoption with families waiting to adopt a child.

Several states we visited are also using recruitment campaigns targeted to particular individuals who may be more likely to adopt children with special needs. However, a report on recruitment efforts in Illinois noted that little information exists on what kinds of families are likely to adopt children with specific characteristics. The child welfare agencies in Maryland, North Carolina, Texas, and Illinois are collaborating with local churches to recruit adoptive families specifically for minority children. In addition, Illinois conducted a recruitment campaign at local hospitals to identify adoptive families for children with complex medical needs; however, of the 14 children adopted as a result of the campaign, only one had a complex medical need. While the states we visited used a variety of recruitment efforts to find families for special needs children, they generally did not collect data on the effectiveness of their recruiting efforts.

During our site visit, Illinois social workers discussed the importance of consulting with people involved in a child's life, such as coaches and teachers, to identify those who might be interested in adopting a child. However, the Illinois recruitment report found that many adoption workers did not have the experience or skills to carry out such child-specific recruitment activities effectively.
To address this, the state has established a training program for social workers on specialized recruitment activities.

ASFA requires states to document the individualized recruitment efforts undertaken for a child waiting for an adoptive family. The states we visited used several documentation methods, such as making notes in a child's case record, using tracking forms, and using computerized databases that document all actions taken on a child's case. For example, state officials in Oregon recently created a new document that social workers must use to record efforts made to recruit adoptive families for foster children. In Massachusetts, if a child has a goal of adoption and no identified adoptive family, the social worker is required to submit an electronic referral form within a specified timeframe to the regional recruitment office.

In addition to the activities described above, some demonstration waivers are testing different approaches to finding permanent homes for children in foster care. Seven states are using demonstration waivers to pay subsidies to relatives and foster parents who become legal guardians to foster children in their care. These states hope to reduce the number of children in long-term foster care by formalizing existing relationships in which relatives or foster parents are committed to caring for a child but adoption is not a viable option. For example, older children may not consent to an adoption because they still have a relationship with their parents who are unable to care for them. In other cases, a grandmother may be committed to caring for her grandchildren, but may not want to be involved in terminating the parental rights of her child. Evaluation results from Illinois's waiver suggest that offering subsidized guardianship can increase the percentage of children placed in a permanent and safe home. Results from most of the other guardianship waiver projects are not yet available.

Texas is using a waiver to test a new strategy for placing children in adoptive homes, with a goal of recruiting more prospective adoptive families and increasing the percentage of children with a filed or approved TPR who are placed in adoptive families. Texas hopes to better match children and families and improve the stability of these placements by providing training for potential adoptive families and having mental health professionals assess the child's readiness to bond with a family and the family's ability to meet the emotional needs of the child. This project was implemented in 2001 and preliminary evaluation results are expected by the end of 2003.

Many states encounter long-standing barriers in placing children with adoptive families in other states and across jurisdictions within the same state. As we reported previously, these interjurisdictional adoptions take longer and are more complex than adoptions within the same child welfare jurisdiction. Interjurisdictional adoptions involve recruiting adoptive families from other states or other counties within a state, conducting comprehensive home studies of adoptive families in one jurisdiction, sending the resulting home study reports to another jurisdiction, and ensuring that all required legal, financial, and administrative processes for interjurisdictional adoptions are completed. Five states we visited reported frequent delays in obtaining from other states the home study reports necessary to place a child with a potential adoptive family in another state.
According to recent HHS data, children adopted by out-of-state families typically spend about 1 year longer in foster care than children adopted by in-state families.

Child welfare agencies have implemented a range of practices to facilitate adoptions across state and county lines. In our survey, the most common practices for recruiting adoptive families in other jurisdictions in fiscal year 2000 included publicizing profiles of foster children on Web sites, presenting profiles of children in out-of-state media, and contracting with private agencies to recruit adoptive parents in other states. The majority of states using these strategies rated them as very or somewhat effective.

States have also developed practices to expedite the completion of home studies and shorten the approval processes for interstate adoptions. The two primary practices cited by states on our survey were working with neighboring states to facilitate interstate placements and contracting with private agencies to conduct home studies in other states. Other practices cited by a smaller number of states include increasing the number of staff to work on and approve interstate placements, using home study forms similar to the ones used in other states, and developing agreements with other states to allow social workers to perform home studies across state lines. In rating these practices, states reported in our survey that increasing the number of staff was the most effective strategy and that using common home study forms was the least effective.

States we visited have implemented several of these practices to overcome barriers to interjurisdictional adoptions. In Oregon, the state child welfare agency works with neighboring states in the Northwest Adoption Exchange to recruit adoptive parents for children with special needs. In Texas, the state contracts with private agencies to place foster children with out-of-state adoptive families. In Illinois, the state works with a private agency in Mississippi to conduct home studies because many Illinois children are adopted by families in Mississippi.

Officials in four states told us that making decisions about a child's permanent home within a year is difficult if parents have not had access to the services necessary to address their problems, particularly substance abuse treatment. We have previously reported on barriers to working with parents who have a substance abuse problem, including inadequate treatment resources and a lack of collaboration between substance abuse treatment providers and child welfare agencies. Similarly, 33 states reported in our survey that the lack of substance abuse treatment programs is a barrier to achieving permanency for children. To address this issue, four states have developed waiver projects to address the needs of parents with substance abuse problems. By testing ways to engage parents in treatment and to provide more supportive services, these states hope to increase the number of substance-abusing parents who engage in treatment, increase the percentage of children who reunify with parents who are recovering from a substance abuse problem, and reduce the time these children spend in foster care. For example, Delaware's waiver funds substance abuse counselors to help social workers assess potential substance abuse problems and engage parents in treatment.
The final evaluation report, published in March 2002, concluded that while the project did not achieve many of its intended outcomes, one-third of families in the project were effectively linked to substance abuse treatment, foster children in the project spent 14 percent less time in foster care than similar non-waiver children, and total foster care costs were reduced. Interim evaluation results for two of the other states are expected by the end of 2002. The fourth state will not have interim results until 2004.

Two states we visited are working to improve access to services and collaboration among service providers through a collaborative approach called family group conferencing. Oregon law requires the child welfare agency to consider holding a family conference within the first 30 days a child is in care. At these meetings, parents, relatives, child welfare agency staff, and other professionals, such as therapists, work together to develop appropriate plans that address the child's need for safety and permanency and to ensure that the family has access to services needed to implement the permanency plan. North Carolina uses similar meetings for children who are at risk of being placed in foster care, during which the child's birth family, relatives, and other involved adults develop plans for protecting the child; these plans must be approved by the child welfare agency. In both states, the goal of these meetings is to empower families to participate in the planning process for their children and to foster cooperation and communication between families and child welfare professionals. While North Carolina officials believe the family conferences have been useful, little data exists to demonstrate whether children who are the subjects of these meetings have better outcomes than other children in the child welfare system. Several states, including North Carolina, have incorporated family group conferencing into their waiver projects and may produce some information on the effectiveness of this approach.

Most of the states we visited reported that ASFA has played an important role in helping them focus on achieving permanency for children within the first year that they enter foster care. However, numerous problems with existing data make it difficult to assess at this time how outcomes for children in foster care have changed since ASFA was enacted. While an increasing number of children have been placed in permanent homes through adoption during the last several years, we know little about the role ASFA played in the adoption increases or about other important outcomes, such as whether children who reunify with their families are more or less likely to return to foster care or whether recent adoptions are more or less stable than adoptions from previous years.

The availability of reliable data, both on foster care outcomes and on the effectiveness of child welfare practices, is essential to efforts to improve the child welfare system. In the past few years, HHS and the states have taken important steps to improve the data available to assess child welfare operations. In addition, evaluation data from the demonstration waivers should be available in the next few years, providing key information on child welfare practices that are effective and replicable. However, important information about ASFA's impact on children in foster care is still unavailable.
For example, the lack of comprehensive and consistent data regarding the fast track and 15 of 22 provisions makes it difficult to understand the role of these new provisions in reforming the child welfare system and moving children into permanent placements.

To obtain a clearer understanding of how ASFA's two key permanency provisions are working, we recommend that the Secretary review the feasibility of collecting data on states' use of ASFA's fast track and 15 of 22 provisions in the most cost-effective way. Information, such as the number of children exempted from the 15 of 22 provision and the reasons for the exemptions, could help HHS better target its limited resources to key areas where the states may need assistance in achieving ASFA's goals.

We obtained comments on a draft of this report from the Department of Health and Human Services' Administration for Children and Families (ACF). These comments are reproduced in appendix III. ACF also provided technical clarifications, which we incorporated when appropriate. ACF generally agreed with the findings of our report, pointing out the difficulty in understanding ASFA's effect on child welfare outcomes, given that many states had implemented child welfare reforms prior to ASFA and that some states implemented ASFA more quickly than others. ACF also said that states continue to revise AFCARS data for early as well as for recent years, thereby improving the accuracy of the information. ACF concurred with our recommendation and reported that it has established a team to review AFCARS data issues. This team plans to evaluate the feasibility of including data on ASFA's fast track and 15 of 22 provisions in the AFCARS system. ACF also noted that states are required to report the number of terminations of parental rights and use of exceptions in the statewide assessment portion of their CFSR. However, the statewide assessment form states are required to complete prior to the CFSR does not request data on the number of TPRs filed and does not specifically request data on the state's use of the 15 of 22 provision. Instead, it asks states to discuss the extent to which the state complies with the 15 of 22 provision. Four of the states we visited had undergone a CFSR prior to our site visit, and we reviewed the statewide assessment forms they submitted to HHS. Two states provided some data on their use of the 15 of 22 provision in their statewide assessments, and two states did not. In addition, few states were able to provide these data in response to our survey, including states that have undergone a CFSR.

We also provided a copy of our draft to child welfare officials in the six states we visited (Illinois, Maryland, Massachusetts, North Carolina, Oregon, and Texas). Illinois, Maryland, and Texas generally agreed that the draft accurately portrayed the experiences of their states. Oregon and North Carolina provided a few technical comments to clarify information presented about their states, which we incorporated when appropriate. In addition, Oregon determined that it had submitted inaccurate data for a survey question that appeared in a table in the report. We revised the table based on its corrected data submission. Massachusetts did not provide any comments.

We are sending copies of this report to the Secretary of Health and Human Services, state child welfare directors, and other interested parties. We will make copies available to others on request.
If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-8403 or Diana Pietrowiak at (202) 512-6239. Key contributors to this report are listed in appendix IV.

To determine how the characteristics of children in foster care and their outcomes, such as adoption, have changed since ASFA was enacted, we reviewed national child welfare data sets and statistical reports. Specifically, we examined data from HHS's Adoption and Foster Care Analysis and Reporting System (AFCARS) for federal fiscal years 1998, 1999, and 2000. To understand these data in a historical context, we reviewed early child welfare data from the Voluntary Cooperative Information System (VCIS) administered by the American Public Human Services Association (formerly known as the American Public Welfare Association). In addition, we reviewed longitudinal analyses of child welfare trends from the Chapin Hall Center for Children at the University of Chicago.

To gauge how useful states have found ASFA's fast track and 15 of 22 provisions, as well as to explore foster care outcomes in greater detail, we surveyed all 50 states and the District of Columbia. We pretested the survey instrument in Delaware and Vermont and received input from HHS officials. In November 2001, we sent a copy of the survey to the child welfare director in each of the 50 states and the District of Columbia. We received responses from 45 state agencies and the District of Columbia. While we requested survey data for federal fiscal years 1999 and 2000, in some cases, states provided data for alternative timeframes. Twenty-four states reported data by federal fiscal year; 2 states reported data by calendar year; and 20 states used a combination of reporting periods, including federal fiscal year, state fiscal year, and calendar year. We did not independently verify the information obtained through the survey.

In addition, we visited six states to obtain more detailed and qualitative information regarding ASFA's effect on state child welfare agencies. We conducted site visits in Illinois, Maryland, Massachusetts, North Carolina, Oregon, and Texas. We selected these states to represent a range of geographic locations, performance under the adoption incentive program, and child welfare system innovations. During our site visits, we interviewed state and local child welfare staff, nonprofit service providers, and judges. We also collected and reviewed relevant documentation from these site visits.

To determine how states are spending new adoption-related funds provided by ASFA, we included questions on this issue in our national survey. We also reviewed descriptions of adoption incentive payment and PSSF adoption promotion and support services fund expenditures in excerpts of the Annual Progress and Services Reports states submitted to the Children's Bureau in June 2001. As a supplement to these reports, we gathered information on the use of these funds from regional ACF contacts and during our site visits. In addition, we reviewed related reports from the Cornerstone Consulting Group, Inc. and James Bell Associates.

To identify what states are doing to address barriers to achieving permanency, we interviewed HHS officials and child welfare experts and addressed this issue in our national survey and six site visits.
The child welfare experts we spoke with included individuals from the Child Welfare League of America, the National Adoption Center, the American Public Human Services Association, the Dave Thomas Foundation for Adoption, the Urban Institute, the Center for Law and Social Policy, and the Association of Administrators of the Interstate Compact on the Placement of Children. We also reviewed relevant child welfare reports, such as the National Governors' Association report on best practices and the Cornerstone Consulting Group, Inc.'s report on HHS's child welfare waivers. We conducted our work between June 2001 and April 2002 in accordance with generally accepted government auditing standards.

Eighteen states are currently using Title IV-E demonstration waivers to test child welfare innovations, such as providing extensive post adoption services to encourage adoptions and maintain their stability. However, most of the evaluation results from the current waivers are not yet available. The first waivers were approved in 1996, but the waiver projects last for 5 years and many of them were not implemented until 1999 or later. An HHS official also told us that some of the waivers encountered unexpected difficulties and took longer than anticipated to implement. As a result, about half of the waiver projects have not yet submitted interim evaluation results. In addition, some of the waiver projects have enrolled fewer participants than expected, which has delayed the availability of conclusive evaluation results. Final evaluation results for the first three waiver projects approved are expected to be published this year (see table 14 for a list of waiver projects and when their evaluations are expected). According to an HHS official, subsidized guardianship is the only waiver practice that has sufficient evidence thus far to warrant the consideration of policy changes to support the broader use of this practice. While some other waivers may have promising preliminary results, none are strong enough to warrant a change in policy at this time.

The waivers currently underway focus primarily on four practice areas: subsidized guardianship, managed care approaches, services for substance-abusing parents, and the flexible use of Title IV-E funds.

Seven states are using waivers to pay subsidies to relatives and foster parents who become legal guardians to foster children in their care. These states hope to reduce the number of children in long-term foster care by formalizing existing relationships in which relatives or foster parents are committed to caring for a child but adoption is not a viable option. This option is considered particularly useful for older children and children placed with relatives. Results from Illinois's waiver suggest that offering subsidized guardianship can increase the percentage of children placed in a permanent and safe home without reducing the number of children being adopted. Results from most of the other guardianship waiver projects are not available, either because the projects recently started or because too few children have participated thus far.

Five states are testing managed care approaches for financing child welfare services. Under these waivers, states and localities prospectively pay fixed amounts to providers to coordinate and meet all the service needs of referred children. For example, Connecticut is using a managed care approach for children between the ages of 7 and 15 with severe behavioral and mental health problems.
The state pays a fixed fee to agencies to provide and coordinate services for referred children with the goal of placing them in the least restrictive setting and reducing the time they spend in foster care. Preliminary findings from a 1-year period indicate that children in the waiver project were less likely to be placed in restrictive foster care settings and psychiatric hospitals, compared to similar children who were not in the waiver project. Results from the other managed care projects are not yet available, primarily because the projects were only recently implemented.

Four states developed waiver projects to address the needs of parents with substance abuse problems. Using these waivers, the states hope to increase the number of substance-abusing parents who engage in treatment, increase the percentage of children who reunify with parents who are recovering from a substance abuse problem, and reduce the time these children spend in foster care. For example, Delaware has used Title IV-E funds to pay for a substance abuse counselor to accompany social workers who investigate allegations of abuse or neglect. The substance abuse counselor assists in assessing potential substance abuse problems and engaging parents in treatment. Final evaluation results were published in March 2002 and concluded that the project successfully engaged parents in substance abuse treatment and resulted in foster care cost savings, although it did not achieve many of its intended outcomes. For example, children participating in the waiver project spent 14 percent less time in foster care than similar children who were not part of the waiver project, although the project's goal was a 50 percent reduction. Interim evaluation results for two of the other states are expected by the end of 2002. The fourth state will not have interim results until 2004.

Four states have designed waiver projects allowing counties or other local entities to use Title IV-E funds more flexibly for prevention and community-based services not traditionally reimbursed by Title IV-E, with the goal of preventing foster care placements and facilitating reunification. These waivers provide counties with a fixed Title IV-E budget and allow them to provide any services that will improve outcomes for children. For example, Indiana counties involved in the waiver provided a variety of services, including in-home family counseling, child care, mentoring, respite services, and financial assistance, such as paying for transportation or utilities. Preliminary results from Indiana indicate that children in waiver counties spent less time in foster care, were more likely to be reunified, and were less likely to re-enter care compared to similar children in non-waiver counties. In contrast, preliminary analyses from Oregon do not demonstrate any significant differences in reunification rates or the incidence of re-abuse after reunification between children who participated in the waiver program and similar children who did not. In North Carolina, a preliminary report indicates that waiver counties are experiencing a reduction in first-time entry into foster care compared to non-waiver counties; however, the report also points out that further analysis is necessary to demonstrate that this outcome is a result of the waiver activities. North Carolina reports that findings on the reduction in length of stay and re-entry into care are inconclusive at this time.

In addition to those named above, Melissa Emrey-Arras, Danielle Jones, Sara L.
Schibanoff, and Jennifer Torr-Smith made key contributions to this report. Joel Grossman and Corinna Nicolaou also provided key technical assistance.

Barth, Richard P., Deborah A. Gibbs, and Kristin Siebenaler. Assessing the Field of Post-Adoption Service: Family Needs, Program Models, and Evaluation Issues. A literature review prepared at the request of the Department of Health and Human Services. April 10, 2001.
Barth, Richard P., and others. "Contributors to Disruption and Dissolution of Older-Child Adoptions." Child Welfare, vol. LXV, no. 4 (1986): 359-371.
Congressional Research Service. Child Welfare: Implementation of the Adoption and Safe Families Act. Washington, D.C., 2001.
Cornerstone Consulting Group, Inc. A Carrot Among the Sticks: The Adoption Incentive Bonus. Houston, 2001.
Cornerstone Consulting Group, Inc. Child Welfare Waivers: Promising Directions, Missed Opportunities. Houston, 1999.
Elmore, Jane, and Diane DeLeonardo. Report on the Status of Foster and Adoptive Parent Recruitment in the Illinois Child Welfare System. N.p., 2002.
Festinger, Trudy. After Adoption: A Study of Placement Stability and Parents' Service Needs. New York: Shirley M. Ehrenkranz School of Social Work, New York University, 2001.
Goerge, Robert M., and others. Adoption, Disruption, and Displacement in the Child Welfare System, 1976-1995. Chicago: The Chapin Hall Center for Children at the University of Chicago, 1995.
Harden, Allen, Fred Wulczyn, and Robert Goerge. Adoption from Foster Care: The Dynamics of the ASFA Foster Care Population. Chicago: The Chapin Hall Center for Children at the University of Chicago, 1999.
James Bell Associates. Analysis of States' Annual Progress and Services Reports and Child and Family Services Plans (1999-2001). Arlington, Va., 2002.
Maza, Penelope L. "Recent Data on the Number of Adoptions of Foster Children." Adoption Quarterly, vol. 3 (1999): 71-81.
National Governors' Association Center for Best Practices. A Place to Call Home: State Efforts to Increase Adoptions and Improve Foster Care Placements. Washington, D.C., 2000.
Oppenheim, Elizabeth, Shari Gruber, and Doyle Evans. Report on Post-Adoption Services in the States. Washington, D.C.: The Association of Administrators of the Interstate Compact on Adoption and Medical Assistance, Inc., 2000.
U.S. Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children's Bureau. Child Welfare Outcomes 1999: Annual Report. Washington, D.C., n.d.
Wulczyn, Fred H., and Kristin Brunner Hislop. Foster Care Dynamics in Urban and Non-Urban Counties. An issue paper from the Chapin Hall Center for Children at the University of Chicago at the request of the U.S. Department of Health and Human Services. February 2002.
Wulczyn, Fred H., and Kristin Brunner Hislop. Growth in the Adoption Population. An issue paper from the Chapin Hall Center for Children at the University of Chicago at the request of the U.S. Department of Health and Human Services. March 2002.
Wulczyn, Fred H., Kristin Brunner Hislop, and Robert M. Goerge. An Update from the Multistate Foster Care Data Archive: Foster Care Dynamics 1983-1998. Chicago: Chapin Hall Center for Children at the University of Chicago, 2000.

Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000.
Foster Care: HHS Should Ensure That Juvenile Justice Placements Are Reviewed. GAO/HEHS-00-42. Washington, D.C.: June 9, 2000.
Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999. Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999. Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 5, 1999. Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999. Foster Care: Increases in Adoption Rates. GAO/HEHS-99-114R. Washington, D.C.: April 20, 1999. Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 2000. Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998. Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998. Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise. GAO/HEHS-97-73. Washington, D.C.: May 7, 1997. Child Welfare: States’ Progress in Implementing Family Preservation and Support Activities, GAO/HEHS-97-34. Washington, D.C.: February 18, 1997. Permanency Hearings for Foster Children. GAO/HEHS-97-55R. Washington, D.C.: January 30, 1997. Child Welfare: Complex Needs Strain Capacity to Provide Services. GAO/HEHS-95-208. Washington, D.C.: September 26, 1995. Child Welfare: HHS Begins to Assume Leadership to Implement National and State Systems. GAO/AIMD-94-37. Washington, D.C.: June 8, 1994.
In response to concerns about the length of time children were spending in foster care, Congress enacted the Adoption and Safe Families Act of 1997 (ASFA). The act contained two key provisions intended to help states more quickly move the more than 800,000 children estimated to be in foster care each year to safe and permanent homes. One of these provisions, referred to as "fast track," allows states to bypass efforts to reunify families in certain egregious situations. The other provision, informally called "15 of 22," requires states to file a petition to terminate parental rights when a child has been in foster care for 15 of the most recent 22 months.

Although the number of adoptions has increased by 57 percent since the act was enacted, changes in other foster care outcomes and the characteristics of children in foster care cannot be identified due to the lack of comparable pre- and post-ASFA data. Although data on states' use of the act's two key provisions are limited, some states described circumstances that hinder their use. Survey data suggest that a few states used the fast track provision infrequently. In general, states are most frequently using the new adoption-related funds provided by the act to recruit adoptive parents and provide post-adoption services.

The states involved in the survey are addressing long-standing barriers to achieving permanency for foster children, such as court delays and insufficient court resources, difficulties in recruiting adoptive families for children with special needs, obstacles and delays in placing children in permanent homes in other jurisdictions, and poor access to some services families need to reunify with their children. States are testing different approaches, but the data are limited on the effectiveness of these practices.
The Army is DOD’s single manager for the military services’ conventional ammunition and is responsible for ensuring that an adequate industrial base is maintained to meet the services’ ammunition requirements. The conventional ammunition requirements include about 250 end items and 500 components that are grouped into 14 different families. These requirements are derived by adding the projected training, testing, and pipeline requirements to the war reserve requirement that is needed for combat. Since Operation Desert Storm, ammunition requirements have decreased substantially, and the reduced threat and changing conflict scenarios caused war reserve requirements to decline by more than 70 percent between 1992 and 1994. In the past 20 years, DOD’s ammunition planning strategy has changed dramatically. Before July 1976, the services stocked enough items to support combat consumption from the day military operations begin to when the production rate for an item equals combat consumption. Beginning in July 1976, the services were to stock enough items to meet the first 6 months of combat consumption and the industrial base was assumed to be able to take over supply at that time. If industry could respond before the sixth month, then reserve item requirements were to be reduced accordingly. However, if industry could not respond by the sixth month, industrial preparedness actions necessary to make such a response possible were to be identified for funding. The 1978 Program Objective Memorandum (POM) guidance allowed sizing of the industrial base to meet total mobilization requirements. The 1979 POM guidance reduced the allowable size of new facilities to essentially that required to support an 180-day requirement. The 1980 POM guidance further reduced allowable sizing to a 90-day requirement. This guidance was interpreted to limit sizing of new facilities in support of new munitions to that which would support production for the Five-Year Defense Plan. This guidance began the movement away from surge planning. After the collapse of the former Soviet Union and the end of the Cold War, requirements dropped again. As the prospects for a long drawn out global war declined, DOD continued to reduce its ammunition requirements. Surge involved emphasis on expediting the completion of items already in process rather than sustaining production because its only purpose was to preclude serious depletion of war reserve stocks in a short, intense war. The emphasis had shifted away from huge stockpiles and an industrial base with a large surge capacity to a “come as you are” philosophy. Stockpile requirements declined as DOD planned primarily for major regional conflicts rather than a global war. Surge capacity lost its importance because the conflicts were assumed to be so short in duration that a surging base would not be able to make a significant difference. The key measurement of the health of the industrial base became the length of time required to replenish the stockpile after two major regional conflicts. DOD’s war reserve requirements are now based on the need to fight two nearly simultaneous major regional conflicts. 
Key assumptions in this new plan are (1) each conflict will be intense and short in duration (60 to 120 days); (2) the military will rely on existing stocks for the entire duration of the conflicts; (3) there will not be a significant surge in ammunition production during the conflicts; and (4) following the conflicts, ammunition items will be replenished to a designated level within a specified time frame, to prepare for the next conflict. Using the two-conflict scenario, the military services compute war reserve requirements based on target kill data from computer simulation models and from logistics distribution figures.

After the Cold War, the Army Materiel Command studied the services' ammunition industrial base needs in light of the diminished threat that had led to force reductions and reduced ammunition requirements. In April 1991, the study results were published, and the Command concluded that the base needed to be consolidated and reduced in size. The Army used this study to develop its ammunition facility strategy for the 21st century (AMMO-FAST-21), a strategy that supports reduced peacetime ammunition requirements while maintaining the highest level of readiness possible for future contingency operations. In August 1993, an independent study team from the American Defense Preparedness Association—two retired military officers and four corporate managers with more than 30 years' experience dealing with ammunition—endorsed the Army's AMMO-FAST-21 strategy.

The strategy prioritizes ammunition item families and identifies the facilities that provide the most production flexibility. It attempts to minimize expenditures by reshaping the industrial base to its minimum essential size. Redundancy within the base is limited, and excess government facilities are disposed of or leased to commercial firms. AMMO-FAST-21 also attempts to preserve the balance between government and commercial facilities and to maintain the critical equipment, processes, and skilled personnel at both types of facilities. The strategy is being implemented through government-owned, group technology centers and specified mission facilities and through commercial facilities. AMMO-FAST-21 established a restricted specified base of privately owned facilities that DOD can contract with directly for critical items and components.

The ammunition industrial base has experienced a dramatic drop in its production capacities. The relative percentages of ammunition procurement dollars going to government and commercial producers, however, have remained relatively constant since 1987. In addition, recent closures of production facilities have closely reflected those projected by the Army when it submitted its 1991 Production Base Planning Study and 1993 update to Congress. DOD's primary means of maintaining the industrial base is through the direct procurement of hardware—ammunition end items and components—but it also procures services for the layaway of production facilities, the maintenance of inactive facilities, and the demilitarization of ammunition. This report uses the term procurement funding to refer to the procurement of ammunition end items and components only.

The ammunition industrial base has experienced dramatic changes over the last 17 years. Less than 50 percent of the production facilities that existed in 1978 still exist today, and production capacity is declining for all 14 families of ammunition.
However, the mix of procurement funding between government-owned and contractor-owned production facilities has remained relatively stable since 1987, with contractor-owned facilities receiving about 65 percent of the funding. Decreased funding has led to reductions and consolidations in both the government and private sectors of the industrial base. As shown in table 1, the numbers of government-owned, government-operated (GOGO); government-owned, contractor-operated (GOCO); and contractor-owned, contractor-operated (COCO) ammunition plants have all declined significantly since 1978. There also has been a corresponding decline in the commercial subcontractors that supply parts to the ammunition industry. As table 1 shows, commercially operated production facilities have experienced more closures than government-operated production facilities. However, the closures closely reflect those projected by the Army when it submitted its 1991 Production Base Planning Study and 1993 update to Congress.

Since the end of Operation Desert Storm, ammunition production capacity in the United States has steadily declined. According to both military and industry projections, this trend will continue for several more years before capacity stabilizes within a smaller industrial base. In fiscal year 1990, the Army did production planning for 329 end items that were not commercially available. By fiscal year 1995, the number had dropped to 163.

Indirect fire munitions are used to suppress enemy fire in addition to killing targets and have historically constituted a larger portion of the war reserve inventory than direct fire munitions. Indirect fire munitions continue to make up the largest portion of the war reserve inventory, but as the war reserve requirements have decreased (from 2,500,000 short tons in 1992 to 650,000 short tons in 1994), the percentage of direct fire ammunition has increased. The indirect fire portion of the ammunition stockpile is likely to continue its decline. Table 2 shows that production capacity for indirect fire systems, such as artillery, is declining much faster than production capacity for direct fire systems, such as tanks.

The ammunition industrial base has downsized considerably since 1987 as a result of significant reductions in ammunition procurement funding (from about $4 billion in fiscal year 1986 to about $1.2 billion in fiscal year 1996). However, the funding split between government-owned and contractor-owned facilities has remained fairly steady over these years. In fiscal year 1987, government-owned facilities received 35 percent of the procurement funding and contractor-owned facilities received the remaining 65 percent. In fiscal year 1994, the numbers were 32 percent and 68 percent, respectively (see table 3). DOD considers these percentages "very reasonable" and expects them to remain steady in the future. Likewise, in its May 1994 Conventional Munitions Assessment Report, the Munitions Industrial Base Task Force stated that "the public/private mix of production work is approximately correct." In commenting on this report, DOD noted that the distinction between GOCO and COCO facilities is blurring as the government leases inactive facilities to commercial contractors.

The key role of the ammunition industrial base is to replenish the ammunition stockpile. In peacetime, the industrial base replenishes ammunition that is used for military training and testing. It also makes up shortages of war reserve items and supplies new types of ammunition to the stockpile.
Since the major regional conflicts envisioned in the Defense Planning Guidance are short in duration, the ammunition industrial base is not required to surge during the conflicts. However, according to the Defense Planning Guidance, the key measure of the health of the base is its ability to replenish the stockpile following two major regional conflicts.

While the services have shortages of many ammunition items, very few of these shortages appear to be due to inadequate production capacity. We discussed a random sample of 152 of the 752 items that had shortages with service officials to determine whether these shortages were attributable to industrial base problems. In addition, we asked them if they knew of any additional items that had shortages due to industrial base problems. None of the 152 items had shortages that service officials considered attributable to industrial base problems. However, Army officials identified three other items as having shortages attributable to industrial base issues, and Marine Corps officials identified four items. Most of these shortages appear to be minor and can be quickly corrected in an emergency by using substitute munitions or increasing production rates. Most ammunition production lines currently operate for one or two 8-hour shifts per day, 5 days per week. These production lines could run three shifts per day, 5 days per week, but worker fatigue and required maintenance of the equipment would prevent long-term continuous operation of the production lines.

The first item with an industrial base-related shortage is the 155-mm Copperhead projectile. According to DOD, the supplier base and the technical ability to manufacture Copperhead parts have disappeared. Several years ago, military industrial base planners decided not to maintain a production capacity for the Copperhead because the round is expensive, requirements are low, the cost of maintaining a production line in layaway status would be prohibitive, and there are substitute items being developed. One substitute is the 155-mm Sense and Destroy Armor projectile, currently in low-rate initial production.

The second and third items are the M58A3 and M59 mine-clearing charges. These shortages result from an inadequate supply of the C-4 explosive that is used in the charges. Because C-4 is used in four other types of ammunition that require about 1 pound of C-4 for each round and the mine-clearing charges require about 500 pounds of C-4 for each charge, the Army has allocated the available C-4 to the four other types of ammunition. The Army has no plans to increase C-4 production capacity because of cost. However, if an emergency arises, substitute explosives can be produced, and the Army can increase its production of C-4 by adding shifts to its current production line or it can use the C-4 from the other ammunition items.

The fourth item is the 120-mm M830A1 high explosive antitank round, which is used by both the Army and the Marine Corps. The Army is planning one more procurement for this round and will lay away the production line after that procurement because it will have an adequate supply of the ammunition. However, the Marine Corps currently has a shortage of M830A1 rounds and is not scheduled to procure any more of them due to funding priorities. According to the Army, the production line for this round will be inactive after its final procurement, but the Army will still be able to produce this ammunition on short notice for the next 2 years.
A quick production response is possible because the 120-mm tank training rounds and the M829A2 kinetic energy round will remain in active production through fiscal year 1998. In commenting on this report, DOD said that without future buys, the entire tank ammunition base would be jeopardized, not just the M830A1 rounds.

The fifth item is the 81-mm infrared illumination round. The manufacturer that developed this item declined further orders after supplying the Army with a quantity sufficient for a year. The Army is working toward establishing a production capability for this item at Crane Army Ammunition Activity, and it plans to load, assemble, and pack the round at Pine Bluff Arsenal.

The last two items are the M821 and M889 81-mm high explosive mortar rounds. At the time of our review, the production line for these two rounds was shut down while engineers corrected a problem with the propellant charge. In addition, an engineering change proposal was pending that could delay production. However, according to Army officials, a fully automated production line that is presently in layaway status could be restarted if necessary.

When we discussed the ammunition shortages caused by industrial base problems with service officials and reviewed DOD's industrial base studies, we did not identify any industrial base problems that would keep the military from fighting two major regional conflicts, as required by the current Defense Planning Guidance, or from replenishing the stockpile. However, ammunition shortages that result from funding problems will not be filled by a surging industrial base because the current guidance does not require the base to have a surge capability, as in the past. DOD officials stated that shortages of preferred munitions will be likely if two major regional conflicts arise and that shortages will be met with substitute munitions. This substitution is in accordance with the current Defense Planning Guidance. Army officials stated that although the industrial base is able to meet the replenishment requirements following a major regional conflict, replenishment is likely to be costly. Because production facilities for new items are being built for efficient production at peacetime requirement levels, funds will be required to expand some of these facilities to meet replenishment requirements.

DOD's assessment of the adequacy of the industrial base is based on the results of several studies, the annual functional area analysis, and ongoing production planning efforts, including the single manager's June 1995 Production Base Plan. Two of the key studies were DOD's 1994 and 1995 studies that attempted to evaluate the financial viability of all the firms comprising the industrial base. Although DOD did not receive responses from all the firms in the base, between the two studies it captured adequate financial data for the firms holding most of the base's production capacity. From the data, DOD concluded that the industrial base was adequate to meet the services' ammunition requirements.

In 1994, DOD attempted to evaluate the financial status of 102 key commercial producers and assess their projected financial viability during the 1995 through 1997 time frame based on the firms' profitability in 1992 and DOD's planned future ammunition spending. DOD obtained some financial data for about 80 firms but received enough financial data to perform break-even analyses for only 57 companies.
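The report does not describe the mechanics of DOD's break-even analyses. The sketch below shows a textbook break-even computation for illustration only; the function name and all figures are hypothetical, not drawn from DOD's studies.

```python
# A minimal sketch of a standard break-even computation, for illustration
# only -- the report does not specify the model DOD applied. All figures
# and names here are hypothetical.
def break_even_revenue(fixed_costs: float, contribution_margin: float) -> float:
    """Revenue at which a firm exactly covers its costs.

    contribution_margin is the share of each revenue dollar left after
    variable costs, i.e., (price - variable cost) / price.
    """
    return fixed_costs / contribution_margin

# A firm with $6 million in fixed costs that keeps $0.30 of each revenue
# dollar after variable costs must book $20 million to break even; a
# projected order book below that level would flag the firm for review.
print(f"${break_even_revenue(6_000_000, 0.30):,.0f}")  # $20,000,000
```

Comparing such a threshold with a firm's projected ammunition workload would indicate whether planned spending covers the firm's costs.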
The 57 firms that were fully evaluated held about 75 percent of the production capacity in the ammunition industrial base, according to Army officials. DOD assumed that the remaining 45 firms were financially viable, even though it did not have enough financial data to perform break-even analyses. While the validity of this assumption is open to question, it is important to note that DOD could not compel the firms to provide the requested information and none of the 45 firms were single or sole source producers. DOD’s break-even analyses revealed that 16 of the 57 firms needed more detailed evaluations, based on their projected financial viability for 1995 through 1997. After further evaluation, DOD found that the production capabilities of most of the 16 firms could be absorbed by the remaining producers within the ammunition sector. However, three of the firms were single source producers. DOD concluded that if these three firms went out of business, their production capabilities could not be absorbed by the remaining producers within the ammunition sector. Therefore, DOD is continuing to monitor these firms to ensure it retains its necessary production capacity. In 1995, at the urging of the Munitions Industrial Base Task Force, DOD conducted another financial viability study of the ammunition industrial base. This study was broader in scope than the 1994 study, covering 154 firms that the task force had identified as part of the industrial base. DOD sent out surveys requesting financial data to all 154 firms, but only 29 firms responded in a timely manner. DOD officials attributed this low response rate to two reasons. First, DOD did not pay the contractors for this information. Second, many of the contractors had provided the same information the year before, for DOD’s 1994 study. Once again, DOD assumed firms that did not submit timely responses were financially viable. The 29 firms with timely responses comprised only about 35 percent of the industrial base production capacity. Of the 29 respondents, 19 were identified to be at financial risk. Secondary screenings that were done on these firms from an industrial base perspective disclosed that none were essential to the industrial base. Therefore, no detailed on-site reviews were conducted. During its two surveys of ammunition producers, DOD assumed that nonresponding firms were financially viable. DOD said this was a reasonable assumption because the purpose of the survey was to identify firms that would exit the business without special DOD action. DOD stated that firms facing financial difficulties would be inclined to complete the financial viability surveys. Most of the firms that did not complete the survey were the smaller firms in the industry. If the key assumptions in the Defense Planning Guidance and DOD’s industrial base studies are correct, the industrial base will be capable of simultaneously supplying peacetime ammunition needs and replenishing the ammunition stockpile as required, following one or two major regional conflicts. However, the ability of the industrial base to adequately respond to the military’s replenishment requirements depends heavily on both the amount of ammunition that must be replenished and the time period over which the replenishment is to occur. Thus, if the response period is shortened, or if the required replenishment level is raised from that stated in current guidance, the industrial base may not be able to adequately respond to replenishment requirements. 
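A simple calculation shows this sensitivity. The sketch below is hypothetical: it uses the 650,000-short-ton war reserve figure cited earlier and an assumed monthly production capacity to show how shortening the replenishment window can turn an apparently adequate base into an inadequate one.

```python
# A hypothetical illustration of replenishment sensitivity. The war
# reserve quantity is the 1994 figure cited in this report; the monthly
# capacity is assumed for illustration only.
REPLENISH_TONS = 650_000            # 1994 war reserve requirement (short tons)
CAPACITY_TONS_PER_MONTH = 12_000    # assumed post-drawdown capacity

for months in (60, 36, 24):        # progressively shorter windows
    needed = REPLENISH_TONS / months
    verdict = "adequate" if CAPACITY_TONS_PER_MONTH >= needed else "inadequate"
    print(f"{months}-month window: {needed:,.0f} tons/month required; "
          f"assumed capacity is {verdict}")
```

With these assumed numbers, the same base that meets a 60-month replenishment requirement (about 10,800 tons per month) falls well short of a 24-month requirement (about 27,100 tons per month).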
The Army’s annual functional area analyses help to illustrate the role replenishment levels and time frames play in assessments of the industrial base. The 1994 analysis painted a bleak picture of the industrial base’s replenishment capability. However, in the 1995 analysis, the base’s replenishment capability improved dramatically. While part of the improvement was due to increased funding, much of the improvement was caused by changes in the replenishment levels and time frames. Army officials acknowledged that future changes in readiness requirements could affect their assessment of the industrial base’s viability. In addition, they pointed out that once the existing industrial base is disposed of, there is a long time and a high cost involved in reestablishing it. In addition to the DOD industrial base studies, several private organizations have studied the industrial base. However, most of the private studies have concluded that the industrial base is inadequate to meet the services’ ammunition requirements. One such study was completed in June 1994 by the Committee for the Common Defense, the national security arm of the Alexis de Tocqueville Institution. The study concluded that the nation’s ammunition industrial base was “rapidly-deteriorating.” The report based this conclusion primarily on the Korean War experience, but it also pointed out that the 323,000 tons of preferred munitions in the current U.S. stockpile represented less than the amount of ammunition sent to the Persian Gulf region in 1990 and 1991 for Operation Desert Storm. A private study conducted for the Munitions Industrial Base Task Force also found that the ammunition industrial base could not repeat the performance of Operations Desert Shield and Desert Storm. It stated that the industrial base could not support the demands of one major regional conflict, much less two simultaneously. However, the task force study assumed that the major regional conflicts would last 180 days, much longer than DOD’s projected 60-120 days. The private studies’ conclusions about the industrial base differed from DOD’s conclusions largely because of differences in the studies’ methodologies and underlying assumptions. For example, the Munitions Industrial Base Task Force study used three scenarios to compute ammunition requirements: a global war, two major regional conflicts, and operations other than war. In contrast, DOD’s ammunition requirements were established based on two major regional conflicts. Also, the private studies used information for 2 years, the budget year and the out-year, while DOD’s studies took into account planned expenditures over its entire 5-year POM. DOD reviewed a draft of this report and provided written comments that concurred with the report. Some minor technical comments were received earlier and incorporated into the final report. DOD’s comments are reprinted in appendix I. To determine the current status of the ammunition industrial base, we examined statistics the Army, as the single manager, had gathered and met with Army industrial readiness officials. Specifically, we reviewed industrial base trend data concerning the number of production facilities, the public/private mix of facilities, and the capacity of the production facilities. To determine the industrial base’s ability to meet current peacetime ammunition requirements, we met first with military officials to determine how requirements are established. 
Next, we obtained requirements data and stockpile levels and determined which items had shortages and which items had overages. (We relied on the data supplied by the services and did not physically verify the ammunition stockpile levels or trace requirements data back to the systems that generated the requirements.) Then, we randomly selected 152 ammunition items that had shortages and discussed these items with ammunition officials from the services. We also asked them to identify any additional items that had shortages due to industrial base problems. Finally, we investigated the causes of the industrial base shortages and the Army's plans to address these shortages, as the single manager for conventional ammunition.

To determine whether the industrial base could respond as required, after one or more major regional conflicts, we reviewed (1) the current Defense Planning Guidance, (2) the Army's 1992 strategy to maintain adequate ammunition facilities into the 21st century and an independent assessment of that strategy, (3) DOD's 1994 and 1995 financial viability assessments, and (4) reports from industry officials and other non-DOD sources that addressed the industrial base's ability to provide adequate ammunition during a national emergency. We identified the differences in underlying assumptions that caused wide differences in the reports' conclusions. DOD's Defense Planning Guidance contains several assumptions that are open to question. However, since that guidance establishes the framework for all military actions, not just ammunition procurements, we used those assumptions in forming our conclusions. We conducted our review from July 1995 to March 1996 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretaries of Defense and each of the military services; the Commanding General, Army Materiel Command; the Commanding General, Army Industrial Operations Command; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-5140 if you or your staffs have any questions concerning this report. Major contributors to this report are listed in appendix II.

Antanas Sabaliauskas, Evaluator-in-Charge
David A. Bothe, Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) ability to meet peacetime ammunition requirements, and to replenish the ammunition stockpile following two major regional conflicts. GAO found that: (1) according to DOD, the ammunition stockpile has no major shortages due to the industrial base; (2) there is no longer a requirement to surge the industrial base during conflicts; (3) the most lethal, preferred munitions will be at a premium, and some requisitions will be filled with older, substitute munitions, but DOD considers these items adequate to defeat the expected threat; (4) DOD is confident in the results of its financial viability studies of firms comprising the ammunition industrial base, even though it did not receive sufficient data to evaluate the financial condition of all firms in the industrial base; (5) changes to DOD assumptions could cause the DOD industrial base assessment to change even if production capacity remains stable; and (6) private studies that have concluded that the industrial base is inadequate to meet replenishment requirements during and following a national emergency are based on underlying assumptions that differ considerably from DOD assumptions.
SB/SE is one of IRS’s four business operating divisions. SB/SE is responsible for enforcement, taxpayer education, and account services for about 45 million taxpayers, including 33 million self-employed taxpayers and 7 million small businesses with assets of less than $10 million. SB/SE also performs some collection functions for other IRS operating divisions. SB/SE managers told us that the reorganization of IRS in 2000—including the creation of SB/SE—presented an opportunity for them to examine enforcement-related processes from a new perspective. Prior to this, the agency was organized around functional and geographic lines, with separate groups responsible for activities such as processing returns, audits, and collection in particular areas. The reorganization eliminated or substantially modified this national, regional, and district structure and established organizational units serving particular groups of taxpayers with similar needs. Officials told us that with the reorganization, they were now responsible for functions that they had not controlled so directly before. They said that there was general agreement among the managers of the newly created division that there were opportunities to make processes more efficient and effective, and that this led them to start several enforcement process improvement projects. They also distinguished between enforcement process improvement projects, which are generally incremental in their approach, and more far-reaching efforts to modernize IRS and transform processes through business systems modernization and other significant changes. We noted in our recent Performance and Accountability Series that IRS has made important progress in these larger efforts but its transformation continues to be a work in progress. Though many of the SB/SE projects include the word “reengineering” in their titles, SB/SE managers agreed that process improvement projects was a better description, given the scope of the changes these projects were making. As described in GAO’s Business Process Reengineering Assessment Guide, reengineering entails fundamentally rethinking how an organization’s work should be done while process improvement efforts focus on functional or incremental improvements. SB/SE managers explained that they purposefully avoided technology-driven changes of the sort under development in the IRS-wide business systems modernization effort. They said that their goal was to make shorter term, more SB/SE- focused changes in the meantime, while the more sweeping changes, and their longer planning and implementation horizons, were still years away from completion. In this report, we refer to the 15 SB/SE efforts under way as of November 2003 as “process improvement projects.” We have reported on declining enforcement trends, finding in 2002 that there were large and pervasive declines in six of eight major compliance and collection programs we reviewed, with the only exceptions in returns processing and in the automated underreporter program. In addition to these declines, we reported on the large and growing gap between collection workload and collection work completed and the resultant increase in the number of cases where IRS has had to defer collection action on delinquent accounts. In 2003, we reported on the declining percentage of individual income tax returns that IRS was able to examine each year, with this rate falling from .92 percent to .57 percent between 1993 and 2002. 
We also reported on enforcement productivity measured by cases closed per full-time equivalent employee, finding that IRS's telephone and field collection productivity declined by about 25 percent from 1996 to 2001 and productivity in IRS's three audit programs—individual, corporate, and other audit—declined by 31 to 48 percent. Improving productivity by changing processes is a strategy SB/SE is using to address these declining trends.

As of November 2003, SB/SE had 15 process improvement projects under way, most of them in three broad enforcement areas—audit, collection, and compliance support. Audit projects entail changes to field and office examination processes. Collection projects include changes to automated collection programs, field collections, and other programs. Compliance support is the term SB/SE uses to describe processing functions related to audit and collection, such as updating IRS information systems for the results of enforcement work and preparing examination closing letters and liens on taxpayer property. Compliance support projects include changes to technical services and case processing.

We selected four SB/SE process improvement projects to review in detail for this report. Field Examination Reengineering includes changes to preaudit processes to better identify specific issues on tax returns for auditors to focus on, among other changes intended to improve examination efficiency and reduce taxpayer burden. The Compliance Support Case Processing Redesign project seeks to centralize data entry into online information systems that monitor the status of active audit and collection cases and their results from many different locations with widely variable workload to just a few with more consistent, predictable workload. The Collection Taxpayer Delinquent Account Support project involves the development of two computer models to improve setting priorities for collections cases to assign to collections staff. The Collection Field Function Consultative Initiative seeks to improve timeliness on collections cases through regular managerial involvement as cases are being worked. Brief descriptions of all of SB/SE's projects can be found in appendix IV.

SB/SE process improvement project teams completed most of the steps we identified as key to SB/SE's process improvement project planning, but none of the projects we reviewed completed all of the key steps. Guidance on project planning steps, such as our 20-step framework, could help ensure that key steps are followed more consistently. Also, SB/SE enforcement productivity data presented problems in that the data available to SB/SE managers to assess the productivity of their enforcement activities, identify processes that need improvement, and assess the success of their process improvement efforts are only partially adjusted for complexity and quality of cases worked.

The planning for each of the four projects we reviewed included most of the key steps in our process improvement framework, but none of the projects included all of the steps. Figure 2 presents our findings, organized by project stages, for each of the four projects we studied. A full circle means a step was fully completed in project planning and a partial circle means that only part of a step was completed. Our basis for each "no" or "partial" finding is explained in appendix III. Following figure 2, we discuss our findings in more detail with selected examples from the four projects we reviewed.
The four SB/SE projects we reviewed largely included the productivity baseline definition and process mapping steps under the “Decision to Change” stage, where SB/SE had to determine whether any of its processes should be improved. The Field Examination Reengineering project team and both collection project teams had baseline data showing that the time needed to complete casework was rising and all four project teams had extensive flowcharts mapping the details of current processes. By helping managers understand the strengths and weaknesses of current processes, such information contributes to more informed decisions about which processes to change. However, SB/SE did not as consistently include the complexity and quality of work being done in productivity baselines, compare productivity data to external benchmarks, identify root causes of productivity declines, or measure the gap between current and desired productivity. Weaknesses in these steps leave SB/SE managers without information that could be useful when making decisions about which processes to change. For example, on three of the four projects, productivity data were not adjusted for case complexity and only partially adjusted for quality. This could cause productivity trends to be misinterpreted, leaving SB/SE at risk of trying to redesign processes that are already working well or missing opportunities to fix processes with potential for improvement. Because GAO’s Business Process Reengineering Assessment Guide and our roundtable participants stressed the importance of complete productivity data and because this was a recurring issue we identified in our assessment of the four SB/SE projects, we discuss the importance of adjusting for case complexity and quality when measuring productivity in more detail in the next section of this report. Another example of not consistently following our key steps in the “Decision to Change” stage is found in the Field Examination Reengineering project. The project team sought the advice of many large, noted organizations to benchmark its productivity. However, the work did not lead to measuring how SB/SE’s productivity compared to others’ because the team did not believe that operations in other organizations were comparable. Without this benchmarking, the team did not know whether and by how much it could improve productivity by updating operations based on the experiences of other large organizations. Both GAO’s Business Process Reengineering Assessment Guide and our roundtable participants stressed that although processes may seem unique to government, they likely have counterparts at a process level in the private sector. Moreover, GAO’s Guide says that looking at dissimilar organizations can actually lead to the most fruitful improvements because it stimulates thinking about new approaches. During the “Target Process Development” stage, the projects we reviewed consistently included the steps that prepare for implementation. Planning on all four of the projects we studied included obtaining executive support, assessing barriers to implementing changed processes, and assessing resource needs and availability. The Compliance Support Case Processing Redesign team, for example, originally identified the need for a computer programming change to implement part of their process redesign. When the programming change could not be made immediately, they continued with a manual process in order to keep the project moving forward. 
However, SB/SE less consistently included key steps in this stage related to designing the new process. For example, in the Collection Taxpayer Delinquent Account Support project, SB/SE did not consider alternative approaches to achieving the project's goal of identifying the best cases to assign to collections staff. Because options were not considered, the team ran the risk of missing a more effective approach than the one it took. Another team did not design the new process based on analysis of a gap between current and desired productivity. It is important at this stage for projects to include fact-based performance analysis to assess how to change processes that are in greatest need of improvement in terms of cost, quality, and timeliness. By analyzing the gap between an existing process's performance and where that performance should be, projects can target those processes that are most in need of improvement, analyze alternatives, and develop and justify implementation plans. Using these steps can increase the likelihood of determining the best new process.

During the "Implementation" stage, three of the four projects we reviewed had completed implementation plans, and all three included key implementation steps. These steps focus on the challenge of turning project concepts into a workable program. For example, in the Collection Taxpayer Delinquent Account Support project, the team clearly defined who was responsible for updating the existing computer programs to select cases for priority collection action and who was responsible for evaluating the implemented process. We also found that three of the four teams conducted pilot tests and used their results to modify the new processes prior to implementation—steps important for ensuring that process problems are worked out prior to project implementation.

SB/SE was less consistent, however, in establishing employee performance expectations for the new processes. In the Field Examination Reengineering project, SB/SE plans to implement changes to audit planning steps in order to streamline audits and reduce demands on taxpayers for additional information. SB/SE's plan includes monitoring the deployment of the new process using measures such as the percent of personnel trained. However, SB/SE's plan does not specify performance expectations for employees or how it will measure whether its auditors are using the new techniques properly.

Two projects had completed plans for outcome assessments at the time of our review. One of these, the Collection Taxpayer Delinquent Account Support project, included an evaluation plan using available data to develop measures of how accurately the new models were working. The other two projects were in the process of developing evaluation plans—an important step to ensure that the correct data are available and collected once the change is implemented.

Three of the four projects incorporated change management principles throughout. In the fourth, we agreed with SB/SE managers that the key change management steps were not a factor because the changes to the method of prioritizing collection cases did not affect collections staff. These are key steps because successful process improvement depends on overcoming a natural resistance to change and giving staff the training to implement the changes. The three project teams where change management was a factor consistently completed all of the key steps in the "Change Management" stage.
In the course of our discussions with SB/SE managers about the steps that their projects did and did not include, we learned that SB/SE does not have its own guidance or framework that describes the steps to be followed in planning process improvement projects. SB/SE managers said that projects had been planned and carried out without such a framework. Contractors provided substantial assistance in designing SB/SE’s process improvement projects, and managers told us that they relied in large part on the contractor staffs’ expertise and experience in planning the projects. A framework laying out the steps to be followed is an important internal control for projects such as these because it provides top managers assurance that the steps that the organization has determined to be important are either taken on each project or that project managers have explained why they should be omitted. GAO’s Business Process Reengineering Assessment Guide notes that an established framework is important for projects in that it defines in detail the activities the project team needs to complete and alerts the team to key issues that it must address. Without a process improvement framework and a consistent set of steps to follow, IRS runs the risk of future projects also missing key steps. This, in turn, exacerbates the risk of projects not addressing appropriate process problems, developing a less than optimal target process, ineffectively implementing the project, inaccurately assessing project outcomes, or mismanaging the change to the new process. A framework such as the one we developed for this report is an important internal control tool for SB/SE managers to guard against these risks. The internal control is needed whether process improvement is planned by SB/SE staff or contractors. Such a framework may also prove useful in other IRS units besides SB/SE. As with the 20-step framework we used to assess SB/SE’s approach, however, any such guidelines should allow for appropriate managerial discretion in cases where certain steps are not relevant. The data available to SB/SE managers to assess the productivity of their enforcement activities, identify processes that need improvement, and assess the success of their process improvement efforts are only partially adjusted for complexity and quality of cases worked. Productivity measures the efficiency with which resources are used to produce outputs. Specific productivity measures take the form of ratios of outputs to inputs such as cases closed or dollars collected per staff year. The accurate measurement of enforcement productivity requires data about the quantity of outputs produced and inputs used that are accurate and consistent over time and that link the outputs directly to the inputs used to produce them. The accurate measurement of productivity also requires good data on the relative complexity or difficulty of the cases and the quality of the work done by IRS staff. Case complexity can vary with the type of tax (employment vs. income), the type of taxpayer (individual vs. business) and the type and sources of income and expenses. A measure of productivity like cases closed per staff year that shows an increase may not indicate a real gain in efficiency if the mix of cases worked has shifted to less difficult cases or the quality of the work has declined. 
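A small numerical sketch, using invented figures rather than IRS data, makes the point: when each complex case is weighted by an assumed relative resource cost, an apparent gain in unadjusted cases per FTE can disappear.

```python
# Hypothetical figures (not IRS data): the same staff close more cases in
# year 2, but only because the case mix shifted toward simpler work.
STAFF_YEARS = 50
COMPLEX_WEIGHT = 3  # assume a complex case costs three times a simple one

case_mix = {1: (60, 40), 2: (90, 20)}  # year -> (simple, complex) cases closed
for year, (simple, cmplx) in case_mix.items():
    raw = (simple + cmplx) / STAFF_YEARS
    weighted = (simple + COMPLEX_WEIGHT * cmplx) / STAFF_YEARS
    print(f"year {year}: {raw:.2f} cases/FTE raw, "
          f"{weighted:.2f} weighted units/FTE")
# Raw productivity appears to rise (2.00 -> 2.20) while complexity-
# weighted productivity falls (3.60 -> 3.00).
```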
This problem of adjusting for quality and complexity is not unique to SB/SE process improvement projects—the data available to process improvement project managers are the same data used throughout SB/SE to measure productivity and otherwise manage enforcement operations. SB/SE managers used data on the number of cases completed and the time it took to complete them to measure output. Such data were usually only partially adjusted for quality, and only once were they adjusted for complexity. Opportunities to make more such adjustments were missed.

An example of a complete adjustment for complexity is the Compliance Support Case Processing Redesign team's use of a proxy for complexity. The project illustrates both the shortcomings of SB/SE's productivity data and the feasibility of some adjustments using other currently available information. The team wanted to measure the work needed to enter examination and collection case data into the information system, holding complexity constant, but direct measures of complexity were not available. While developing their new process, the team knew that more complex cases were to be assigned to higher-grade clerks. The team used the grade of the clerk to adjust output for complexity. Although not a direct measure of relative complexity, the grade level of the clerks gave managers a means to hold complexity constant and better identify performance increases that were due to real changes in productivity. Such an adjustment increases the credibility of the team's estimate that IRS would save up to 385 positions from the proposed redesign.

SB/SE has systems in place that measure quality against current standards but do not account adequately for changes in standards of quality. The Exam Quality Measurement System (EQMS) and the Collection Quality Measurement System (CQMS) use samples of audit and collection cases, respectively, to determine if IRS standards were followed and compute scores that summarize the quality of the case. Generally, the scoring is done on a numerical scale. For example, EQMS uses quality scores that range from 0 to 100. To SB/SE's credit, most of the projects that we reviewed used EQMS and CQMS scores in an attempt to control for quality changes. Unfortunately, these scores may not adequately reflect changes in standards of quality. For example, the IRS Restructuring and Reform Act of 1998 placed additional documentation requirements for certain collection actions on SB/SE collections staff, such as certifications that they had verified that taxes were past due and that sanctions were appropriate given the taxpayers' circumstances. SB/SE has changed the standards used in EQMS and CQMS to reflect the new requirements but has not changed its quality scale to account for the new, higher level of quality implied by the new standards. As a result, two exams with the same quality scores, one done before passage of the act and one after, may not have the same level of quality. If the way that SB/SE computes its quality scores does not adequately reflect such changes in quality standards, an increase in staff time needed to complete the additional requirements may be misinterpreted as a decline in productivity.

Opportunities exist to improve SB/SE's enforcement productivity data. Statistical methods that are widely used in both the public and private sectors can be used to adjust SB/SE productivity measures for quality and complexity.
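As one illustration of such a method (a sketch, not the approach SB/SE or GAO used), ordinary least squares can estimate the staff hours a case is expected to take given its complexity and quality; each closed case can then be counted as a multiple of a reference case. All figures below are invented.

```python
# A hypothetical sketch of one such statistical adjustment -- not the
# method GAO or SB/SE used. Each closed case is re-counted as a multiple
# of a reference case, based on the staff hours a regression predicts
# for its complexity and quality. All figures are invented.
import numpy as np

# One row per closed case: complexity score, quality score (0-100 scale).
cases = np.array([
    [1.0, 80.0],
    [3.0, 90.0],
    [2.0, 85.0],
    [1.0, 75.0],
    [2.0, 95.0],
])
hours = np.array([9.0, 31.0, 22.0, 10.0, 24.0])  # actual hours per case

# Fit hours ~ b0 + b1*complexity + b2*quality by least squares.
design = np.column_stack([np.ones(len(cases)), cases])
coef, *_ = np.linalg.lstsq(design, hours, rcond=None)

# Expected hours for a reference case (complexity 1, quality 80).
ref_hours = np.array([1.0, 1.0, 80.0]) @ coef

# Adjusted output: each case counts as (its expected hours / ref hours),
# so harder, higher-quality cases count for more than one unit of output.
weights = (design @ coef) / ref_hours
print(f"raw cases closed:  {len(hours)}")
print(f"adjusted output:   {weights.sum():.2f} reference-case units")
print(f"adjusted units/hr: {weights.sum() / hours.sum():.3f}")
```

Tracking the adjusted units-per-hour figure over time would separate real efficiency changes from shifts in case mix or quality standards.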
In particular, by using these methods, managers can distinguish productivity changes that represent real efficiency gains or losses from those that are due to changes in quality standards. These methods could be implemented using data currently available at SB/SE. The cost of implementation would be chiefly the staff time required to adapt the statistical models to SB/SE. Although the computations are complex, the methods can be implemented using existing software. We currently have under way a separate study that illustrates how these methods can be used to create better productivity measures at IRS. We plan to report the results of that study later in 2004.

We recognize that better incorporating the complexity and quality of enforcement cases in enforcement productivity data could entail costs to SB/SE. Collecting additional data on complexity and quality may require long-term planning and investment of additional resources. However, as discussed in the previous paragraph, there are options available now to mitigate such costs. Existing statistical methods could be used in the short term, with currently available data on case complexity and quality, to improve productivity measurement. In addition, IRS's ongoing business systems modernization effort may provide additional opportunities for collecting data.

Our roundtable participants stressed the benefits of productivity analysis. They said that an inadequate understanding of productivity makes it harder to distinguish processes with a potential for improvement from those without such potential. GAO's Business Process Reengineering Assessment Guide also highlighted the importance of being able to identify processes that are in greatest need of improvement.

SB/SE deserves recognition for embracing process improvement and for including so many key steps in planning the projects. To the extent that IRS succeeds in improving enforcement productivity through these projects, resources will be made available for providing additional services to taxpayers and addressing the declines in tax enforcement programs. While the SB/SE projects we reviewed included most of the key steps in our framework, putting guidance in place for future projects to follow would help ensure that all key steps are included and improve project planning. The 20-step framework that we developed for this report is an example of such guidance.

More complete productivity data—input and output measures adjusted for the complexity and quality of cases worked—would give SB/SE managers a more informed basis for decisions on how to improve processes. We recognize that better productivity data will mean additional costs for SB/SE and that, therefore, SB/SE will have to weigh these costs against the benefits of better data. GAO currently has under way a separate study, illustrating how data on complexity and quality could be combined with output and input data to create better productivity measures. This may prove useful to SB/SE managers as they evaluate the current state of their productivity measures. We will report the results of that review later in 2004.

We recommend that the Commissioner of Internal Revenue ensure that SB/SE take the following two actions: (1) put in place a framework to guide planning of future SB/SE process improvement projects (the framework that GAO developed for this report is an example of such a framework) and (2) invest in enforcement productivity data that better adjust for complexity and quality, taking into consideration the costs and benefits of doing so.
The Commissioner of Internal Revenue provided written comments on a draft of this report in a January 14, 2004, letter, which is reprinted in appendix V. The Commissioner agreed with our recommendation that IRS develop a framework to guide future improvement projects. He noted that SB/SE used outside experts to help direct the projects we discuss in our report and that the expertise gained from SB/SE's projects puts the organization in a position to create a framework for future projects. In regard to our second recommendation, the Commissioner agreed in principle with the value of adding to current enforcement productivity data but also expressed concerns about cost and feasibility. His letter also discusses initiatives in progress to improve program management and monitoring in the short term, as well as his intent to explore the use of statistical methods to improve enforcement program productivity measurement and to ensure that they are included in modernization projects. The careful consideration of costs and benefits and steps to improve measures in the long term are at the heart of our recommendation, and we encourage his ongoing commitment to these efforts. The Commissioner's letter also notes that employee performance goals—one of the steps in our framework—must not violate legal restrictions on the use of certain enforcement data to evaluate employee performance. We agree and clarified language in our report to make clear that our framework step concerns employee performance expectations, not using enforcement data to evaluate employees or otherwise imposing production goals or quotas. In addition to commenting on our recommendations, IRS provided supplemental data on the results of some reengineering projects. Reviewing project results was not part of the scope of our review, and time did not permit us to verify the supplemental data provided by IRS. We conducted our work from September 2002 through November 2003 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-9110 or David Lewis, Assistant Director, at (202) 512-7176. We can also be reached by e-mail at [email protected] or [email protected], respectively. Key contributors to this assignment were Tara Carter, Kevin Daly, Leon Green, Landis Lindsey, and Amy Rosewarne.

The 20-step process improvement framework we identified is broken out into four broad stages, from deciding to make changes to assessing the results of the changes. A fifth stage deals with managing the changes being made and takes place throughout the latter part of the project. Figure 3 places the stages in their chronological order, with the change management stage shown taking place simultaneously with other stages. Within each of the stages of this framework are specific key steps that we developed based on GAO guidance and what we learned from managers in other organizations about the steps they took to ensure that they were embarking on the right projects, designing and implementing the projects appropriately, and accurately assessing their projects' results.
The sections below describe the nature and purpose of the key steps that fall under the different stages. We recognize that some steps may not be appropriate for some projects and that managers need to apply judgment in using this or any other process improvement framework. Development of this framework is described in appendix II. Organizations that base the decision to redesign processes on accurate productivity data and a clear understanding of current processes increase the likelihood that a project will avoid misdiagnosing a problem or setting a less than optimal outcome target. Six key steps are important to accomplishing this. Baseline data are information on the current process that provide the metrics against which to compare improvements and that are used in benchmarking. Productivity measures the efficiency with which an organization uses resources, or inputs, to produce outputs. Specific productivity measures generally take the form of a ratio of outputs to inputs. By establishing a baseline using such measures, a process improvement can be measured in terms of real efficiency gains. For example, the baseline could be used to measure the effect of a successful process improvement as additional output produced by the organization with the same or fewer inputs. Productivity measures may give misleading information if they do not account for the relative complexity and quality of outputs and inputs. A measure of productivity like cases closed per staff year that shows an increase may not indicate a real gain in efficiency if the mix of cases has shifted to less difficult cases or the quality of the work has declined. Besides accounting for complexity and quality, the organization must also choose the appropriate indicators of its outputs and inputs and measure them accurately. Service providers like IRS have outputs that often consist of complex, interrelated activities and that, in many cases, may require multiple output indicators to measure productivity accurately. The specific data needed depend on the characteristics of particular IRS processes. For example, the number and type of output indicators appropriate for services that involve direct contact with taxpayers, such as audits, may be larger and more varied (to reflect the full impact of these services on taxpayers) than those appropriate for services with less or no direct contact, such as forms processing. However, accounting for factors like complexity and quality is necessary for accurate productivity measurement for any process in IRS, regardless of how the specific quantitative measures are defined. A benchmark is a measurement or standard that serves as a point of reference by which process performance is measured. During the "Decision to Change" stage, benchmarking is the solution-building component of process improvement through which an organization compares data on its existing internal processes with external data on similar processes in other organizations, or in other components of the same organization, to identify process improvement needs and outcome targets. Through benchmarking, an organization can identify gaps between its process performance and that of other organizations or other components of the same organization. Benchmarking is a key tool for performance improvement because it provides "real world" models and reference points for setting ambitious improvement goals.
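As a concrete illustration of these ratios, the following minimal sketch (all figures invented) shows how a productivity baseline and an external benchmark together define the improvement gap discussed above:

```python
# Hypothetical illustration of a productivity baseline and a benchmark gap.
# Figures are invented; a real baseline would use the organization's own
# output and input data, adjusted for complexity and quality.

baseline_cases_closed = 12_000   # output over the baseline period
baseline_staff_years = 150       # input over the same period

baseline = baseline_cases_closed / baseline_staff_years  # cases per staff year
benchmark = 95.0  # comparable rate observed in another organization

gap = benchmark - baseline
print(f"baseline: {baseline:.1f} cases per staff year")
print(f"gap to benchmark: {gap:.1f} ({gap / baseline:.1%} potential improvement)")
```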
A process map is a graphical representation depicting the inputs, outputs, constraints, responsibilities, and interdependencies of the core processes of an organization. Acceptable modeling tools and other analysis techniques include flowcharting, tree diagrams, fishbone diagrams, and business activity maps. It is important that a process map define the components of each process, as well as the process's boundaries, dependencies, and interconnections with other processes. If initial process mapping is done at a high level, more detailed modeling is necessary to identify all of the current process's activities and tasks, staff roles and responsibilities, and links and dependencies with other processes, customers, and suppliers. Performance data (e.g., costs, time, throughput) for all activities and tasks should be included on the map or made available elsewhere. The people who actually do the work, as well as the process owner, should validate the mapping. Regulations, policies, laws, and assumptions underlying the processes should be identified. Causal factors are the conditions that initiate the occurrence of an undesired activity or state. Causal factors that are within the span of control of an organization should be addressed during the Target Process Development stage. Causal factors that are beyond the span of control of an organization should be isolated when identifying a problem. Examples of causal factors are legal requirements, mix of inputs, quality of inputs, and staff constraints. An empirical basis for the decision to make a process change is an important step toward an improvement that is optimal and attainable. An empirical basis can be established by using productivity data to define the gap between where the organization currently is and where it wants to be. After deciding to undergo a process improvement, an organization can increase the likelihood of determining the best new process by using productivity data, assessing implementation barriers, and developing feasible alternatives. Solutions can be adapted from best practices found in other organizations. Best practices are processes and procedures that high-performing organizations use to achieve results. An organization should evaluate the pros and cons of each best practice and, if possible, apply its own productivity standards. Ideally, this form of benchmarking should be done with an external organization. Many processes that seem unique to the government actually have counterparts in the private sector, especially in generic areas such as claims processing, loan management, and inventory management. Also, it is important to note that the other organizations do not have to be particularly similar, or even do similar work. For example, Xerox used L.L. Bean to improve order fulfillment. Looking at processes in dissimilar organizations can actually lead to the most fruitful improvements because it stimulates new thinking about traditional approaches to doing work. Alternatives are different process designs that would likely result in the same or a similar outcome. An organization's analysis of alternative processes should consider benefits, costs, and risks. The performance results that each alternative could be expected to achieve should be determined. This can be done using methods such as prototyping, limited pilot testing, modeling, or computer simulation.
In addition to performance, alternatives can be scored by any number of factors, including feasibility, budget, political appeal, implementation time, payback time, and risk (a sketch of such scoring appears after this passage). The team should explore each alternative thoroughly enough to convincingly demonstrate its potential to achieve the desired performance goals, fully describe the types of technical and organizational changes necessary to support each goal, and, if possible, test key assumptions. The selection of a target process from among alternatives needs an empirical basis in the form of quantitative analysis. The decision to improve and the design of the target process should be linked by an analysis of productivity data that shows how the new process can close the gap between the productivity baseline and the desired outcome. Executive support should come in the form of an executive steering committee—a group headed by the organization's leader to support and oversee the process improvement effort from start to finish. Executive involvement is important because executives are in a position to build credible support among customers and stakeholders, mobilize the talent and resources for a reengineering project, and authorize the actions necessary to change agencywide operations. An executive steering committee's roles include defining the scope of the improvement project, allotting resources, ensuring that project goals align with the agency's strategic goals and objectives, integrating the project with other improvement efforts, monitoring the project's progress, and approving the reengineering team's recommendations. While carrying out these responsibilities, the steering committee must also keep stakeholders apprised of the reengineering team's efforts. Implementation barriers are obstacles that the organization will need to overcome to implement a new process. Examples of implementation barriers include political issues, entrenched workplace attitudes or values, an insufficient number of employees with the skills required for the redesigned roles, collective bargaining agreements, incompatible organizational or physical infrastructure, current laws and regulations, and funding constraints. The impact of these barriers and the costs of addressing them (such as staff training, hiring, and relocation) need to be factored into the process selection decision. If the reengineering team determines that the risks and costs of implementing a preferred new process appear too great, it may need to pursue one of the less ideal but more feasible alternatives. GAO guidance and the other organizations we consulted stress the importance of ensuring, before taking on a process improvement project, the availability of the staff and other resources necessary to complete design and implementation of the changed process. Without adequate resources, an organization undertaking a change runs the risk of an incompletely implemented project. A carefully designed process improvement project needs a similarly well-thought-out implementation in order to be successful. Pilot tests are trial runs of the redesigned process. Pilot testing is a tool used to move the organization successfully to full implementation. Pilot testing allows the organization to (1) evaluate the soundness of the proposed process in actual practice, (2) identify and correct problems with the new design, and (3) refine performance measures. Also, successful pilot testing will help strengthen support for full-scale implementation from employees and stakeholders.
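Returning to the scoring of alternatives described at the start of this passage, the following is a minimal sketch of a weighted scoring matrix. The factors, weights, candidate alternatives, and scores are all invented for illustration; a project team would choose its own.

```python
# Hypothetical weighted scoring of alternative process designs.
# Weights sum to 1.0; each alternative is scored 1 (worst) to 5 (best).

factors = {"performance": 0.4, "feasibility": 0.2, "cost": 0.2,
           "implementation_time": 0.1, "risk": 0.1}

alternatives = {
    "A: automate data entry": {"performance": 5, "feasibility": 3, "cost": 2,
                               "implementation_time": 2, "risk": 3},
    "B: consolidate sites":   {"performance": 3, "feasibility": 4, "cost": 4,
                               "implementation_time": 4, "risk": 4},
}

for name, scores in alternatives.items():
    total = sum(factors[f] * scores[f] for f in factors)
    print(f"{name}: weighted score {total:.2f}")
```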
Postpilot adjustments are corrective actions taken to address trouble spots prior to full implementation. Trouble spots can be pinpointed through the formal evaluation of pilot projects designed to determine the efficiency and effectiveness of the new process. Process owners are the individuals with responsibility for the process being improved. Designating process owners is necessary to ensure accountability. New employee and/or team performance expectations should be established to account for changes in roles and career expectations caused by the new process. Measurable indicators that are currently being used to track and assess employee or team progress should be analyzed to determine if adjustments will be required after the new process is implemented. In the case of IRS enforcement activities, the agency must ensure that the expectations do not violate the legal prohibition on using tax enforcement results to evaluate employee performance or on imposing or suggesting production quotas or goals. In 2002, we reported on IRS's progress toward improving its performance management system; these changes were brought on, in part, by this requirement. Careful assessment of the results of a process improvement project is important in that it may lead to further changes in the process being addressed and may suggest lessons for other projects. An evaluation plan is a way to collect and analyze data in order to determine how well a process is meeting its performance goals and whether further improvements are needed. Good performance measures generally include a mix of outcome, output, and efficiency measures. Outcome measures assess whether the process has actually achieved the intended results, such as an increase in the number of claims processed. Efficiency measures evaluate such things as the cost of the process and the time it takes to deliver the output of the process (a product or service) to the customer. The data needed to conduct outcome assessments later on need to be identified during project planning to ensure that they are available and collected once implementation begins. Change management focuses on the adjustments that occur in the culture of an organization as a result of a redesigned process. Research suggests that the failure to adequately address—and often even consider—a wide variety of people and cultural issues is at the heart of unsuccessful organizational transformations. Similarly, for process improvement efforts, redesigning a process involves not only the technical or operational aspects of change but also overcoming a natural resistance to change within the organization. Successfully managing change reduces the risk that improvement efforts will fail because of that resistance. An organization needs to establish a change management strategy that addresses cultural changes, builds consensus among customers and stakeholders, and communicates the planning, testing, and implementation of all aspects of the transition to the new process. Change management activities focus on (1) defining and instilling new values, attitudes, norms, and behaviors within an organization that support new ways of doing work and overcome resistance to change, (2) building consensus among customers and stakeholders on specific changes designed to better meet their needs, and (3) planning, testing, and implementing all aspects of the transition from one organization structure or process to another. Executive involvement is important for successful change management.
Executive support strengthens upper management's backing for the project and serves to reinforce the organization's commitment to the proposed changes. In a roundtable meeting held by GAO to obtain the perspectives of the private sector, one organization mentioned that providing continuous feedback to its employees is a critical element of a change management program. It also described the importance of consistently updating those employees who would be directly affected by a change initiative. Keeping employees informed of decisions and recognizing their contributions are important elements of developing positive employee attitudes toward implementing process improvement initiatives. Ongoing communication about the goals and progress of the reengineering effort is crucial, since negative perceptions could be formed and harden at an early stage, making the implementation of the new process more difficult to achieve. If change management is delayed, it will be difficult to build support and momentum among the staff for implementing the new process, however good it might be.

Establish a Transition Team

A transition team is a group of people tasked with managing the implementation phase of process improvement projects. A transition team should include the project sponsor, the process owner, members of the process improvement project team, and key executives, managers, and staff from the areas directly affected by the changeover from the old process to the new. Agency executives and the transition team should develop a detailed implementation plan that lays out the road to the new process, including a fully executable communication plan. The process owners responsible for managing the project cannot effectively convey its goals and implementation strategy unless the transition team sets up a viable mechanism to keep employees and external stakeholders informed. Training and redeploying the workforce is often a major challenge and generally requires substantial preparation time. When a process is redesigned and new information systems are introduced, many of the tasks workers perform are radically changed or redistributed. Some positions may be eliminated or cut back, while others are created or modified. Workers may need to handle a broader range of responsibilities, rely less on direct supervision, and develop new skills.

We began development of a process improvement framework by reviewing previously developed GAO guidance related to business process reengineering. We also reviewed guidance that GAO has recently issued on assessment frameworks for other major management areas. GAO's Business Process Reengineering Assessment Guide recognizes that the steps for planning process improvement need to be adapted to the magnitude of the projects and the particular circumstances of an organization. To supplement the GAO business process reengineering guidance, we held a half-day roundtable meeting with the Private Sector Council and two of its member companies, Northrop Grumman (a $25 billion defense enterprise) and CNF (a $5 billion transportation services company). We also discussed process improvement planning with public sector managers with experience in revamping complex processes. Through a review of publicly available information and discussions with SB/SE staff, we found that the tax agencies in the states of California, Minnesota, and Florida had gone through substantial process improvement efforts in recent years.
Similarly, the Department of the Interior's Minerals Management Service had carried out substantial process improvement projects. We interviewed officials from these organizations and reviewed documents that they provided. We then used all of this information to adapt GAO's guidance into a 20-step framework appropriate to the SB/SE projects. We judgmentally selected four projects to study in detail from the 15 projects SB/SE had under way. Our goal in selecting projects for detailed review was to cover at least one project in each of the three main enforcement areas that IRS was revamping—audit, collection, and compliance support. We also looked for projects that were sufficiently far along that we considered it reasonable to expect to see either completed steps or plans for remaining steps for most of the project. We selected one project each in the audit and compliance support areas. We found that two projects under way in the collections area were sufficiently far along, so we selected both of them for our detailed review. For the four projects we selected, we used the documentation previously provided to us to identify evidence that SB/SE managers had taken or were in the process of taking the key process improvement project steps we identified. We then discussed our initial findings with IRS officials responsible for the four projects, and they provided additional evidence, both orally and in writing, concerning the elements we had identified as present or absent in our initial document review. We then revised our initial assessments based on the additional evidence that the officials provided. Our assessments also included review by a GAO project design specialist, in addition to our usual quality control procedures. We also recognized the need for flexibility in the application of our criteria, in that not all of the steps we identified necessarily make sense for every project. Where a particular step did not logically apply to a particular project, we listed it as "not applicable" in our assessment. For instance, the Collection Taxpayer Delinquent Account Support project we reviewed in detail did not change processes that staff were asked to carry out, so we rated the step about developing a training plan as "not applicable." Where a step was not fully completed but the project team did a number of elements of the step, we assessed that step as "partial" in our matrix. We did not evaluate the success so far or the likelihood of success of any of the projects we reviewed. We also did not evaluate the effectiveness with which project steps were completed. For example, we did not evaluate the quality of the pilot tests. To assess the usefulness of IRS productivity data as a basis for determining the direction and eventual success of SB/SE process improvement efforts, we reviewed the literature on productivity measurement in tax agencies and in the public sector generally. We also reviewed studies on productivity measurement in service industries with functions similar to IRS's. The following four figures provide summaries of the evidence we used to make specific assessments of the four selected SB/SE process improvement projects. SB/SE management capitalized on the opportunity presented by the IRS reorganization that created its operating division and saw declining productivity trends as an impetus to change. SB/SE had 15 distinct process improvement efforts under way as of November 2003, many with multiple subprojects.
Table 1 provides descriptive information on the 15 projects.
In recent years, the Internal Revenue Service (IRS) has experienced declines in most of its enforcement programs, including declines in audits and in efforts to collect delinquent taxes. Increasing enforcement productivity is one strategy that can help reverse these declines. To this end, IRS is currently planning and has begun implementing enforcement process improvement projects. GAO was asked to assess the extent to which the planning for the projects followed steps consistent with both published GAO guidance and the experiences of private sector and government organizations. Specifically, GAO assessed the extent to which four judgmentally selected projects followed a 20-step planning framework developed from that guidance and those experiences. Planning for the four enforcement process improvement projects GAO reviewed included most of the steps in the 20-step framework developed to assess the projects. This increases the likelihood that projects target the right processes for improvement, choose the best target process from among alternatives, effectively implement the project, accurately assess project outcomes, and properly manage the change to the new process. However, none of the projects completed all of the steps. For example, some projects did not fully identify the causes of productivity shortfalls, leaving a risk that the project did not fix the right problem. In the course of this work, GAO found that IRS managers do not have guidance about the steps to follow in planning process improvement projects, increasing the possibility that steps will be omitted. A recurring issue in the four projects was that IRS's enforcement data only partially adjust for the complexity and quality of cases worked. This issue is also a problem for IRS enforcement productivity data generally. Failing to adjust for both complexity and quality increases the risk that trends in productivity will be misunderstood. For example, a decline in the number of cases closed per employee at the same time that case complexity is increasing may not be a real decline in productivity. GAO recognizes that some options for improving productivity data could be costly. However, costs could be mitigated by using existing statistical methods and IRS complexity and quality data.
Under the Postal Reorganization Act of 1970 (the 1970 Act), the Postal Service is an independent establishment in the executive branch that began operations on July 1, 1971. The Postmaster General, Deputy Postmaster General, and the nine presidentially appointed members of the Postal Board of Governors direct the operations of the Postal Service. The 1970 Act set a number of goals, objectives, and restraints for the Postal Service. The Postal Service is to operate in a businesslike manner and is to break even in the long term. Unlike its competitors, who can select the markets they serve, the Postal Service by statute must provide universal service to all urban, suburban, and rural customers at uniform and reasonable rates. To regulate the Postal Service's adherence to ratemaking standards and to ensure that it does not take advantage of its monopoly—granted through the Private Express Statutes—on the delivery of letter mail, the 1970 Act established the Postal Rate Commission as an independent establishment of the executive branch. The 1970 Act requires the Postal Service to file with the Commission a request for changes in rates for all services offered. As part of its request, the Postal Service provides detailed information and data explaining revenue requirements, mail-volume estimates, costing, pricing, and rate design. The Commission must hold public hearings and allow interested parties, including Postal Service competitors, the opportunity to make their views on proposed rate changes known. The Commission is required to provide the Postal Service's governors with its recommended decision on new rates within 10 months of the filing. In making its decision, the Commission is required to take into account the nine criteria (see app. I) specified in the 1970 Act. The ratemaking criteria set forth in the 1970 Act were established during a period when the Postal Service had less competition than it does now. The Postal Service now operates in a different environment because of increasing competition from private companies and advances in electronic communications. In 1992, we reported that Congress should reexamine the nine criteria set forth in the 1970 Act to determine whether they are still valid in light of changing marketplace realities and should consider amending them to state, among other things, that in allocating institutional costs, demand factors are to be given a weight that takes into account the need to maintain the long-term viability of the Postal Service as a nationwide full-service provider of postal services. Since the late 1970s, the Postal Service and the Commission have disagreed over the extent to which the ratemaking criteria allow the use of demand factors to allocate the Postal Service's overhead burden among the various mail classes. The Postal Service believes that demand factors should play a major role in overhead cost allocation in determining prices for various mail classes to recognize market realities, whereas the Commission has in the past placed less weight on demand factors in its pricing decisions than the Postal Service has. This report focuses on this issue as well as on volume discounting. In preparing this report, we reviewed the Commission's rate decision for 1994 (Docket No. R94-1), the Postal Service's testimony supporting the 1994 rate case, expert testimony given on demand pricing, technical papers on postal pricing policies, and our past work. We also discussed the 1994 rate case with Postal Service and Commission officials.
In addition, we reviewed reports that recommended reforms to the ratemaking process, in particular, reports by the Institute of Public Administration and the Joint Task Force on Postal Ratemaking. We received written comments on a draft of this report from the Postal Service and the Postal Rate Commission. We discuss these comments at the end of this report. Copies of the comments are located in appendixes II and III. We did our work in Washington, D.C., between March and May 1995 in accordance with generally accepted government auditing standards. In our March 1992 report, we said that to better compete in the current market, the Postal Service needs more flexibility in setting postal rates and that these rates should be based to a greater extent on economic principles. Therefore, we suggested that Congress should reexamine the 1970 Act to (1) determine if volume discounting by the Postal Service would be considered a discriminatory pricing policy and (2) clarify the extent to which demand pricing should be considered in postal ratemaking. These pricing mechanisms could help minimize mail volume losses due to competitive forces and help keep rates lower for most mail classes over the long run. The reasons underlying our position follow. Three mail categories are subject to significant direct competition in which Postal Service competitors provide discounts to large-volume customers: parcel post, Express Mail, and Priority Mail. These three categories accounted for $4.7 billion, or 10 percent, of total Postal Service 1994 revenues. As we reported in March 1992, the Postal Service lost major market share in the multibillion-dollar parcel post and Express Mail markets. Although other factors contributed, such as the operating costs faced by the Postal Service and the quality of service, a key element in this loss was that the Postal Service could not offer competitive prices to large users. The Postal Service's Priority Mail (second-day) service is its fastest-growing service and has a mix of statutorily protected and unprotected material. According to the Postal Service's Origin-Destination Information System database, about 53 percent of Priority Mail volume consists of small parcels and packages not subject to the Private Express Statutes or the urgent letter regulation. This market is being pursued by competitors of the Postal Service through aggressive pricing strategies and service offerings. In our March 1992 report, we said that if the Postal Service is to be more competitive, it will need greater pricing flexibility in markets exposed to direct and growing competition, including its second-day market, as well as its overnight and parcel post markets. The Postal Service lacks authority to revise rates quickly or grant volume discounts to users of its competitive services. It has proposed volume discounts for Express Mail and certain international services. The Commission did not accept the Express Mail proposals. However, after a federal district court ruled that the Postal Service's proposed volume discounts for international mail service unreasonably discriminated among mail users and could not be implemented, the United States Court of Appeals for the Third Circuit reversed the district court's ruling and upheld the authority of the Postal Service to implement volume discounts. The Postal Service is a multiproduct, regulated enterprise subject to varying degrees of competition in its product lines.
Since the late 1970s, there has been a basic disagreement between the Commission and the Postal Service on the extent to which the principles of economically efficient pricing, or Ramsey pricing, can be applied to postal ratemaking. Ramsey pricing has been used in varying degrees as a basis for ratesetting in regulated industries, and its advantages have been analyzed at length in the economic literature. Under Ramsey pricing, an agency that regulates a natural monopoly would set prices so that in each market segment, the percentage markup would be inversely proportional to the elasticity of demand in that segment. For example, available evidence from Postal Service econometric models shows that demand for First-Class Mail is more inelastic than demand for third-class mail. In this situation, use of Ramsey pricing, or the inverse elasticity rule, would result in allocating a higher-than-average percentage of the institutional costs to First-Class Mail and a lower-than-average percentage to third-class mail. It should be noted that Postal Service estimates included with R94-1 show that demand for 15 selected mail categories is inelastic. In one category, Express Mail, demand is elastic. Under Ramsey pricing, the markups depend on relative elasticities, not on whether demand for a particular postal service is elastic or inelastic. As illustrated in R94-1, on the basis of Postal Service elasticity estimates, a 10-percent increase in the First-Class letter rate would result in about a 2-percent loss in volume. Although the Postal Service and the Commission both agree that market factors should play a role in ratemaking, our March 1992 report described the different views and strategies they have in applying these factors in the ratemaking process. Since our report, another omnibus ratemaking proceeding has been completed (Docket No. R94-1). In the 1994 rate case, the Postal Service's strategy was to keep the rate change process relatively simple and provide enough revenue until it could propose a major rate reclassification. It requested a 10.3-percent increase for most major subclasses, which the Postal Service said was less than the economywide rate of inflation since its March 1990 filing. The Commission did not accept the proposed uniform rate increase, stating that the resulting rates for some classes would not be in accordance with the 1970 Act's requirement that the Commission recommend rates that are fair and equitable. As in previous rate cases, one disagreement in R94-1 centered on the Postal Service's proposed allocation of a large portion of the $19.7 billion in total institutional costs to First-Class letters and third-class bulk mail, which together account for 83 percent of postal mail volume and 78 percent of postal mail revenue. These allocations are made as "markups" to the costs that can be attributed to each mail class. The Postal Service proposed to mark up by 81.5 percent the costs attributed to First-Class letters. According to the Commission, this markup would result in First-Class Mail absorbing 77 percent of total institutional costs—an increase of 5 percentage points over the contribution approved in the 1990 rate case. The Commission considered this an excessive burden for First-Class mailers, considering that the costs attributed to First-Class Mail had declined from 60 percent to 58 percent since the 1990 case.
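The following minimal sketch illustrates the inverse-elasticity rule described above. The roughly -0.2 First-Class elasticity is implied by the 10-percent/2-percent example; the third-class figure and the scaling constant are invented for illustration (in practice, the constant would be set by the break-even requirement).

```python
# Sketch of the inverse-elasticity (Ramsey) rule: the percentage markup in
# each market segment is inversely proportional to that segment's elasticity
# of demand. The third-class elasticity and the constant k are invented.

elasticities = {"First-Class letters": -0.2, "third-class bulk": -0.6}
k = 0.1  # in practice, set so total contribution just covers institutional costs

for mail_class, e in elasticities.items():
    lerner = k / abs(e)  # (price - marginal cost) / price
    print(f"{mail_class}: markup = {lerner:.0%} of price")
```

In this invented example, the less elastic class (First-Class) bears a markup of 50 percent of its price, while the more elastic class bears about 17 percent, which is the pattern the rule implies.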
In support of its uniform rate proposal, the Postal Service said that the cost allocations proposed in R94-1 for First-Class letters and third-class bulk mail were more in accord with Ramsey pricing principles than were the allocations in recent Commission-recommended decisions. In addition, the Postal Service said that its emphasis on demand factors is consistent with the criteria in 39 U.S.C. 3622(b), in particular, section 3622(b)(2), dealing with the value of the mail service to both the sender and the recipient. The Commission believed that the Postal Service's proposed allocation of institutional costs to these two major mail categories would be a significant departure from previous rate-case decisions. The Commission's stated objective in previous rate cases was to have First-Class markups slightly above the systemwide average and third-class markups slightly below the systemwide average. The Commission allows a lower markup for third-class bulk regular mail to reflect its "higher elasticity of demand, the potential for volume diversion to alternative delivery, and the need to set rates which are responsive to the market," as well as to recognize "the low intrinsic value of its service standards and service performance." The Commission calculated the rate changes necessary to return to the relative markup relationships that were recommended in the 1990 rate case. On the basis of this analysis, the Commission found that the third-class bulk regular rate would require a 17-percent increase rather than the uniform 10.3-percent increase proposed by the Postal Service. In its recommendation to the Board of Governors, however, the Commission limited the third-class bulk regular rate increase to 14 percent. The Commission tempered the rate increase to reflect its concern with the impact a larger rate increase would have on users of this service. As finally recommended, the First-Class letter markup was 131 percent of the systemwide average, and the third-class bulk mail markup was 90 percent of the systemwide average. While the Commission accepted the Postal Service's proposed 32-cent rate for the First-Class stamp, it recommended a smaller increase than the Postal Service proposed for postcards and no increase in the extra-ounce rate for letters weighing more than 1 ounce. The Postal Service and the Commission also disagreed on the resulting rate increases among the competitive mail categories. For example, the Commission recommended a lower average rate increase (4.8 percent) for Priority Mail overall than the uniform rate increase (approximately 10.3 percent) proposed by the Postal Service because it believed the proposed rate would place an unfair institutional cost burden on this mail component. Similarly, the Commission recommended a lower rate increase for Express Mail (8.0 percent compared with 10.2 percent) because Express Mail had the highest demand elasticity of any mail class. While it recommended lower rates in two competitive categories, the Commission recommended a higher rate increase (18 percent) for fourth-class parcel post, another highly competitive market, instead of the Postal Service's proposed 13 percent. The Commission believed that this small mail component should make a higher contribution to institutional costs than that proposed by the Postal Service. As we noted in our March 1992 report, the Postal Service and the Commission do not agree on the extent to which demand factors can be used to price postal products. There appear to be two principal sources of disagreement.
First, section 3622(b) of the 1970 Act specifies nine criteria to be used in setting postal rates. (See app. I.) These criteria set a number of potentially conflicting objectives, and the Postal Service and the Commission disagree on the relative emphasis to be placed on each of them. Second, the implementation of a pricing scheme that includes demand factors depends crucially on the availability and quality of data on economic variables and on the econometric methodology that is used to analyze the data and derive estimates of relative demand elasticities. The Commission has generally been more pessimistic than the Postal Service about whether the current state of the art is sufficiently advanced to permit heavy reliance on demand-based pricing. With regard to these disagreements, we made the following observations in our 1992 report, which we believe are still germane. First, we recognize that existing law requires the Commission to balance multiple objectives in setting the rate structure. For that reason, we do not advocate the application of Ramsey pricing principles to the exclusion of other considerations. However, the pursuit of diverse objectives comes at a price in terms of loss of consumer welfare, as well as possible erosion of the Postal Service's competitive position in the long run. Further, there is every reason to believe that changes in the economy that have taken place since 1970 have increased the potential cost to the Postal Service and the economy of pursuing diverse objectives. Resolving this situation may require that Congress clarify the ratemaking criteria set forth in the 1970 Act. Second, we are aware of ongoing disagreements among econometricians who have studied technical issues related to demand-based pricing. However, we continue to believe that decisions should be made on the basis of the best information available and that policymakers should not wait for such controversies to subside before taking action. Postal ratemaking is a complex process that usually takes 10 months—the statutory deadline established by Congress in 1976. This period does not include the time the Postal Service spends preparing a rate case or the time it takes for an appeal when the Board of Governors and the Commission do not agree. In the last rate case (R94-1), the Commission issued its recommended decision in less than 9 months. While we do not know how long the process should take, various study groups believe that the current process takes too long for the Postal Service to respond to today's rapidly changing market conditions. The ratemaking process begins when the Postal Service files a formal request with the Commission for rate changes. The Postal Service provides detailed information and data explaining (1) revenue requirements, (2) mail volume estimates, (3) costing, (4) pricing, and (5) rate design. As required by the 1970 Act, the Commission holds public hearings and allows interested parties the opportunity to make their views known. A typical rate case can involve up to 100 parties, 150 witnesses, and several rounds of hearings lasting many days or weeks. In addition to the Postal Service and an officer of the Commission representing the interests of the general public, the parties and witnesses represent an array of interest groups, including (1) commercial mailers, (2) publishers and publishers' associations, (3) Postal Service competitors, and (4) Postal Service unions.
The most important and time-consuming parts of the proceedings center on the Postal Service data explaining the attribution and assignment of costs to specific services or classes of mail and on the rate design based on those data. As long as the core letter mail business—represented largely by First-Class and third-class mail and accounting for about 80 percent of revenues—is protected by the Private Express Statutes, some type of regulatory oversight will be necessary. The President's Commission on Postal Reorganization (the "Kappel Commission"), whose 1968 report persuaded Congress to pass the 1970 Act, said that "were we to recommend a privately-owned Post Office," which it did not, "rate regulation by an independent Federal commission would be a necessary and appropriate corollary." Instead, the Kappel Commission recommended that Congress establish an independent government-owned postal corporation. The Kappel Commission said that it saw no advantages to, and had serious problems with, proposing the regulation of a government corporation by another government body. Over the 25-year period since the 1970 Act, many studies, including four by us, have proposed changes to the postal ratemaking process. The remaining sections of this report focus on proposals for modifying the postal ratemaking process contained in two recent and important studies that were completed in fiscal year 1992. These studies, like our pricing report, focused on ratemaking changes to reflect the competitive environment in which the Postal Service operates. The findings and recommendations of earlier studies are generally revisited in these more recent reports. Because of the contention between Postal Service management and the Commission over the 1990 rate case, the Board of Governors contracted with the Institute of Public Administration in May 1991 to study the ratemaking process. The study examined the process by which prices are set for mail services and assessed the process in terms of timeliness, flexibility, simplicity, and fairness. The Institute's report to the Board of Governors in October 1991 concluded that the ratemaking process had adversely affected the Postal Service's ability to serve the public and compete in a changing competitive environment. The study found that the process had become too cumbersome, rigid, and narrow to best serve the overall financial interests of the Postal Service and its customers. The Institute made a number of recommendations that would, among other things, allow the Postal Service more flexibility to compete, as well as an increased ability to protect the system from financial loss. It did not make any specific recommendations for changing the rate criteria. However, it stated that (1) "the full range of factors listed in the Postal Reorganization Act should be used in redefining rate criteria" and (2) the Commission's use of "historical average" markups to guide ratemaking "is an inappropriate criterion, and not on the list in the Act." This latter point was consistent with our view in the 1992 report on pricing postal services in a competitive environment. The Institute recommended that the Board of Governors and the Commission establish a joint task force to draft a comprehensive revision of the rules governing ratemaking and classification and propose a strategy for reform of the process.
Among many other ideas, it also offered several that we believe merit further consideration: (1) base an omnibus rate case on a 4-year financial plan, rather than on a 1-year test period; (2) have the Postal Service and the Commission agree on categories of information to be submitted with the plan, which should become regular products of budgeting and information systems, thus reducing the need for special statistical studies for ratemaking; and (3) permit the Postal Service to compete on "level playing fields" in its competitive markets, while also constantly improving its existing core services by controlling costs and improving efficiency. The Institute also proposed legislative changes that we believe merit consideration: (1) require the Commission to determine which segments of Postal Service proposals are competitive and use expedited review processes for rate changes on these segments; (2) give the Postal Service experimental authority to market-test new products and service enhancements without being subjected to the standard rate and classification procedures of the Commission; (3) change the requirement that unanimous consent of the Board of Governors is needed to reject or modify a Commission-recommended decision to a two-thirds majority requirement; and (4) eliminate the second round of rate-case reconsideration. In response to the Institute's report, the Postal Service and the Commission established a joint task force to examine the problems of ratemaking and to provide proposals for new procedures that would eliminate some of the structural rigidities. The Commission and the Governors each appointed four members to the task force. The eight-member task force started its work in January 1992 and issued its unanimous report on June 1, 1992. The task force found "a need for more flexibility in pricing by the Postal Service, a need for greater predictability of prices, and a continuing need for greater accountability in postal financial performance." The task force proposed a number of recommendations, none of which has been implemented. Based on our past work on postal ratemaking, our observations on some of the key recommendations follow. First, the task force recommended that postal ratemaking be based on a 4-year, 2-step rate cycle. Under the 4-year cycle, the Commission would recommend rates for the first 2 years of the cycle and project, but not recommend, rates for the remaining 2 years. A midcycle proceeding would be held to validate or adjust the earlier proposed rates, but its scope would be limited in that the Commission would not revisit cost attribution methods, volume estimating methods, or pricing policies and other factors affecting the assignment of institutional costs. According to the task force, the proposed 4-year process would (1) provide better rate matching to marketplace realities, (2) provide more predictable rate increases in smaller increments, (3) reduce the costs of the ratemaking process, and (4) improve accountability in financial performance. Looking at the Postal Service's financial and operating needs over a 4-year period, rather than a single year as is currently practiced, was similar to the proposal suggested by the Institute.
Among the issues that need further study are (1) whether the Postal Service can accurately project revenues and expenses for 4 years; (2) how the specific proposal would be implemented; (3) what rules and procedures would need to be changed in the two-stage process; and (4) what the views of the Postal Service, mailers, and other interested parties on the proposed rate-case cycle would be. To address these issues, the Commission issued a Notice of Proposed Rulemaking in August 1992 containing proposed rules on implementing the 2-phase, 4-year rate cycle for omnibus rate proceedings. In its October 13, 1992, comments, the Postal Service disagreed with the proposed rules, believing there were more disadvantages than advantages. Basically, the Postal Service wanted a more flexible approach to general rate proceedings and did not want to be locked into a rigid 2-phase, 4-year rate cycle. While the Postal Service did not support the proposed general rate cycle, it encouraged the Commission to formulate procedures to address other recommendations made by the joint task force (see below), which the Postal Service believed would be more responsive to its needs in a competitive environment. Second, the task force suggested changes in how rates are set for mail that directly competes with products offered by the private sector. The three service areas it identified as competitive classes were Express Mail, parcel post, and heavyweight Priority Mail. The task force recommended that the Commission adopt a "rate band" approach to introduce more flexibility in setting rates for these products. Under this proposal, the Commission would recommend upper and lower bands for each rate element within the rate category's rate structure. Within these bands, the Postal Service would be free to select specific prices after giving appropriate notice to its customers. In establishing the rate bands, the Postal Service and the Commission would ensure that the lower rate band covered attributed costs and made a minimum acceptable contribution to institutional costs to "protect against the possibility of cross-subsidy" from another mail class. Third, the task force proposed developing a system of "declining block rates" to give postal customers incentives to increase usage. This recommendation, if adopted, would allow the Postal Service to offer discounts to large-volume users in its competitive markets. When the Postal Service proposed discounting schemes in past rate cases, a major issue was rate discrimination, as discussed in our March 1992 report. In addition, the task force made a series of recommendations to help the Postal Service experiment with new product lines and changes in service, which currently are subject to lengthy reviews by the Commission and the public. These recommendations include accelerated review procedures for marketing new products and services and multiyear cost recovery for new service introductions. As we previously mentioned, the Postal Service filed a petition with the Commission on April 10, 1995, to obtain more flexibility in ratemaking. In this petition, the Postal Service asked the Commission to consider the recommendations of the task force that the Postal Service had not rejected in October 1992. In addition, the Postal Service has filed a proposal with the Commission to establish a market-based classification schedule that, among other things, restructures First-Class and bulk regular third-class mail.
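Returning to the task force's rate band proposal described above, the following is a minimal sketch (all dollar figures invented) of how a band with a cost-based floor would screen proposed prices:

```python
# Hypothetical illustration of the "rate band" idea: the Postal Service may
# choose any price within Commission-recommended bands, and the floor equals
# attributed costs plus a minimum institutional-cost contribution, which
# protects against cross-subsidy from other mail classes. Figures invented.

attributed_cost = 3.10    # cost attributable to the rate element
min_contribution = 0.40   # minimum acceptable institutional-cost share
upper_band = 5.00         # ceiling recommended by the Commission

lower_band = attributed_cost + min_contribution

def rate_is_allowed(proposed: float) -> bool:
    return lower_band <= proposed <= upper_band

for rate in (3.25, 4.10):
    status = "allowed" if rate_is_allowed(rate) else "outside the band"
    print(f"${rate:.2f}: {status}")
```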
Although we still believe that Congress should consider changes in policies concerning volume discounting and demand pricing, such consideration might be more useful after the outcome of these Postal Service initiatives is known. Furthermore, other changes to the 1970 Act may be required if the Postal Service is to be competitive, as discussed in our September 1994 labor-management report. Today, the Postal Service is competing with communication technologies and private carriers for the delivery of services in markets that in 1970 were its sole domain. Many observers believe the current ratemaking process takes too long for the Postal Service to respond to today's rapidly changing market conditions. The proposals that we and others have offered—to improve the effectiveness of the postal ratemaking process, ensure financial accountability, and give the Postal Service more flexibility to price and compete in the marketplace—provide the Postal Service, the Postal Rate Commission, and the Subcommittee with a variety of ideas to consider in reforming the ratemaking process. The Postal Service and the Postal Rate Commission provided written comments on a draft of this report; they are located in appendixes II and III. The Postal Service said that Congress should not defer consideration of the issues raised in our 1992 report while the Postal Service initiatives are pending before the Commission. The Postal Service believes that the ratesetting criteria should be clarified by an explicit congressional determination that market demand factors be given substantial weight in pricing postal products. In addition, it believes that Congress should make clear through an amendment to the 1970 Act that appropriate economic factors, such as marginal costs, should be given a relatively large role in establishing an attributable cost threshold for rates. The Commission said that it would welcome a review of the ratesetting criteria in the 1970 Act, and it agreed with our suggestion that Congress defer this review until the two pending Postal Service initiatives are concluded. The Commission said it would not comment on the merits of volume discounting because this issue is pending before it. Regarding market-based pricing, the Commission said it disagreed with the conclusion in this report and our 1992 report that postal rates should be based to a greater extent on demand-pricing principles. The Commission had several overall criticisms of our report, saying that we produced a report that was not within our proper institutional role, that we failed to address key issues, and that we did not sufficiently understand the economic theory underlying postal ratemaking. We do not believe that these criticisms are warranted. It is important to understand that our objective was to report to Congress on the implications of a greater or lesser reliance on demand pricing for setting postal rates, recognizing the need to balance demand pricing with the pursuit of other goals. It was not our role or goal to reduce the postal ratesetting process, which is inherently complex, to a single formula or set of formulas specifying the exact weight to be given to demand factors vis-a-vis other considerations. We have made several changes to this report to clarify this point. Because this was not our objective, we did not present an exhaustive discussion of all the technical aspects of the economics of postal ratemaking.
In our 1992 report, we analyzed some of the more important issues as they relate to the application of criteria prescribed in the 1970 Act for ratemaking. The basis for our conclusions that these criteria are matters that require consideration by Congress is spelled out in the objectives, scope, and methodology section of our 1992 report and the scope and methodology section of this report. The Commission (1) summarized what it considered to be the conclusions and recommendations of our 1992 report and this update of that report, (2) stated that our report had a “major error” because it believes the effects of Ramsey pricing on the Postal Service’s rates and long-run finances will be different than we reported, and (3) argued that the effects of demand pricing on the Postal Service’s competitiveness will be different than we reported. The Commission also said that (4) the conditions necessary for Ramsey pricing to achieve efficient consumption patterns are not met, (5) Ramsey pricing would not have a substantial effect on consumption patterns, and (6) disagreements between the Commission and the Postal Service do not necessarily imply that the ratesetting process is intrinsically defective. Below, we respond to each of these positions. The Commission’s summation of our work is inaccurate in certain crucial respects. We never predicted that Ramsey pricing would ultimately lower rates for all classes of mail, as the Commission asserts. Rather, both of our reports said that demand pricing, along with volume discounting, could help keep rates lower for most mail classes over the long term. Further, we do not agree that our conclusions would apply only in “extreme and improbable conditions,” for the reasons given in the following sections. We do not concur with the basis for the Commission’s second point, which deals with an alleged “major error” in this report. The Commission argues that, because the demand for most postal services is relatively inelastic, the effects of demand pricing on the Postal Service’s rates and long-run finances will be different than we reported. The Commission argues that the total institutional-cost contribution from competitive postal services could decrease if their markups were reduced. We agree that this could happen under certain conditions. In particular, it might happen where demand is very inelastic. While the precise magnitudes of future elasticities of demand are unknown, we do not believe that the situation described by the Commission applies to the Postal Service in the long run. Rather, we believe that, if mailers are increasingly offered alternatives and postal rates continue to increase as in the past, the Postal Service will face considerably more competition in some markets. This would likely lead to elasticities of demand that are higher (in absolute terms) than those reported for the Commission’s most recently recommended postage rates (Docket No. R94-1). Further, we question the relevance of the hypothetical example provided in footnote 5 of the Commission’s letter regarding the impact of adjusting the rates on a single mail class. Not only might the demand for various mail classes change significantly in future years, but also it is the relative elasticities that are relevant, not absolute elasticities. The Postal Service is required to operate subject to a break-even constraint. Thus, the task is one of determining the relative markup on different classes in order to achieve a systemwide average markup that just covers institutional costs. 
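To make the break-even markup problem concrete, the following is a standard textbook statement of Ramsey (inverse-elasticity) pricing under a break-even constraint. We present it only as background for the discussion above and below; it is not a reconstruction of the Commission's or the Postal Service's actual methodology.

```latex
% Stylized break-even markup problem (textbook Ramsey pricing).
\[
  \sum_i (p_i - c_i)\, q_i(p_i) = F
  \qquad \text{(markups across all classes just cover institutional costs $F$)}
\]
\[
  \frac{p_i - c_i}{p_i} = \frac{\lambda}{\lvert \varepsilon_i \rvert}
  \qquad \text{(the markup on class $i$ is inversely proportional to its elasticity)}
\]
```

Here $p_i$ and $c_i$ are the rate and attributable (marginal) cost of mail class $i$, $\varepsilon_i$ is its own-price elasticity of demand, and $\lambda$ is a constant chosen so that the break-even constraint holds. Because $\lambda$ is common to all classes, relative markups depend on relative, not absolute, elasticities, and raising the markup on one class permits lower markups on others.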
This means that the markups on all classes of mail, taken together, should cover institutional cost. When the markup on one class of mail is increased, the markup on one or more other classes of mail must be lowered to maintain a break-even operation overall, all other factors being equal. We continue to believe that demand pricing, along with volume discounting, could help keep rates lower for most mail classes over the long term. Such pricing mechanisms could help minimize mail volume losses due to increasing competition in some postal markets. The extent of the curtailment of volume losses will depend in part on the future demand for the various classes of mail. We do not believe that over the long term, the outcomes that we have indicated are at all improbable in light of past mail volume trends. Our 1992 report discusses how the Postal Service has already lost major market share in parcel post and Express Mail due to competition. One reason for this loss was the application of the current pricing criteria and the resulting limited ability to (1) price postal services with sufficient weight given to market factors, i.e., the relative demands for the various services, and (2) use pricing schemes that are routinely used by the Postal Service’s competitors, e.g., volume discounts. In its third point, the Commission argues that the Postal Service’s competitive position would not be improved by a shift toward Ramsey pricing. The Commission’s arguments emphasize second-class mail, parcel post, Priority Mail, and Express Mail. As we noted in our 1992 report, the principal issue we discuss has been and remains the allocation of institutional cost between First-Class and third-class mail, which together accounted for 93 percent of total mail volume and 84 percent of revenue in fiscal year 1994. We believe that this is where the potential benefits of demand-based pricing will primarily be found. Further, the Commission argues that the Postal Service will be incapable of realizing any contribution to overhead, or what the Commission calls “profit,” in competitive markets over the long term. The Commission’s logic is that competition will drive the rates toward the level of marginal costs, and thus drive the “profit margins” toward zero. We find this argument unpersuasive. If this logic were applied to private carriers, who are subject to similar market forces and presumably also have cost structures involving overhead or fixed costs, it would imply that their “profit margins” would also be driven to zero. However, this is implausible, at least for viable competitors over the long term, because the firms would be operating at a loss. We agree that the Postal Service’s experience in the parcel post delivery market is important, but we disagree with the Commission about the exact nature of the lesson to be learned. As we discussed in our 1992 report and in this report, at the time it was losing parcel delivery business to its competitors, the Service was limited in its ability to use pricing techniques similar to theirs. We recognize, nonetheless, that the use of different pricing techniques alone will not guarantee financial stability. As we have pointed out in this report, unless significant progress is also made in, for example, controlling labor costs and improving labor relations, the Service may still be unable to compete effectively, regardless of ratemaking changes. Finally, the Commission said that our report noted “with approval” the Postal Service proposal for a rate increase. 
In fact, we merely cited the Service’s view that its proposal regarding cost allocations for First-Class letters and third-class bulk mail was more in line with Ramsey pricing. It was not our purpose to approve or disapprove of any specific proposal. Also, as previously indicated, the debate surrounding cost allocations in prior rate cases has focused primarily on First-Class and third-class mail and not on Priority Mail and Express Mail, which the Commission has chosen to emphasize. In its fourth point, the Commission argues that the conditions necessary for Ramsey pricing to achieve economically efficient consumption patterns are not present. Regarding the Commission’s arguments in this area, we have several observations. First, although monopoly ratepayers may perceive that they are paying a disproportionate share of fixed costs under demand-based pricing schemes, we believe that over the longer term their rates would likely increase less under a demand-based pricing scheme than under other schemes, for the reasons we stated in our 1992 report. This view has been supported by others who have studied postal economics. With regard to the Commission’s argument that Ramsey pricing is viewed by many as unfair to competitors, we repeat that we do not advocate the use of demand-based pricing, and certainly not Ramsey pricing, to the exclusion of all other considerations. To respond to concerns about fairness to competitors, the Commission would be free to use this factor in its deliberations, since it is included in the criteria specified in the 1970 Act. Further, the issue of fairness to competitors involves considerations that go far beyond ratesetting, into such areas as the existence and magnitude of the postal monopoly, and hence are beyond the scope of either our 1992 report or this report. As noted in this report, we are reviewing aspects of the postal monopoly and plan to report on that review later. We agree with the Commission about the importance of measuring costs properly. However, if the inability to measure both stand-alone costs and incremental costs is a problem for Ramsey pricing or other demand-based pricing schemes, it would seem to be equally problematic for other types of pricing schemes. In our 1992 report, we noted the need for better cost and demand data. However, as we noted both in that report and this one, we continue to believe that decisions should be based on the best information available, and that decisions on the continued appropriateness of the rate criteria in the 1970 Act should not be postponed pending improvements in the data. Further, with regard to the Commission’s statement that the Postal Service’s underlying direct costs (e.g., labor costs) are not at a technically efficient level, we note that its operating costs, whether efficient or not, must be taken as a given for ratesetting purposes. Again, to the extent that this is a problem, it is equally problematic for both demand-based pricing and other forms of pricing. We did not address Postal Service workforce issues in this report; however, as noted in the text, we have done so in other recently issued reports on automation and labor-management relations. As a final comment on the Commission’s fourth point regarding the conditions necessary for Ramsey pricing, we note that the assertion that no regulatory body requires rates to conform strictly to Ramsey pricing principles is not relevant to our report.
Again, we did not state that demand pricing in general, or Ramsey pricing in particular, should be used for setting postal rates to the exclusion of all other factors. As we noted earlier, Ramsey pricing has received considerable attention in the academic literature, and it has been applied to varying degrees in ratesetting proceedings in regulated industries. The Commission’s fifth point is that a rate structure that is derived from Ramsey pricing formulas would not affect consumption patterns in a way that differs substantially from the impact of the rate structure that the Commission actually adopted in Docket No. R94-1. In fact, we made no estimate of the impact of Ramsey pricing on consumption patterns. We are aware of the estimates that are cited by the Commission. We note that these estimates are based on short-run estimates of demand elasticities and that the long-run scenario may be quite different. Further, the Commission asserts that our draft report criticizes the Commission for its actions. In fact, our purpose was merely to describe the differences in reasoning expressed by the Postal Service and the Commission in their respective applications of the postal ratemaking criteria set forth in the 1970 Act. Regarding the Commission’s observation on the differing views of the Commission and the Postal Service, we agree that regulators often disagree with regulated entities over the prices to be set in a particular case. However, our report addressed the more fundamental issue of whether the criteria established by Congress in 1970 for setting postal rates are still valid today. Critical to addressing this issue is the question of the weight to be assigned to demand factors, relative to other criteria prescribed in the 1970 Act. It is the difference in Postal Service and Commission perspectives regarding this relative weight that is of concern to us and that we believe requires consideration by Congress. We are sending copies of this report to the Board of Governors and Postmaster General of the U.S. Postal Service, the Commissioners of the Postal Rate Commission, and other interested parties. The major contributors to this report are listed in appendix IV. If there are any further questions or if assistance is needed, please call me on (202) 512-8387.
GAO examined the U.S. Postal Service's proposals for modifying the postal ratemaking process, focusing on: (1) how the current ratemaking process could be improved; and (2) the effects of the 1970 Postal Reorganization Act on postal rates. GAO found that: (1) the Postal Service has petitioned the Postal Rate Commission to give it more flexibility in pricing postal products and to establish a market-based mail classification schedule; (2) new Postal Service pricing mechanisms could minimize mail volume losses and keep rates lower for most mail classes; (3) Congress may have to clarify the 1970 ratemaking criteria because the Postal Service and the Commission disagree on the extent to which market forces should affect postal rates; (4) postal ratemaking usually takes 10 months to complete, a figure that does not include the time the Postal Service spends preparing rate cases and appeals; (5) proposed postal ratemaking reforms include developing accelerated procedures for market testing new products, establishing rate bands for competitive products, and allowing volume-based rates for high-volume shippers; and (6) the Postal Service needs to be able to control labor costs and resolve workforce issues to remain competitive in the postal marketplace.
Prior to the 1970s, the federal government made housing affordable to low- and moderate-income households by subsidizing the production of privately and government-owned properties with below-market interest rate mortgages, direct loans, and other development subsidies. Under these production programs, the rent subsidies were project based, and tenants received assistance only while living in the subsidized units. In the early 1970s, concerns were raised about the effectiveness of these programs: Many moderate-income tenants benefited from federal assistance, while lower-income families did not; federal costs for producing the housing exceeded the private sector costs to produce the same services; and allegations of waste surfaced. Interest in a more cost-effective approach led Congress to explore options for using existing housing to shelter low-income tenants. Section 8 of the United States Housing Act of 1937, as added by the Housing and Community Development Act of 1974 and subsequently amended, authorized programs that reflected both approaches—a tenant-based rental certificate program (now called the voucher program) for use in existing housing and a project-based program. The project-based program comprises multiple subprograms, including Section 8 New Construction/Substantial Rehabilitation, Loan Management Set-Aside, and Property Disposition. Appendix III contains detailed descriptions of these subprograms. The voucher program provides vouchers to eligible households to rent houses or apartments in the private market from landlords who are willing to accept the vouchers. Voucher holders are responsible for finding suitable housing that complies with HUD’s housing quality standards. The voucher program pays the difference between the lesser of the unit’s gross rent or a local “payment standard,” and the household’s payment, which is generally 30 percent of monthly income, after certain adjustments. To be eligible to apply for assistance, households must have very low incomes—less than or equal to 50 percent of area median income (AMI) as determined by HUD. Under the provisions of the Quality Housing and Work Responsibility Act of 1998 (P.L. 105-276), at least 75 percent of new participants in the voucher program must be households with extremely low incomes—at or below 30 percent of AMI. Households already participating in the voucher program remain eligible for assistance as long as their incomes do not rise above 80 percent of AMI. The voucher program is administered by over 2,500 state and local PHAs that are responsible for inspecting dwelling units, ensuring that rents are reasonable, determining households’ eligibility, calculating households’ payments, and making payments to landlords. HUD provides funding to PHAs for administrative expenses as well as rental subsidies. The project-based program subsidizes rents at properties whose owners have entered into contracts with HUD to make rents affordable to low-income households. Often these properties were financed with mortgages insured or subsidized by HUD or with bonds issued by state and local housing finance agencies. Property owners and managers are responsible for administering the program at about 22,000 properties nationwide. The project-based program operates much like the voucher program, paying the difference between a HUD-approved unit rent and the household’s payment, which is generally equal to 30 percent of adjusted monthly income.
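The subsidy arithmetic for the two programs, as described above, can be summarized in a short sketch. This is a simplification assuming a flat 30 percent tenant contribution; the income adjustments and utility allowances that the actual programs apply are not modeled, and all dollar figures are hypothetical.

```python
# Simplified Section 8 subsidy arithmetic (monthly dollars). Income
# adjustments and utility allowances are not modeled.

def voucher_subsidy(gross_rent, payment_standard, adj_monthly_income):
    # Voucher program: HUD pays the gap between the lesser of the unit's
    # gross rent or the local payment standard and the tenant's share.
    tenant_share = 0.30 * adj_monthly_income
    return max(0.0, min(gross_rent, payment_standard) - tenant_share)

def project_based_subsidy(approved_rent, adj_monthly_income):
    # Project-based program: HUD pays the gap between the HUD-approved
    # unit rent and the tenant's share.
    return max(0.0, approved_rent - 0.30 * adj_monthly_income)

print(voucher_subsidy(750, payment_standard=700, adj_monthly_income=900))  # 430.0
print(project_based_subsidy(650, adj_monthly_income=900))                  # 380.0
```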
In general, only households with low incomes (i.e., at or below 80 percent of AMI) are eligible for assistance, and since 1998 at least 40 percent of new residents must have extremely low incomes. Private property owners and managers have requirements similar to those for PHAs for administering the project-based program—they must ensure that households meet program eligibility requirements and must calculate households’ payments. HUD pays rent subsidies directly to the property owners but does not pay them a separate administrative fee, as the owners’ administrative costs are reflected in the HUD-approved rents. However, because of limited staff resources and the large number of project-based Section 8 contracts, HUD pays contract administrators (state and local PHAs) administrative fees to oversee most of the contracts, a task that requires processing monthly payment vouchers, reviewing property owners’ tenant information files, and addressing health and safety issues. Each year, Congress appropriates budget authority to cover the costs of new Section 8 contracts, renewals of expiring contracts, amendments to existing project-based contracts, and administrative fees. For the period covered by our review (1998 through 2004), Congress appropriated funds for the Section 8 programs in HUD’s Housing Certificate Fund account. Over time, Congress has changed the way it funds the Section 8 programs. From 1974 to 1983, Congress made large up-front appropriations to cover the projected costs of multiyear Section 8 contracts. Initially, voucher contracts were written for 5 years and were renewable, at HUD’s discretion, for up to 15 years, while the terms for project-based contracts ranged from 15 to 40 years. When these initial contracts began to expire in 1989, HUD required new budget authority to renew them. Owing to budget constraints, Congress funded Section 8 contracts with amounts that led to shorter contract terms. HUD initially renewed expiring contracts generally for 5-year terms but starting in the mid-1990s switched to 1-year terms for the voucher program and either 1- or 5-year terms for the project-based program. The Section 8 programs are not entitlements, and as a result, the amount of budget authority HUD requests and Congress provides through the annual appropriations process limits the number of households that Section 8 can assist. Historically, appropriations for the Section 8 programs (as well as for other federal housing programs) have not been sufficient to assist all households that HUD has identified as having housing needs—that is, households with very low incomes that pay more than 30 percent of their income for housing, live in substandard housing, or both. According to HUD data for calendar year 2003, Section 8 and other federal housing programs assisted an estimated 4.3 million households, or 27 percent of all renter households with very low incomes (see fig. 1). HUD estimated that over 9 million very low income households (about 59 percent) did not receive assistance and had housing needs. Of these 9 million households with housing needs, over 5 million had what HUD terms “worst case” needs—that is, they paid over half of their income in rent, lived in severely substandard housing, or both. The combined number of authorized vouchers and project-based units grew from about 2.93 million to 3.36 million from 1998 through 2004—an overall increase of about 15 percent and an average annual increase of about 2 percent (see fig. 2). 
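The growth figures above can be verified with simple compound-growth arithmetic; 1998 through 2004 spans six annual compounding periods. A minimal check (the voucher-only figures are discussed in the next paragraph):

```python
# Average annual growth implied by the start- and end-year counts
# (in millions of vouchers/units).

def avg_annual_growth(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{avg_annual_growth(2.93, 3.36, 6):.1%}")  # combined: ~2.3% per year
print(f"{avg_annual_growth(1.60, 2.09, 6):.1%}")  # vouchers: ~4.6% per year
```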
Most of this increase occurred from 1998 to 2001, when about 327,000 vouchers were added. However, as figure 2 shows, this overall trend masked a difference in the trends for the individual programs: The number of vouchers grew by 31 percent during this period, while the number of project-based units declined by 5 percent. It is important to note that at any given time the actual number of households assisted with Section 8 programs is likely to be less than the number of authorized vouchers and project-based units, because some authorized vouchers and units may not be in use. For example, vouchers may go unused because households may not be able to find units that meet the program’s affordability requirements and quality standards. (As discussed subsequently in this report, the extent to which authorized vouchers are actually used to rent units—and thus incur subsidy costs—is called the voucher utilization rate.) Project-based units may not be in use during the period when landlords are seeking new occupants for units that have been vacated. From 1998 through 2004, the number of authorized vouchers grew from about 1.60 million to almost 2.09 million, an increase of 490,944 vouchers (see fig. 2). This increase represents an average annual growth rate of almost 5 percent. The new vouchers were composed of both “incremental vouchers” and tenant protection vouchers. Incremental vouchers are those that resulted from Congress’ decision to expand the program to serve more households. Notices published in the Federal Register and HUD data indicate that the agency awarded 276,981 incremental vouchers and 205,853 tenant protection vouchers from 1998 through 2004 (see table 1). Incremental vouchers consist of three major types: fair share, welfare-to-work, and special purpose. Fair share vouchers are those that HUD allocates to PHAs on a competitive basis using a formula that accounts for poverty rates, renter populations, vacancies, overcrowding, and other measures, in each county and independent city throughout the country. Welfare-to-work vouchers are designated for households for which a lack of stable, affordable housing is a barrier to employment and that are making the transition to economic self-sufficiency. Finally, special purpose vouchers include those designated for a variety of special needs populations, such as persons with disabilities. Fair share vouchers accounted for about 56 percent of the total, while welfare-to-work and special purpose vouchers represented 18 percent and 26 percent, respectively. From 1998 through 2002, Congress provided new funding each year for a large number of incremental vouchers to help address the unmet housing needs of very low-income households, and fair share vouchers were the key type of incremental vouchers used to increase the number of assisted households. Starting in 2003, Congress provided no new funding for fair share vouchers, but did provide new funding for a smaller number of special purpose vouchers. By 2004, however, no new funding was provided for any type of incremental voucher. Unlike incremental vouchers, tenant protection vouchers do not add to the total number of authorized units under Section 8 (and other HUD programs for which they are used) because they replace one form of HUD assistance with another. 
Tenant protection vouchers are offered to eligible households that had received housing assistance under various HUD programs (including the project-based program, certain HUD mortgage insurance programs, and public housing) before the assistance was terminated. As part of its annual budget request, HUD estimates the number of tenant protection vouchers it will need and the amount of funding required for these vouchers. As table 1 shows, the number of tenant protection vouchers awarded from 1998 through 2004 remained relatively stable, from a low of 22,839 in 2002 to a high of 36,000 in 2001. The number of authorized project-based units fell from 1.33 million to 1.27 million, a decline of approximately 62,000 units (see fig. 2). This represented an average annual decrease of less than 1 percent. The number of project-based Section 8 units declined primarily because either property owners or HUD decided not to renew Section 8 contracts. Owners may choose not to renew their contracts and to opt out of the program for a variety of reasons, including plans to convert the properties to market-rate rental units. HUD may decide not to renew some contracts if property owners have not complied with program requirements, such as maintaining the property in decent, safe, and sanitary condition. If a property owner or HUD decides not to renew a project-based Section 8 contract, the property is no longer required to comply with program rules, including affordability requirements. To protect Section 8 households from rent increases that may result when owners opt out of their contracts, HUD provides a special type of tenant protection voucher known as an enhanced voucher. Enhanced vouchers are designed to ensure that tenants can afford to remain in the properties that are no longer receiving project-based Section 8 assistance—even if the rents for these units exceed those for the regular voucher program (such vouchers are considered enhanced because they allow these higher subsidies). If HUD terminates a project-based Section 8 contract, the agency usually provides affected families with regular vouchers to allow them to find other housing. The substitution of tenant protection vouchers for subsidies previously paid for project-based units has helped minimize the net loss of Section 8 units. Although both budget authority and outlays for the Section 8 programs increased significantly from 1998 through 2004, the rates of growth differed. Appropriations of new budget authority grew more than twofold during this period (105 percent), partly because HUD needed more budget authority to cover the cost of renewing long-term contracts that began to expire in 1989. In comparison, from 1998 through 2004 total Section 8 outlays rose at a slower rate (50 percent). However, this increase masks substantial differences in the rates of growth for the individual Section 8 programs. Although HUD did not separately track outlays for the voucher and project-based programs during this period, we estimate that outlays increased by 93 percent for the voucher program and by 6 percent for the project-based program. Appropriations of new budget authority for Section 8 grew from $9.4 billion in 1998 to $19.3 billion in 2004, an overall increase of about 105 percent and an average annual rate of 13 percent (see fig. 3). During 2001, new budget authority grew by 22 percent, the largest single-year increase during this period. For the other years, the annual increase in new budget authority ranged from 10 to 17 percent. 
Over the same period, new budget authority for Section 8 accounted for an increasing share of HUD’s total annual appropriations, growing from 41 percent in 1998 to 54 percent in 2004. Part of the growth reflects the effects of inflation. After adjusting for inflation, new budget authority rose from $10.6 billion in 1998 to $19.3 billion in 2004 (82 percent). Appendix IV contains detailed information on budgetary costs in nominal and inflation-adjusted dollars. HUD did not separately track budget authority for the voucher and project-based programs for the period covered by our analysis. HUD budget officials told us they had no need to do so because Congress funded both programs under a single budget account, the Housing Certificate Fund. However, to provide better transparency and strengthen oversight of the programs, Congress directed HUD to create two new budget accounts—Tenant-Based Rental Assistance and Project-Based Rental Assistance—for all new Section 8 appropriations. Beginning with its 2006 budget, HUD has provided separate information for each program. The substantial growth in new budget authority stemmed primarily from decisions to renew expiring long-term Section 8 contracts. From 1974 to 1983, Congress made large up-front appropriations to cover the projected costs of multiyear Section 8 contracts that were written in those years. Because Congress and HUD funded these long-term contracts up front, they generally did not require new budget authority during the years specified in the contracts. During the early to mid-1990s, large numbers of these long-term contracts reached the end of their terms. Decisions to renew the contracts created the need for new budget authority. As figure 4 shows, the trend in the numbers of expiring contracts continued from 1998 through 2004. Specifically, the number of project-based units with expiring contracts that were renewed grew significantly—by 373,310 units from 1998 through 2004. (As noted previously, because some project-based contracts were not renewed, the total number of authorized project-based units declined during this period—even as the number needing new budget authority grew.) Additional new budget authority was required each year to cover the renewal of 818,095 vouchers from 1998 through 2004. A factor also contributing to the need for new budget authority was a declining amount of “carryover” budget authority. Carryover consists of unobligated budget authority (not yet committed to specific contracts), including funds that have been “recovered” (de-obligated from expired contracts that did not need all of the budget authority that had been obligated for them). Congress may rescind any portion of such unused budget authority and in fact enacted rescissions in the Section 8 program during each of the years we examined. Total budget authority available to renew Section 8 contracts in any year thus consists of both the carryover, net of rescissions, as well as new budget authority, and represents all of the funds available to HUD for future obligations and outlays. Typically, HUD has had large amounts of carryover funds in the Section 8 programs, and these carryover funds have helped offset the need for new budget authority. However, as shown in figure 5, the carryover amounts generally declined during the period we examined. 
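The relationship among carryover, rescissions, and new budget authority is a simple identity, illustrated below with the 1998 figures cited in the surrounding text. The $2.9 billion rescission is an inference, not a reported 1998 figure: it is the high end of the $1.6 billion to $2.9 billion annual range and the value that makes the identity balance, so treat it as an assumption.

```python
# Total available budget authority = carryover (net of rescissions)
# + new budget authority. Dollars in billions, 1998.

carryover     = 7.5   # unobligated balance carried into 1998
rescission    = 2.9   # assumed; high end of the reported annual range
new_authority = 9.4   # 1998 appropriation of new budget authority

total_available = carryover - rescission + new_authority
print(f"{total_available:.1f}")  # 14.0, the total cited for 1998
```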
For example, about $7.5 billion in carryover funds in 1998 lessened the need for new appropriations of budget authority in that year, whereas the decline in carryover funds in later years increased the need for new appropriations. Partly because of declining carryover amounts during this period, total available budget authority grew at a slower rate than new budget authority. More specifically, total available budget authority grew from $14.0 billion to $20.9 billion over this period (fig. 5), an average annual rate of about 7 percent. Congress rescinded between $1.6 billion and $2.9 billion each year during the period. As figure 3 shows, annual outlays for Section 8 programs grew from $14.8 billion in 1998 to $22.2 billion in 2004, an overall increase of about 50 percent and an average annual increase of 7 percent. About 78 percent of this growth occurred from 2002 to 2004, with 2003 representing the largest annual increase ($2.5 billion). Despite this growth, total Section 8 outlays accounted for a relatively stable share of HUD’s total outlays over this period, ranging from 45 percent in 1998 to 52 percent in 2004. Outlays for Section 8 generally exceeded new budget authority for the program each year from 1998 through 2004 (see fig. 3). This pattern resulted primarily from the way the program was originally funded. As noted previously, initial Section 8 contracts generally had long terms and received large up-front appropriations of budget authority to cover their projected costs. As a result, HUD has for many years—including the 1998 through 2004 period—made outlays for contracts that have not required new budget authority. During this period, the gap between outlays and new budget authority narrowed as the number of expiring vouchers and project-based units that required new budget authority grew and were renewed on an annual basis. If all Section 8 contracts had reached the end of their multiyear terms and were renewed annually, new budget authority requirements would more closely approximate the expected annual outlays. Since HUD did not separately track outlays for the voucher and project-based programs (for the same reasons it did not do so for budget authority), we developed our own estimates of outlays for both programs based on data from the accounting systems HUD uses to record Section 8 rental subsidy payments. On the basis of these data, we estimated that from 1998 through 2004: Outlays for the voucher program rose from $7.5 billion to $14.5 billion (fig. 6)—an overall increase of 93 percent and an average annual rate of increase of 12 percent. The largest annual increases—approximately 20 percent—occurred in both 2002 and 2003. About 56 percent of the total increase in outlays also occurred in these 2 years.
In contrast, the project-based program accounted for only $419 million (6 percent) of the overall increase in total Section 8 outlays. In 1998, the voucher and project-based programs each represented about half of the total outlays for the Section 8 programs. In a relatively short time span, voucher outlays surpassed those for the project-based program by a significant margin, and by 2004 the voucher program was responsible for about 65 percent of total Section 8 outlays. Outlays for the project-based program increased at a rate slower than inflation from 1998 through 2004. Specifically, after adjusting for inflation, outlays dropped from $8.3 billion to $7.8 billion, a decrease of 6 percent. The growth in voucher outlays, however, significantly outpaced the rate of inflation, increasing from $8.5 billion to $14.6 billion (71 percent) in inflation-adjusted dollars. Additional information on outlays in nominal and inflation-adjusted dollars appears in appendix IV. A number of policy decisions and market factors contributed to the growth in total Section 8 outlays from 1998 through 2004, including decisions to expand the number of households receiving vouchers, increases in the average rental subsidy per household, and other program costs. Figure 7 shows the general relationship between these policy decisions and market factors and Section 8 outlays. Although these factors also affected budget authority, our analysis focuses on outlays because, unlike budget authority, outlays occur when payments are made and thus reflect the actual annual cost of providing rental assistance. Congress and HUD have taken steps to limit further growth in Section 8 program costs—for example, by changing the program’s funding formula for vouchers. Decisions to increase the number of households receiving vouchers were a significant driver of growth in voucher outlays from 1998 through 2004. As noted previously, between 1998 and 2004 Congress authorized funding for a total of 490,944 incremental and tenant protection vouchers. This trend, coupled with a rise in the percentage of authorized vouchers in use (the utilization rate) that started in 2001, increased the number of assisted households and, in turn, the amount of outlays for vouchers. We estimate that about $3.0 billion (43 percent) of the increase in voucher outlays from 1998 through 2004 was attributable to the additional assisted households resulting from the authorization of new vouchers and higher utilization rates (table 2). Certain policy changes were designed to increase average voucher utilization rates. For example, starting in 2002, PHAs that applied for fair share vouchers had to maintain utilization rates of at least 97 percent to be eligible to receive them. Also, according to HUD, Congress’ decision in 2003 to limit the funding basis for voucher contracts to only vouchers that were actually in use effectively encouraged PHAs to increase their utilization rates in order to receive more funding. Using the average annual household subsidy in 2004 for the project-based program ($5,948), we estimate that the decline of about 62,000 units reduced project-based outlays by roughly $367 million (see table 2). However, this decrease was more than offset by the other factors, leading to an overall increase of $419 million. 
Although the decline in the number of project-based units caused outlays for the project-based program to be less than they would have been otherwise, its effect on total Section 8 outlays was offset to a large degree by the issuance of tenant protection vouchers to households displaced from their project-based units. As noted previously, under the project-based program (or other HUD programs), tenants in units receiving assistance that is terminated (e.g., because the unit owner decides not to renew an expiring contract) may face higher rental payments. To protect these tenants from potentially unaffordable rent increases and continue providing assistance, Congress made tenant protection vouchers available. In effect, outlays from the project-based program were shifted to the voucher program, although not on a one-for-one basis because the per household subsidy costs were different for project-based units and vouchers. Increases in the average rental subsidy per household also contributed to the growth in outlays for the voucher and project-based programs, although the average subsidy increased more for vouchers than for project-based programs. As figure 8 shows, the average subsidy for vouchers grew from $4,420 to $6,262 from 1998 through 2004, an overall increase of 42 percent. The annual rate of increase in the average per household subsidy for vouchers averaged 6 percent during the period, ranging from a low of 1 percent in 1999 to a high of 11 percent in both 2002 and 2003. The high rate of growth in 2002 and 2003 coincided with the largest yearly increases in voucher outlays (see fig. 6). For 2004, the annual rate of increase slowed to just over 2 percent after several years of substantial growth. The growth in the number of enhanced vouchers, which, as previously noted, allow for higher subsidies, may have contributed to the overall increase. As described in table 2, an estimated $3.6 billion (51 percent) of the increase in voucher outlays was due to growth in the average rental subsidy per household. In comparison, the average rental subsidy per household for the project-based program grew more modestly during the period—from $5,305 to $5,948, an overall increase of 12 percent and an average annual increase of 2 percent. The annual rate of increase in average per household subsidy did not exceed 1 percent from 1998 through 2001 and remained at less than 4 percent from 2002 through 2004. As described in table 2, we estimate that this raised outlays for the project-based program by about $616 million. The decline in the number of project-based units partially offset this increase in program outlays, however. As figure 8 shows, during the period we examined, the per household subsidy in the voucher program was initially less than the project-based per household subsidy but then became greater. However, this trend does not mean that the project-based program has become more cost-effective. Any comparison of the cost-effectiveness of these programs should account for all subsidies received during the properties’ life cycles, adjusted for any differences in unit and household characteristics, such as the number of bedrooms and family size. For example, the average project-based subsidy per household during the period we examined did not account for the effects of past subsidies or for potential future subsidies that may be needed to maintain properties in the program.
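The table 2 estimates discussed above can be approximated with a stylized quantity/price decomposition. In the sketch below, the voucher counts and average subsidies come from this report, but the 1.95 million vouchers-in-use figure is an assumption (authorized vouchers adjusted for an assumed utilization rate); GAO's actual estimates rest on more detailed annual data.

```python
# Stylized decomposition of the ~$7.0 billion growth in voucher outlays:
# a quantity effect (more assisted households) plus a price effect
# (higher average subsidy per household).

added_vouchers  = 490_944     # vouchers authorized, 1998-2004
subsidy_1998    = 4_420       # average annual subsidy per household ($)
subsidy_2004    = 6_262
vouchers_in_use = 1_950_000   # assumed 2004 vouchers in use (< authorized)

quantity_effect = added_vouchers * subsidy_2004
price_effect    = (subsidy_2004 - subsidy_1998) * vouchers_in_use

print(f"quantity effect: ${quantity_effect / 1e9:.1f} billion")  # ~$3.1B vs. table 2's ~$3.0B
print(f"price effect:    ${price_effect / 1e9:.1f} billion")     # ~$3.6B, matching table 2
```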
Similarly, it is important to note that the nationwide trends we present do not reflect the considerable variation that exists across local rental housing markets. That is, even during the period we examined, in some markets the per household subsidy for vouchers may have remained below that for the project-based program. For both the voucher and project-based programs, many policy decisions and market factors influenced the average per household rental subsidy, such as HUD’s fair market rent (FMR) determinations, housing market conditions, household incomes, and policies for limiting the cost of rental assistance. More detailed information on the trend in the average rental subsidy per household and the specific impact of these factors on per household rental subsidies for vouchers are discussed in a subsequent section of this report. Other costs for program administration and special programs contributed to the change in outlays for the voucher and project-based program, although to a lesser extent than the other factors (see table 2). More specifically, according to data from HUD’s accounting systems, administrative costs for vouchers increased by about $368 million from 1998 through 2004. Although complete data on administrative costs for the project-based program were not available, a major administrative expense was HUD’s Performance-Based Contract Administrator initiative, which started in 2000. This initiative, intended to augment HUD’s oversight of project-based Section 8 contracts, added $170 million in outlays from 1998 through 2004. According to HUD, outlays for special programs increased but were relatively small during the period covered by our analysis. There have been multiple special programs, including the Family Self-Sufficiency program, which paid for service coordinators to help participating families achieve economic independence. The Family Self-Sufficiency program accounted for about $50 million in outlays in 2004. Since detailed data on the outlays for special programs were not readily available for this period, we were unable to comprehensively estimate their impact on outlays. HUD has implemented measures to limit increases in the cost of the Section 8 programs. For example, as noted previously, in 2003 Congress authorized changes to HUD’s policies for funding vouchers to slow the growth in new budget authority and, in turn, outlays. Before 2003, Congress appropriated budget authority using a unit-based approach that covered all vouchers authorized in each contract, whether or not all of the vouchers had been utilized. Concerned that appropriations were exceeding actual program needs, Congress changed the formula for funding voucher contracts to a dollar-based approach, basing it on actual expenditures from the previous year plus an inflation factor. In addition, Congress authorized a contingency fund to cover increases in rental costs in excess of the inflation factor. In HUD’s 2004 budget, Congress authorized the creation of a Quality Assurance Division within HUD to provide more oversight of the administration and cost of the voucher program. A key part of this effort involves monitoring and verifying program costs reported by PHAs. The division also audits PHAs’ program records to ensure that voucher costs were reported accurately and monitors local rental market trends to determine whether HUD’s FMRs were set too high or too low. 
In addition, quality assurance staff review PHAs’ compliance with HUD’s requirement that rents for voucher units be reasonable—that is, comparable to rents for similar unassisted units in the market. Congress and HUD have taken further steps since the period of our analysis to limit cost growth. For example, Congress made further changes to the voucher’s dollar-based formula in 2005 that eliminated all contingency funding, so that PHAs were expected to absorb all additional cost increases during the year. To help PHAs keep their costs within their funding levels, HUD issued guidance in 2005 concerning options PHAs could exercise to limit costs. These options included the following:

Reduce payment standards: Because PHAs may set their own payment standards—that is, the maximum rent that can be used to calculate rental subsidies—anywhere between 90 and 110 percent of the FMR for their area, reducing payment standards allows PHAs to limit growth in rental subsidy payments.

Ensure reasonable rents: Statute and HUD regulations require PHAs to compare rents for voucher units to those for comparable unassisted units and reduce rents for voucher units if warranted. To ensure that rents are reasonable, PHAs can conduct more frequent reviews of rents charged by landlords. Any rent reductions would reduce the rental subsidy payments that PHAs make.

Deny moves within and outside PHA jurisdiction: The voucher program allows households to move anywhere within and outside of a PHA’s jurisdiction. However, if a PHA has insufficient funding, it can deny a voucher household’s move to an area that would result in higher subsidy costs—for example, an area with a higher payment standard.

Not reissue vouchers or terminate assistance: Vouchers can become available to new households when assisted households leave the program (turnover). To limit costs, PHAs can choose not to reissue turnover vouchers or pull back outstanding vouchers for other unassisted households searching for housing. PHAs can also terminate assistance if they determine that the funding provided by HUD is insufficient, although according to HUD, the department is not aware of any instance in which a PHA has terminated voucher assistance.

Set higher minimum rents: HUD policy allows PHAs to set a minimum rent for households that can range from as low as $0 to as high as $50. Some PHAs currently allow certain households with very little income to pay rents that are below the minimum rent ceiling (i.e., less than $50). To reduce their costs, these PHAs can raise the minimum rent to $50.

Furthermore, HUD supports proposed legislation—the State and Local Housing Flexibility Act of 2005—that would replace the existing voucher program with the “flexible voucher program.” This proposed program would, among other things, allow individual PHAs to set (within broad federal guidelines) eligibility requirements, the maximum period that a household could receive assistance, and households’ contributions toward rents. According to HUD, this proposed program, which would initially continue to fund vouchers using the dollar-based approach, would create incentives and provide flexibilities for PHAs to manage their funds in a cost-effective manner. For the project-based program, Congress has taken steps to control the cost of rental subsidies, and as our analysis shows, these steps have limited growth in the program’s average rental subsidy per household and thus in outlays.
In 1997, Congress passed the Multifamily Assisted Housing Reform and Affordability Act, which established the Mark-to-Market program. When properties entered the project-based program in the late 1970s through the mid-1980s, HUD often subsidized rents that were above local market levels to compensate for high construction costs and program-related administrative expenses. Thereafter, these rents were adjusted annually using an operating cost factor determined by HUD. In the early 1990s, HUD concluded that the continued growth in subsidy levels would be unsupportable within HUD’s budget limitations. The Mark-to-Market program, which began in 1998, authorized HUD to reduce rents to market levels on project-based properties with HUD-insured mortgages. According to HUD, the program has reduced project-based rental subsidy costs at over 2,700 properties by an estimated $216 million per year since 2000. We developed a statistical model to assess the impact that certain variables—specifically, market rents, payment standards, household incomes, and household and neighborhood characteristics—had on the change in the average rental subsidy per household for the voucher program. Changes in market rents explained a significant part of the increase in the average rental subsidy per household. Specifically, we estimate that from 1999 through 2004, over one-half of the increase in the average per household subsidy was explained by higher market rents, all other things being equal. Higher payment standards and the relatively slow growth in household incomes also contributed to the increase. Although we found that household and neighborhood variables were important determinants of per household rental subsidies, their average values did not vary enough from 1999 through 2004 to cause a significant change in the average per household rental subsidy over this period. Because voucher households rent units in the private market, trends in market rents have a major effect on per household rental subsidies. To assess the impact of market rents on per household rental subsidies, we used HUD’s FMRs as indicators of local market rents. Our model estimated the average per household subsidy that HUD paid in each year (baseline estimate). We then used the model to estimate the average per household subsidy HUD would have paid in each year, had the average market rents remained at the 1999 level, adjusted for overall price level changes. Comparing this figure with the baseline estimate indicates the influence of changes in rents. We estimate that from 1999 through 2004 the average annual rental subsidy per household would have grown from $5,225 to $5,800 (an increase of 11 percent), if the average market rents had remained at 1999 levels, compared with the 24 percent growth, from $5,225 to $6,478, in the baseline estimate (fig. 9). Expressed differently, the effect of market rents accounted for over half of the increase in the average per household subsidy, all other things being equal. In 1998, the Quality Housing and Work Responsibility Act (P.L. 105-276) authorized PHAs to set local payment standards anywhere between 90 and 110 percent of the FMR without the need for prior HUD approval. This flexibility was intended to make it easier for voucher households to find housing successfully, reduce concentrations of poverty by helping voucher households find housing in neighborhoods with higher incomes, and allow PHAs to respond to local market conditions.
The result of this policy was that the average payment standard, as a percentage of the FMR, increased from about 96 percent in 1999 to 103 percent in 2004. The average voucher rent as a percentage of the FMR also increased, rising from about 94 percent in 1999 to 97 percent in 2004 (see app. VI for detailed discussion of the trends in voucher rents). To assess the impact of higher payment standards on the change in per household rental subsidies, we compared our baseline estimate with the average per household subsidy that our model predicted HUD would have paid in each year had the average payment standard, as a percentage of the FMR, remained at its 1999 value. As shown in figure 10, we estimate that over this period the average per household subsidy would have grown from $5,225 to $6,169 (an 18 percent increase) if the average payment standard as a percentage of the FMR had remained at the 1999 level, compared with the 24 percent growth, from $5,225 to $6,478, in the baseline estimate. Further, we estimate that the impact of higher payment standards accounted for about one-quarter of the increase in the average per household subsidy from 1999 through 2004, all other things being equal. Slow growth in household incomes, which did not keep pace with the increases in market rents, also contributed to higher per household rental subsidies. Specifically, from 1999 through 2004, the average income of voucher households grew from $8,779 to $10,086, an overall increase of 15 percent and an average annual rate of about 3 percent. However, market rents, as measured by FMRs, increased by about 23 percent over this period, or an average annual rate of over 4 percent. To determine the impact of household income on the change in per household rental subsidies, we compared the baseline estimate with the estimated amount that our model predicted HUD would have paid had the average household income grown at the same rate as the average market rent. As shown in figure 11, we estimate that over this period the average per household subsidy would have grown from $5,225 to $6,279 (an increase of 20 percent) if the average income had grown as fast as the average market rent, compared with the 24 percent growth, from $5,225 to $6,478, in the baseline estimate. Further, we estimate that the effect of relatively slow growth in the average household income accounted for about 16 percent of the increase in the average per household subsidy, all other things being equal. We analyzed certain household characteristics, such as family size, family types (for example, whether the household was headed by an elderly person or a person with a disability), and others, and found that, while they were major determinants of per household rental subsidies, they did not vary enough over this period to effect significant change in the average per household rental subsidy. Stated differently, these factors exhibited about the same influence on per household voucher subsidies throughout the period, and thus do not help explain the overall trend of increased rental subsidy. In addition, we analyzed the characteristics of the neighborhoods—also important determinants of per household subsidies—where voucher holders live. Specifically, given the significant increases in voucher rents and payment standards, we explored the extent to which the increase in the average per household subsidy was the result of voucher households moving to neighborhoods with less poverty and other favorable characteristics. 
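The counterfactual estimates reported above imply factor shares that can be recovered directly from the figures cited in this section (the $5,225 baseline for 1999, the $6,478 baseline for 2004, and the three hold-one-factor-fixed estimates); a minimal sketch:

```python
# Factor shares implied by the counterfactual estimates reported above
# (average annual subsidy per household, in dollars).

baseline_1999 = 5225
baseline_2004 = 6478
counterfactuals_2004 = {
    "market rents held at 1999 real level":    5800,
    "payment standard/FMR held at 1999 level": 6169,
    "income growing as fast as market rents":  6279,
}

total_growth = baseline_2004 - baseline_1999  # $1,253
for factor, estimate in counterfactuals_2004.items():
    share = (baseline_2004 - estimate) / total_growth
    print(f"{factor}: {share:.0%} of the increase")
# -> 54%, 25%, and 16%: "over one-half," "about one-quarter," and
#    "about 16 percent," as stated in the text.
```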
As with the household characteristics, however, the average values of these neighborhood variables did not vary enough from 1999 through 2004 to cause a substantial change in the average per household rental subsidy over this period. Because we did not have comprehensive data on the quality of rental units in the voucher program, we could not explore whether the trends in higher voucher rents and payment standards were also accompanied by changes in the quality of units occupied by voucher holders. The cost of providing rental assistance has been a long-standing issue for policymakers and has led Congress, on different occasions, to reform various housing programs. Recent proposals for reform have focused on the voucher program, which experienced a significant growth in outlays and constituted nearly all of the increase in total Section 8 outlays from 1998 through 2004. We found that the growth both in the number of assisted households—driven largely by policy decisions to expand this nonentitlement program—and in the average rental subsidy per household explains much of the increase in voucher outlays over this period. In turn, the average per household subsidy rose in large part because of changes in the rental market, use of higher payment standards by PHAs, and household incomes that grew more slowly than rents. To the extent that policymakers wish to stem the rising cost of the voucher program, our analysis suggests that future increases could be mitigated by reducing the number of assisted households, lowering payment standards, requiring households to pay a larger share of their incomes toward rent, subsidizing households with higher incomes, or a combination thereof. However, these actions require making difficult trade-offs between limiting program costs and achieving long-standing policy objectives, such as serving more needy households, having assisted households pay a relatively small share of their incomes in rents, making it easier for voucher holders to find housing (especially in tight rental markets), reducing the concentration of poverty, and giving PHAs the flexibility to respond to local rental market conditions. Congress and HUD have already responded to the increasing cost of vouchers by changing the way the program is funded. Specifically, HUD no longer provides funding to PHAs based on the number of authorized vouchers, but rather based on the prior year’s level of voucher expenditures, adjusted by an inflation factor. While this approach allows HUD to limit the annual rate of increase in the program’s cost, it does not directly address the policy decisions and market factors that we identified as contributing to the increase in program costs. Instead, it will be up to PHAs to exercise their flexibilities and make decisions regarding how to use the voucher funding that they receive from HUD. For example, some PHAs may choose to reduce their local payment standard, a course that, as our analysis suggests, would likely limit growth in voucher costs. The decisions that PHAs make will eventually influence trends in outlays, per household subsidies, and unit rents, and these trends will become more apparent in the years following the period covered by our analysis. We provided HUD with a draft of this report for review and comment. In a letter from the Acting Deputy Chief Financial Officer (see app.
VII), HUD suggested technical clarifications, which we incorporated where appropriate, and made the following comments: HUD noted that the draft report’s discussion of efforts to limit growth in program costs did not cite the department’s recent legislative proposal—the State and Local Housing Flexibility Act—to reform the voucher program. The proposal’s primary mechanism for limiting cost growth is the continued implementation of a dollar-based approach for funding the voucher program. Our draft report discussed the dollar-based approach and its intended impact on program costs. However, in response to HUD’s comment, we added language to the final report describing the legislation’s key provisions and objectives. HUD indicated that the draft report was incorrect in stating that to be eligible for assistance under the voucher program, households must have very low incomes—less than or equal to 50 percent of AMI. HUD said that households must have low incomes—less than or equal to 80 percent of AMI—to be eligible. The income limit that HUD referred to generally applies to households already participating in the voucher program. The income limit cited in our draft report referred to the eligibility criteria for new applicants. We revised the final report to make this distinction clearer. HUD said that our draft report’s discussion of the growth in appropriations from 1998 through 2004 that was due to expiring Section 8 contracts may have inadvertently cited 1989 (rather than 1998) as the year in which contracts began to expire. Based on our analysis of prior studies on this issue, 1989 is generally regarded as the year in which Section 8 contracts started to expire. Contracts that expired, and were renewed with shorter terms in 1989 and afterwards, required new appropriations for renewals in subsequent years, including the years covered by our analysis. Accordingly, we made no changes to the final report. Finally, HUD stated that the draft report did not mention a critical reason that the lower cost per unit in project-based programs did not imply greater cost effectiveness—specifically, that vouchers are used for units that, on average, have more bedrooms and serve larger households than project-based units. In response to HUD’s comments, we revised the final report to reflect the fact that determining the cost-effectiveness of HUD’s housing programs must account for not only all subsidies received over time but also unit and household characteristics. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development and other interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. This report provides information on trends in the size and cost of the Department of Housing and Urban Development’s (HUD) Section 8 program from 1998 through 2004. 
Specifically, our report objectives were to determine (1) the annual numbers of vouchers in the voucher program and units in the project-based programs, (2) the annual new budget authority and outlays for each program, (3) the factors that have affected outlays, and (4) the impact of factors on the average rental subsidy cost per household for the voucher program. To determine the annual numbers of vouchers in the voucher program and units in the project-based program, we obtained and reviewed data on the numbers of authorized vouchers and project-based units from 1998 through 2004 from HUD's budget office. We compared the annual numbers of vouchers and project-based units that HUD provided with information reported in the agency's annual budget requests to ensure that they were consistent. We obtained data on the number of units authorized under the Section 8 Moderate Rehabilitation program from HUD's program offices. We compiled and analyzed HUD notices of funding announcements and awards published in the Federal Register to determine the different types of new vouchers that were added to the program. To determine the annual amount of new budget authority and outlays for each program, we obtained and analyzed data from HUD's budget office, annual budget requests and other budget documents, and audited financial statements. We also reviewed relevant prior reports from HUD, HUD's Office of Inspector General (OIG), the Congressional Budget Office (CBO), and the Congressional Research Service (CRS). Because HUD's budget office was not able to report data on outlays for the voucher and project-based programs separately, we obtained data on rental assistance payments from HUD's accounting systems and estimated the amount of rental assistance payments paid to public housing agencies (PHA) and property owners under each program from fiscal years 1998 through 2004. Specifically, from the HUD Central Accounting and Program System (HUDCAPS), we obtained information on rental assistance payments and other expenses for the voucher and the Section 8 Mod Rehab programs, as well as for a limited number of contracts for the project-based program. From HUD's Program Accounting System (PAS), we obtained similar information for the remaining project-based Section 8 contracts. In total, the data we used comprised approximately 3 million payment records. Our analysis included payment records associated with the voucher and project-based programs only and did not include payment records for other HUD rental assistance programs, such as the Section 202 Supportive Housing for the Elderly and Section 811 Supportive Housing for Persons with Disabilities programs. We included payment records for certain administrative expenses, such as fees paid to PHAs for the voucher program and to Performance-Based Contract Administrators for the project-based program. We compared our estimate of outlays for the voucher, project-based, and Mod Rehab programs and other related expenses (total outlays) with published totals in HUD's annual budget requests. Our estimates using HUDCAPS and PAS were, on average, 0.7 percent less than the totals in HUD's annual budget requests. For 1998 and 1999, our estimates of total outlays varied from the published totals by -1.2 percent and -4.2 percent, respectively. For 2000 through 2004, our estimates of total outlays were within 0.4 percent of the published totals.
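This reconciliation check reduces to a percent-difference calculation. In the sketch below (Python), the published totals and estimates are placeholder values chosen only to mirror the magnitudes described above; they are not HUD's actual figures, and only the comparison logic is meant to be illustrative:

    # Hypothetical totals, in billions of dollars.
    published = {1998: 14.8, 1999: 15.9}
    estimated = {1998: 14.62, 1999: 15.23}

    for year, pub in published.items():
        pct_diff = 100 * (estimated[year] - pub) / pub
        print(f"{year}: {pct_diff:+.1f} percent")  # 1998: -1.2, 1999: -4.2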
One reason for the variation between our estimates and the published totals is that our analysis did not include certain nonrental assistance activities paid for with Section 8 funds. In order to assess the reliability of the data from HUDCAPS and PAS, we reviewed related documentation and interviewed agency officials who work with these databases. In addition, we performed internal checks to determine the extent to which the data fields were populated and the reasonableness of the values contained in the fields. We concluded that the data were sufficiently reliable for the purposes of this report. To identify the factors that have affected outlays, we analyzed our reports and reports by HUD, CBO, CRS, transcripts of congressional committee hearings, and congressional committee reports. We also obtained and analyzed data on rental subsidies per household, a key factor affecting outlays, from two HUD databases—the Public and Indian Housing Information Center (PIC) for the voucher program and the Tenant Rental Assistance Certification System (TRACS) for the project-based program. Using these data, we analyzed trends in unit rents, household incomes, and household rental payments. In order to assess the reliability of the data from PIC and TRACS, we reviewed related documentation and interviewed agency officials who work with these databases. In addition, we performed internal checks to determine the extent to which the data fields were populated and the reasonableness of the values contained in the fields. We concluded that the data were sufficiently reliable for the purposes of this report. To assess the impact of different factors on the average rental subsidy cost per household for the voucher program, we developed a statistical model using data from HUD and the Census Bureau. Specifically, we obtained household-level data from PIC on the rental subsidies per household, unit rents, household incomes, various demographic characteristics, and geographic information about where households were located. We also incorporated information from the 2000 Decennial Census and HUD on neighborhood characteristics at the census tract level. Our model allowed us to estimate the effect of each variable—market rents, household incomes, household and neighborhood characteristics, and a measure of the relationship between the payment standard and HUD’s fair market rent—on the average rental subsidy per voucher household, while controlling for other variables. The PIC data for 1998 did not have complete information for certain fields (such as the fair market rent associated with an individual household), and consequently, we did not include data for 1998 in our model. Appendix V contains further information on the results of our statistical analysis. To address all of the objectives, we interviewed officials from HUD’s Offices of the Chief Financial Officer, Public and Indian Housing, Housing, and Policy Development and Research. We also met with CBO and CRS officials and representatives of various industry and research groups: the Center for Budget and Policy Priorities, the Council of Large Public Housing Authorities, the National Leased Housing Association, and the National Low Income Housing Coalition. We conducted our work in Washington, D.C., and Chicago, Illinois, from April 2005 through March 2006 in accordance with generally accepted government auditing standards. This appendix provides information on the Section 8 Moderate Rehabilitation (Mod Rehab) program. 
The Mod Rehab program was created in 1978 to add to the existing stock of assisted housing. It did this by providing funding to upgrade a portion of the estimated 2.7 million then-unassisted rental housing units with deficiencies that required a moderate level of repair and by providing rental subsidies for low-income households to live in them. Congress funded no new contracts for the Mod Rehab program after 1989 and repealed the program in 1991. Under annual contracts with public housing agencies (PHA) that administer the Mod Rehab program, HUD provides the funding for rental subsidies as well as an administrative fee to the agencies. The administering agencies, in turn, enter into contracts with property owners. Under these contracts, property owners rehabilitate their housing units to meet HUD's standards for housing quality by completing repairs costing at least $1,000 and make the rehabilitated units available to eligible households. In exchange, PHAs screen applicants for eligibility and pay the difference between the approved contract rent and the household's portion of the rent. The Mod Rehab program has features that are common to both the project-based and voucher programs. For example, similar to the voucher program, the Mod Rehab program is administered by PHAs and was intended to utilize the existing stock of privately owned rental housing. However, Mod Rehab is fundamentally a project-based program because the rental subsidy is tied to a specific unit, not the household. During the 11 years that Congress funded new contracts under the Mod Rehab program, the term for the Section 8 contracts was 15 years. When the oldest of these contracts began to expire in 1995 and 1996, HUD instructed PHAs to replace them with vouchers. Since fiscal year 1997, however, HUD has renewed expiring contracts on an annual basis if the owners opt to do so and the properties consist of more than four rental units. As shown in table 3, the Mod Rehab program has undergone significant reductions in the number of units—from 71,659 in 1998 to 34,141 in 2004, a decline of about 52 percent. As with project-based Section 8, owners of Mod Rehab properties can choose to leave the program upon contract expiration, and in these cases, eligible households can receive enhanced vouchers. Data on budget authority for the Mod Rehab program were not available separately. From 1998 through 2004, HUD received budget authority for the Mod Rehab program as part of the overall appropriations for Section 8 in the Housing Certificate Fund account. Starting in its 2006 budget request, HUD included renewal funding for the Mod Rehab program in its Project-Based Rental Assistance budgetary account. Similarly, data on Mod Rehab outlays were not available. However, as we did for the voucher and project-based programs, we estimated Mod Rehab outlays using data from HUD's accounting systems. As table 4 shows, from 1998 through 2004, estimated Mod Rehab outlays decreased by nearly 50 percent, from $472 million to $246 million. The decrease in outlays was due to significant reductions in the number of units assisted under the program. Federal rental housing assistance, which began with the enactment of the U.S. Housing Act of 1937, includes subsidies to construct new affordable housing and to make rents affordable in existing rental housing. From 1937 through 1974, the emphasis was almost exclusively on new construction.
Questions about the cost-effectiveness of new construction led Congress to explore options for using existing housing to shelter low-income families. In 1974, it added Section 8 to the U.S. Housing Act of 1937 and created the Existing Housing Certificate program, the first major program to rely on existing privately owned rental housing and to provide tenant-based, rather than project-based, assistance. Another type of Section 8 assistance, the voucher program, started as a demonstration program in 1983, was made permanent in 1988, and operated simultaneously with the certificate program until 1998. At that time, the two programs were consolidated into the Housing Choice Voucher program, which combined features of both earlier programs. This program is now the largest federal housing assistance program. Table 5 summarizes the Section 8 rental housing assistance programs, including their authorization date and current status. This appendix provides detailed data on total available budget authority and outlays for the Section 8 programs. Since we are evaluating budget trends over a 7-year period, we present the budgetary data in both nominal (current) and inflation-adjusted dollars. We use the gross domestic product (GDP) price index to adjust for inflation and 2004 as the reference year. This appendix provides an overview of the econometric analysis we used to investigate trends in Section 8 rental subsidies per household (housing assistance payments, or HAP) between 1999 and 2004 for the voucher program. These subsidies, which make up the difference between households' payments (usually 30 percent of adjusted income) and the actual unit rent, are limited by the payment standards set by local public housing agencies. PHAs set these payment standards based in part on fair market rents (FMR) that the Department of Housing and Urban Development (HUD) establishes for individual housing markets, generally at the 40th percentile (in some cases the 50th percentile) of the distribution of rents. Raising the payment standard relative to the FMR can provide assisted households with a wider choice of housing, but renting more expensive units raises the cost of the subsidies and thus of the Section 8 programs. Because of the potential influence on program costs, we wanted to investigate the role of HUD and PHA policies in setting payment standards. Since 1998, PHAs have had more leeway than they did previously to increase (or decrease) payment standards relative to the FMR. According to HUD, this authority has been exercised too generously and is a major cause of the recent increase in HAPs. We developed a pooled cross-section time-series model explaining monthly HAPs as a function of a variety of housing market, program, and household characteristics. The results and descriptive statistics are based on a 10 percent sample of voucher (and certificate) household records obtained from HUD's Public and Indian Housing Information Center files. These files provide snapshots of the program as of the end of each calendar year from 1998 through 2004 and contain information on HAPs, gross rents, FMRs, and payment standards as well as household income and other characteristics. The information in a file on a particular assisted household is current as of a point in time—for instance, the date of a program action, usually the date of an annual recertification for program eligibility.
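The subsidy definition above can be written compactly. The following is a minimal sketch (Python), assuming monthly dollar amounts and a 30 percent tenant contribution and ignoring complications such as utility allowances and minimum rents; the function name and arguments are illustrative, not HUD's actual terms:

    def monthly_hap(gross_rent, payment_standard, adjusted_annual_income):
        """Housing assistance payment: the lesser of the gross rent or the
        payment standard, minus the household's payment (usually 30 percent
        of adjusted monthly income), floored at zero."""
        household_payment = 0.30 * adjusted_annual_income / 12
        return max(min(gross_rent, payment_standard) - household_payment, 0.0)

    # Example: $800 rent, a $750 payment standard, and the $10,086 average
    # 2004 adjusted income reported above yield a HAP of $497.85.
    print(monthly_hap(800, 750, 10_086))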
HUD’s Office of Policy Development and Research worked with the underlying administrative files to (1) correct various coding errors and inconsistencies, (2) identify the census tract of each household based on tenants’ addresses, and (3) add information of analytical interest that was not necessarily required for program administration. The date of admission to the program and the date of the program action were used to measure how long the household had been in the program, and other fields were used to indicate any change in the household’s rental unit and whether the household left the program. We used information on the household’s census tract to identify neighborhood characteristics. We also used the census tract information to develop an indicator of neighborhood quality by determining whether the voucher household’s census tract was a HUD-designated qualifying census tract (QCT). We excluded observations with extreme or missing values for key variables, and we excluded duplicate observations in the latest record and the record from the previous year. We also excluded households that appeared to have entered or left the program more than once. We placed each household in one of four categories, based on demographic and labor market characteristics: single female-headed households with children (nonelderly, nondisabled), elderly (including elderly disabled), nonelderly disabled, and all other. Because groups could face different housing and labor market conditions and the variables in our model could have different effects on the level of HAP in each group, we estimated the same model separately for each of the four categories. For instance, disabled households are typically smaller than other households but may require housing with features not commonly available in the general rental stock. Families with children may be larger than other families, and thus require larger units, and may also experience changes in labor market incomes. The purpose of the model is to explain monthly HAPs using an estimating equation that is based on a variety of household, housing, neighborhood, and policy factors. HAPs range from close to zero to the thousands of dollars, with variations in each cross section and over time. In the model, HAPs are explained by the general level of market rents, tenant incomes, a measure of neighborhood quality, time period, and a measure of PHA payment standard policy. We also included in our model a series of explanatory dummy variables for household size, duration in program, termination and moves, and metropolitan areas. All dollar amounts (e.g., HAP, market rents, adjusted income) are expressed in 2004:Q4 terms using the price index for Personal Consumption Expenditures from the Bureau of Economic Analysis. Because gross rents are important in defining the level of HAPs, we control for the general level of market rents in order to examine the effects of other variables. We use the FMR for this purpose because it provides considerable variation within cross sections and across time. The level of income is also important in determining the level of HAPs, and we used adjusted income as reported in the file. This choice is potentially problematic, as the level of HAPs may influence income by encouraging program participants to seek work or not. However, this problem is somewhat mitigated by the fact that adjusted income is a predetermined rather than an actual amount. 
Specifically, the adjusted income reported in the file is the PHA's projection of a household's income in the upcoming year based on income information from the previous year, taking into account expected changes in hours, wages, and labor force status. Finally, the file did not include information concerning household characteristics, such as occupation, education, and experience, that would help explain variations in assistance payments at the individual household level. The policy variable of interest relates to the way PHAs set payment standards (relative to the FMR). We define a ratio variable to measure this policy by calculating the average payment standard and average FMR by year and bedroom size for each PHA and then calculating the ratio of the year-specific, PHA-specific payment standard to the FMR, scaled so that a value of 100 indicates parity with the FMR. Missing payment standard information was set equal to the FMR, for a ratio value of 100. (To limit the effects of outliers, we excluded from the analysis those households with payment standard ratios of less than 75 or more than 120.) The baseline specification uses the year-specific, PHA-specific payment standard to FMR ratio as a continuous variable (also truncated at 75 and 120). Neighborhood quality is measured in two ways, both of them based on the household's census tract. Our base specifications use HUD-designated QCTs, which are in less desirable neighborhoods than other tracts. Thus rents and HAPs should be lower in those neighborhoods, given that the market rent variable distinguishes higher-rent markets from lower-rent markets. Because the same households can appear in the data set in multiple years (up to as many as six times), the error terms for a given household are not likely to be independent of one another, because unobserved household characteristics may persist over time. However, to the extent that this correlation affects the confidence intervals around the coefficient estimates (rather than the point estimates themselves), we believe the problem is mitigated to a large extent by the large sample sizes used in the estimation. Table 11 shows the mean values of the variables included in our statistical model for the whole period from 1999 through 2004. The results of our regressions are reported in table 12. Unless reported separately in parentheses, all P-values are less than 0.0001. In general, the results are consistent with our expectations. For example, HAPs increase with market rent levels and decrease with adjusted incomes. Households in less desirable neighborhoods, as measured by the QCT variable, receive HAPs that are about $20 to $30 per month lower ($240 to $360 annually), depending on the group. Smaller households receive smaller HAPs, and those in the program longer receive smaller HAPs. Those that ultimately leave the program receive smaller HAPs, in some cases because incomes may have increased to the point that the households are no longer eligible. Households that move to a new unit tend to receive higher HAPs. HAPs increase as the payment standard increases relative to the FMR. The time period dummy variables used in our model suggest that, at least for households that are neither elderly nor disabled, HAPs were approximately $40 to $50 per month ($480 to $600 annually) higher in 2004 than in 1999, even after controlling for changes in market rent levels and payment standards. To present the results in terms of trends, we focused on those variables for which the average values changed significantly over the time period.
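A minimal sketch of how such a specification might be estimated follows (Python, using pandas and statsmodels on synthetic data). It simplifies the approach described above (one pooled regression rather than four group-specific ones, no bedroom-size dimension in the ratio, and clipping rather than excluding out-of-range ratios), and all variable names and data values are illustrative, not HUD's actual fields:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "pha": rng.choice(["A", "B", "C"], n),
        "year": rng.choice(np.arange(1999, 2005), n),
        "fmr": rng.uniform(500, 1200, n),         # fair market rent, monthly dollars
        "adj_income": rng.uniform(400, 1200, n),  # adjusted monthly income
        "qct": rng.integers(0, 2, n),             # 1 = qualified census tract
    })
    df["payment_standard"] = df["fmr"] * rng.uniform(0.9, 1.1, n)

    # Year- and PHA-specific ratio of the average payment standard to the
    # average FMR, scaled so 100 means parity, then truncated at 75 and 120.
    grp = df.groupby(["pha", "year"])
    ratio = 100 * (grp["payment_standard"].transform("mean")
                   / grp["fmr"].transform("mean"))
    df["ps_ratio"] = ratio.clip(75, 120)

    # Synthetic HAP so the regression has a signal to recover: subsidies
    # rise with rents and the ratio, fall with income and QCT status.
    df["hap"] = (0.9 * df["fmr"] - 0.3 * df["adj_income"] - 25 * df["qct"]
                 + 3 * (df["ps_ratio"] - 100) + rng.normal(0, 50, n))

    # Pooled cross-section model with year dummies.
    model = smf.ols("hap ~ fmr + adj_income + qct + ps_ratio + C(year)",
                    data=df).fit()
    print(model.params)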
Table 13 presents averages of selected variables—HAP, market rents, adjusted income, and payment standard ratio—for the largest group (single female-headed households with children). The rental subsidy per household of both Section 8 programs is the difference between the lesser of the payment standard or the unit's gross rent and the household's payment. Trends in rents and household payments, therefore, drive changes in the rental subsidy per household. For the voucher program, average rents grew by 35 percent from 1998 through 2004 (fig. 12). The average annual increase in voucher rents was 5 percent during this period, ranging from a low of 3 percent in 2004 to a high of 8 percent in 2002. Average project-based rents grew by 12 percent over this period, an average annual rate of 2 percent. Rents in the voucher program grew almost three times faster than those in the project-based program (35 percent versus 12 percent) over this period. A major reason for this difference is that voucher rents are determined by the private market, while project-based rents are adjusted annually using a HUD-determined operating cost factor. Annual increases in household payments did not keep pace with the increases in voucher rents. Specifically, the average payment by voucher households rose by 24 percent over this period and grew at an average annual rate of 4 percent (fig. 13). The disparity in the rates of increase between rents and household payments accelerated the growth in the average per household subsidy for vouchers. In contrast, the annual rate of increase in the average project-based rent was similar to that of household payments. As a result, growth in the average per household subsidy kept pace with rents and household payments in the project-based program. Although the average voucher rent grew dramatically from 1998 through 2004, our analysis found that this increase was consistent with the growth in the average fair market rent. Fair market rents, which HUD sets for each locality, reflect the cost of modest, standard-quality housing. We created a fair market rent index, weighted by the proportion of voucher households in each locality, and compared it with the similarly weighted average voucher rent to assess the change in the average voucher rent over time. From 1999 through 2004 (the only years for which complete data on fair market rents and voucher holders were available), the average rent in the voucher program grew by 27 percent, while the average fair market rent grew by 23 percent (fig. 14). Starting in 2003, the average voucher rent increased at a faster rate than the average fair market rent—5 percent versus 4 percent, respectively, in 2003, and 3 percent versus 1 percent, respectively, in 2004—thus narrowing the gap between them. A major reason for the trend in the growth in the average voucher rent was PHAs' authority to set their payment standard above the applicable fair market rent. As previously noted, each PHA sets a local payment standard up to 110 percent of the fair market rent for its area. The average payment standard as a percentage of the fair market rent has steadily increased, from about 96 percent in 1999 to 103 percent in 2004. Accordingly, the average voucher rent as a percentage of the fair market rent also increased, from about 94 percent in 1999 to 97 percent in 2004. In addition to the contact named above, Steve Westley, Assistant Director; Stephen Brown; Emily Chalmers; Mark Egger; Daniel Garcia-Diaz; John T.
McGrail; Marc W. Molino; Rose Schuville; and William Sparling made key contributions to this report.
Annual appropriations for the Department of Housing and Urban Development's (HUD) Section 8 programs--a key federal tool for subsidizing rents of low-income households--have increased sharply in recent years, raising concerns about their cost. Section 8 pays the difference between a unit's rent and the household's payment (generally 30 percent of adjusted income). Section 8 includes a voucher program administered by public housing agencies (PHA) that allows eligible households to use vouchers to rent units in the private market and a project-based program administered by property owners who receive subsidies to rent specific units to eligible households. In both programs, contracts between HUD and the administrators specify the duration and amount of the subsidy. GAO assessed Section 8 trends from fiscal years 1998 through 2004 and examined (1) annual budget authority and outlays for each program; (2) factors that have affected outlays; and (3) the estimated impact of factors, such as market rents, on the average rental subsidy per voucher household. From 1998 through 2004, annual budget authority for Section 8 grew from $9.4 billion to $19.3 billion (105 percent, or 82 percent after adjusting for inflation), while outlays grew from $14.8 billion to $22.2 billion (50 percent, or 33 percent after inflation adjustment). The steep rise in budget authority was partly due to the additional funding needed to cover the cost of renewing long-term contracts. GAO estimates that voucher outlays grew by 93 percent from 1998 through 2004 (71 percent after inflation adjustment), accounting for almost all of the growth in total Section 8 outlays. Estimated project-based outlays grew by 6 percent (and actually declined after inflation adjustment) over this period. GAO estimates that about 43 percent of the growth in voucher outlays from 1998 through 2004 stemmed from policy decisions that increased the number (from 1.6 million to 2.1 million) and use of vouchers, while over half of this growth was due to an increase in the average rental subsidy per household. For the project-based program, a modest increase in the average rental subsidy per household drove the growth in outlays but was partly offset by a reduction of 62,000 in the number of units. On the basis of statistical analysis of cost data, GAO estimates that growth in the average annual rental subsidy per voucher household from 1999 through 2004 is primarily explained by changes in market rents (about one-half of the growth), PHAs' decisions to increase the maximum subsidized rents (about one-quarter), and lagging growth in assisted household incomes (about 16 percent). Household and neighborhood characteristics, while important cost determinants, did not vary enough to cause a substantial change in the average rental subsidy per household.
We identified three key factors that affect delivery of humanitarian assistance to people inside Syria. First, the increasingly violent and widespread Syrian conflict has hindered effective delivery of humanitarian assistance. Based on our analysis of monthly UNSG reports on the situation inside Syria, as well as interviews with officials providing assistance to Syria based both inside and outside of the country, humanitarian assistance is routinely prevented or delayed from reaching its intended target due to shifting conflict lines, attacks on aid facilities and workers, an inability to access besieged areas, and other factors related to active conflict (see fig. 1). Second, administrative procedures put in place by the Syrian government have delayed or limited the delivery of humanitarian assistance, according to UNSG reports. These reports detail multiple instances of unanswered requests for approvals of convoys, denial or removal of medical supplies from convoys, difficulty obtaining visas for humanitarian staff, and restrictions on international and national NGOs’ ability to operate. As of May 2016, the UNSG reported that 4.6 million people inside Syria are located in hard-to-reach areas and more than 500,000 of those remain besieged by Islamic State of Iraq and Syria, the government of Syria, or non-State armed opposition groups. The UN further reported that in 2015, only 10 percent of all requests for UN interagency convoys to hard-to-reach and besieged areas were approved and assistance delivered. In addition, according to implementing partner officials based in Damascus, Syria, even when these convoys were approved, the officials participating in delivering the assistance were subjected to hours-long delays. Third, due to restrictions, USAID and State staff manage the delivery of humanitarian assistance in Syria remotely from neighboring countries. The U.S. government closed its embassy in Damascus, Syria, in 2012 due to security conditions and the safety of personnel, among other factors. In the absence of direct program monitoring, USAID and State officials noted that they utilize information provided by implementing partners to help ensure effective delivery of assistance and to help their financial oversight, including mitigating risks such as fraud, theft, diversion, and loss. However, USAID officials in the region explained to us that while partners provide data and information, their inability to consistently access project sites—due to factors such as ongoing fighting, bombing raids, and border closures—limited the extent to which partners could obtain and verify progress. Past audit work has shown challenges to such an approach, including cases of partners not fully implementing monitoring practices, resulting in limited project accountability. Further, USAID Office of Inspector General (OIG) has reported that aid organizations providing life-saving assistance in Syria and the surrounding region face an extremely high-risk environment, and that the absence of adequate internal controls, among other challenges, can jeopardize the integrity of these relief efforts and deny critical aid to those in need. State, USAID, and their implementing partners have assessed some types of risk to their programs inside Syria, but most partners have not assessed the risk of fraud. 
Risk assessment involves comprehensively identifying risks associated with achieving program objectives; analyzing those risks to determine their significance, likelihood of occurrence, and impact; and determining actions or controls to mitigate the risk. In the context of Syria, such risks could include theft and diversion; fraud; safety; security; program governance; and implementing partner capacity risks. Most of the implementing partners in our sample have conducted formal risk assessments for at least one type of risk, especially security risk, and several maintain risk registers that assess a wide variety of risks (see table 1). However, few implementing partners have conducted risk assessments for the risk of fraud (four of nine), or for the risk of loss due to theft or diversion (four of nine). According to GAO's A Framework for Managing Fraud Risks in Federal Programs, effective fraud risk management involves fully considering the specific fraud risks the agency or program faces, analyzing the potential likelihood and impact of fraud schemes, and prioritizing fraud risks. In addition, risk assessment is essential for ensuring that partners design appropriate and effective control activities. Control activities to mitigate the risk of fraud should be directly connected to the fraud risk assessments and, over time, managers may adjust the control activities if they determine that controls are not effectively designed or implemented to reduce the likelihood or impact of an inherent fraud risk to a tolerable risk level. Although most of the implementing partners in our sample did not conduct assessments of the risk of fraud, there are elevated risks for fraud in U.S.-funded humanitarian assistance projects for people inside Syria. According to officials at USAID OIG, they have four ongoing investigations of allegations of fraud and mismanagement related to programs for delivering humanitarian assistance to people inside Syria. Two of the investigations involve allegations of procurement fraud, bribery, and product substitution in USAID-funded humanitarian cross-border programs related to procurements of nonfood items. One of these investigations found that the subawardee of the implementing partner failed to distribute nonfood items in southern Syria, instead subcontracting the distribution to another organization, but nevertheless billed USAID for the full cost of the project. Additionally, the subawardee was reliant on one individual to facilitate the transfer of materials and salaries, and this individual was involved in the alteration and falsification of records related to the distribution of the nonfood items. According to the USAID OIG, senior leadership at the subawardee was aware of these facts. Further, in May 2016, USAID OIG reported the identification of bid-rigging and multiple bribery and kickback schemes related to contracts to deliver humanitarian aid in Syria, investigations of which resulted in the suspension of 14 entities and individuals involved with aid programs from Turkey. Without documented risk assessments, implementing partners may not have all of the information needed to design appropriate controls to mitigate fraud risks, and State and USAID may not have visibility into areas of risk, such as fraud and loss due to theft and diversion. We found that partners in our sample had implemented controls to mitigate certain risks of delivering humanitarian assistance inside Syria.
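The identify-analyze-mitigate sequence described above can be illustrated with a notional risk register. The sketch below (Python) is purely illustrative of the structure (likelihood-times-impact scoring with controls mapped to each risk) and does not represent any partner's or agency's actual register or scoring scheme:

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        name: str
        likelihood: int  # 1 (rare) through 5 (almost certain)
        impact: int      # 1 (minor) through 5 (severe)
        controls: list = field(default_factory=list)

        @property
        def score(self):
            return self.likelihood * self.impact

    register = [
        Risk("procurement fraud / kickbacks", 4, 4,
             ["vendor vetting", "warehouse spot checks"]),
        Risk("theft or diversion in transit", 3, 4, ["waybill reconciliation"]),
        Risk("attacks at distribution points", 4, 5,
             ["overcast-day distributions", "door-to-door delivery"]),
    ]

    # Prioritize by score so controls are designed for the largest risks first.
    for r in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{r.score:>2} {r.name}: {', '.join(r.controls)}")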
For instance, many partners in our sample implemented controls to account for safety and security risks to their personnel and beneficiaries receiving assistance. Some partners identified aerial targeting of humanitarian aid workers and beneficiaries at distribution points as a major vulnerability and implemented controls to mitigate this risk, such as distributing goods to beneficiaries on overcast days and making door-to-door deliveries of aid packages. In addition, partners in our sample implemented controls to mitigate risks of fraud and loss within their operations. For example, officials from two implementing partners we interviewed in Amman, Jordan, stated that they conducted spot checks of assistance packages in warehouses to confirm the quantity of the contents and ensure that the quality of the items complied with the terms of the contract. According to another implementing partner, officials from its organization visit the vendor warehouses before signing contracts to verify that U.S. government commodity safety and quality assurance guidelines are met. However, the majority of controls to mitigate risks of fraud and loss were not informed by a risk assessment (see table 2). State and USAID have taken steps to oversee partner programs delivering humanitarian assistance inside Syria; nevertheless, opportunities to assess and mitigate the potential impact of fraud risks remain. U.S. officials cited a variety of oversight activities. For instance, State officials in the region conduct quarterly meetings with partners and collect information on programmatic objectives and on partner programs. State also has enhanced monitoring plans in place with its implementing partners to augment quarterly reporting with information on risks of diversion of assistance. Similarly, USAID officials in Washington, D.C., told us they screen proposals from partners to identify risk mitigation activities and USAID officials in the region noted they maintain regular contact with partners, attend monthly meetings with them, conduct random spot checks of aid packages at warehouse facilities, and coordinate activities among partners to reduce or eliminate duplication or overlap of assistance. Moreover, according to USAID officials, the USAID OIG has conducted fraud awareness training for officials in the region to improve their ability to detect fraud, such as product substitution, when they conduct spot checks of aid packages at warehouse facilities. Further, in October 2015, USAID's Office of U.S. Foreign Disaster Assistance hired a third party monitoring organization to review its projects in Syria. By February 2016, field monitors had conducted site visits and submitted monitoring reports to USAID, providing information on the status of projects and including major concerns that field monitors identified. We found that fraud oversight could be strengthened. Based on our analysis, USAID's third party monitoring contract and supporting documentation contain guidelines for verifying the progress of activities in Syria; however, they do not clearly instruct field monitors to identify potential fraud risks as they conduct site assessments of projects in Syria. Furthermore, the monitoring plan and site visit templates do not contain specific guidance on how to recognize fraud, and field monitors have not received the USAID OIG fraud awareness training, according to USAID officials.
Leading practices in fraud risk management suggest evaluating outcomes using a risk-based approach and adapting activities to improve fraud risk management. This includes conducting risk-based monitoring and evaluation of fraud risk management activities with a focus on outcome measurement and using the results to improve prevention, detection, and response. The monitoring plan associated with the contract contains guidelines for field monitors to document their assessment of the project at the completion of a site visit. However, it lacks specific guidelines to identify potential fraud risks during site visits. Additionally, the templates created by the third party monitoring organization to document site visits instruct monitors to verify the presence or absence of supplies and their quality, among other instructions, but lack specific fraud indicators to alert field monitors to collect information on and identify potential fraud. Furthermore, the monitoring plan contains a training curriculum for field monitors, which has several objectives designed to familiarize them with the protocols, procedures, and instruments used for data collection and reporting. However, the curriculum does not have specific courses for recognizing potential or actual instances of fraud that may occur on site. Given the opportunity for fraud that exists in humanitarian assistance programs, as well as the ongoing USAID OIG investigations, without instructions to specifically collect data on fraud and training to identify it, USAID may be missing an opportunity to strengthen its activities to mitigate fraud risks and design appropriate controls. We made several recommendations in our report. To provide more complete information to assist the agencies in conducting oversight activities, State and USAID should require their implementing partners to conduct fraud risk assessments. In addition, USAID should ensure its field monitors (1) are trained to identify potential fraud risks and (2) collect information on them. State and USAID concurred with our recommendations. Chairman Ros-Lehtinen, Ranking Member Deutch, Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this testimony, please contact Thomas Melito, Director, International Affairs and Trade at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Elizabeth Repko (Assistant Director), Jennifer Young, Kyerion Printup, Justine Lazaro, Cristina Norland, Karen Deans, Kimberly McGatlin, Diane Morris, Justin Fisher, and Alex Welsh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's July 2016 report, entitled Syria Humanitarian Assistance: Some Risks of Providing Aid inside Syria Assessed, but U.S. Agencies Could Improve Fraud Oversight ( GAO-16-629 ). Delivery of U.S. humanitarian assistance to people inside Syria is complicated by three factors including a dangerous operating environment, access constraints, and remote management of programs. Active conflict creates a dangerous environment characterized by attacks on aid facilities and workers, and humanitarian organizations face difficulties accessing those in need. Additionally, U.S. agency officials must manage programs in Syria remotely, increasing risks to the program, including opportunities for fraud. Despite these challenges, according to the U.S. Agency for International Development (USAID), U.S. humanitarian assistance has reached 4 million people inside Syria per month. The Department of State (State), USAID, and their implementing partners have assessed some types of risk to their programs inside Syria, but most partners have not assessed the risk of fraud. Of the 9 implementing partners in GAO's sample of funding instruments, most assessed risks related to safety and security, but only 4 of 9 assessed fraud risks. Such an assessment is important as USAID's Office of Inspector General (OIG) has uncovered multiple instances of fraud affecting U.S. programs delivering humanitarian assistance to Syria. In May 2016, USAID OIG reported that 1 of its active fraud investigations resulted in the suspension of 14 entities and individuals. Given the challenging environment in Syria, fraud risk assessments could help U.S. agencies better identify and address risks to help ensure aid reaches those in need. Partners have implemented controls to mitigate certain risks, but U.S. agencies could improve financial oversight. For example, almost all partners in our sample have controls to mitigate safety risks and some use technology to monitor the transport of goods. Additionally, U.S. agencies have taken steps to oversee activities in Syria, such as quarterly meetings with partners and spot checks of partner warehouses. Further, in October 2015, USAID hired a third party monitor to improve oversight of its activities and help verify progress of its programs. However, the monitors' training curriculum lacks modules on identifying fraud risks. Without such training, monitors may overlook potential fraud risks and miss opportunities to collect data that could help USAID improve its financial oversight.
In 1988, Congress enacted the Military Whistleblower Protection Act to provide protection to servicemembers who report wrongdoing within DOD. According to DOD policy, a military whistleblower is a servicemember who makes, prepares to make, or is perceived as making or preparing to make a protected communication—that is, a report of a violation of law or regulation, gross waste of funds, or abuse of authority, among others, to an authorized individual or organization. An authorized individual includes, among others, a Member of Congress, an IG, and any person or organization in the servicemember's chain of command. Further, any lawful communication to a Member of Congress or IG is protected. Reprisal occurs when a responsible management official takes, threatens to take, withholds, or threatens to withhold a personnel action because a servicemember made or was preparing to make a protected communication. A personnel action is any action taken on a servicemember that affects or has the potential to affect a servicemember's current position or career, such as an adverse performance evaluation, letter of reprimand, or separation from service, among others. Servicemembers and former servicemembers may submit reprisal complaints to DODIG or an IG within DOD. In 2013, Congress expanded the time for servicemembers to file a reprisal complaint from 60 days to 1 year following the date on which the servicemember becomes aware of the personnel action. While the law affords military whistleblowers certain protections, those who allege they have suffered reprisal generally do not receive relief from the alleged reprisal until DOD has completed an investigation and substantiated the claims of reprisal. DODIG can conduct an investigation into a military reprisal complaint or refer the investigation to the appropriate service IG; however, according to DOD policy, no determination is complete without final approval from DODIG. Whistleblower Reprisal Investigations is the directorate within DODIG's Administrative Investigations component that is responsible for conducting and overseeing investigations of reprisal and restriction complaints filed by servicemembers. According to DOD policy, the Whistleblower Reprisal Investigations directorate is to approve service IG recommendations to dismiss cases, review and approve the results of investigations conducted by the service IGs, and initiate follow-up investigations to correct any inadequacies in service IG investigations. The majority of DODIG's investigation workload for military reprisal cases is related to oversight reviews of investigations conducted by the service IGs. The directorate is also responsible for investigating reprisal complaints filed by DOD civilian employees, and employees of DOD contractors and subcontractors, among others. According to DODIG's semiannual reports to Congress, military whistleblower reprisal complaints account for approximately 60 percent of the reprisal complaints it receives. Figure 1 provides a summary of the investigation process, as described in DODIG guidance. According to DODIG's Guide to Investigating Military Whistleblower Reprisal and Restriction Complaints, DODIG and service IG investigators are to assess reprisal complaints by answering four questions to determine whether the elements of reprisal are present. Specifically:
1. Did the servicemember make or prepare to make a protected communication, or was the servicemember perceived as having made or prepared to make a protected communication?
2. Was an unfavorable personnel action taken or threatened against the servicemember, or was a favorable personnel action withheld or threatened to be withheld, following the protected communication?
3. Did the responsible management official have knowledge of the servicemember's protected communication or perceive the servicemember as making or preparing to make a protected communication?
4. Would the same personnel action have been taken, withheld, or threatened absent the protected communication?
During the complaint intake process, the investigator is to review the complaint and timeline and conduct an interview with the servicemember to determine whether (1) the servicemember made or prepared to make a protected communication and (2) a responsible management official took a personnel action against the servicemember. The investigator is to also assess whether the allegation supports an inference that the responsible management official had knowledge of the protected communication and suggests a causal connection between the protected communication and the personnel action, such as whether the personnel action closely followed the protected communication. If the investigating officer determines there was no protected communication, no personnel action, or no inference of responsible management official knowledge or causation, the investigating officer can recommend that DODIG dismiss the case. If a servicemember's complaint contains a personnel action and a protected communication, and an inference of knowledge and causation, the case is to proceed to a full investigation, according to DODIG guidance. When determining whether the responsible management official would have taken the personnel action if the servicemember had not made a protected communication, the investigating officer is to determine the official's reasons for taking the action, the timing between the protected communication and the personnel action, the official's motive, and whether the servicemember was treated differently than other servicemembers who did not make protected communications. The investigating officer is to determine the case outcome based on a "preponderance of the evidence," defined by DODIG as the degree of relevant evidence that a reasonable person, considering the record as a whole, would accept as sufficient to find that a contested fact is more likely to be true than untrue. According to DODIG guidance, if the investigating officer finds: (1) that the servicemember made a protected communication, and that the responsible management official (2) took a personnel action against the servicemember following the protected communication, (3) had knowledge of the protected communication, and (4) would not have taken the personnel action without the protected communication, the investigator writes a report that substantiates the reprisal complaint. After the investigator completes the report, it is subject to DODIG supervisory and managerial review and approval, as well as a legal sufficiency review. If the investigation is conducted by a service IG investigator, the service IG headquarters reviews and forwards the report to DODIG for oversight and final approval. In cases where DODIG substantiates a reprisal complaint, the servicemember may take an additional step to petition the appropriate Board for the Correction of Military Records for relief from the personnel action.
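Taken together, the intake screen and the four-element test amount to a simple decision procedure. The sketch below (Python) is a notional rendering of that logic only; DODIG's actual process involves evidence-gathering, causal inference, and layered review that a boolean function cannot capture:

    def assess_complaint(protected_communication, personnel_action,
                         knowledge_or_perception, action_justified_anyway):
        """Apply the four elements of reprisal from DODIG guidance.

        The first three elements (plus an inference of causation, omitted
        here for simplicity) are screened at intake; the fourth is resolved
        in a full investigation by a preponderance of the evidence.
        """
        if not (protected_communication and personnel_action
                and knowledge_or_perception):
            return "recommend dismissal at intake"
        if action_justified_anyway:
            return "reprisal not substantiated"
        return "reprisal substantiated; report goes to review and legal sufficiency"

    print(assess_complaint(True, True, True, False))  # -> substantiated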
DOD did not meet statutory notification requirements to inform servicemembers about delays in investigations for about half of military whistleblower reprisal investigations in fiscal year 2013. Further, in the notifications that DOD sent, the reasons given for the delays were general in nature and projected report completion dates were, on average, significantly underestimated. In addition, DOD rarely met internal timeliness requirements for completing military whistleblower reprisal investigations within 180 days for cases that it did not dismiss at intake. The average length of an investigation during fiscal years 2013 and 2014 was almost three times the DOD requirement. According to 10 U.S.C. § 1034, if, during the course of the investigation, the IG determines that it is not possible to submit the report of investigation to the Secretary of Defense and the service Secretary within 180 days after the receipt of the allegation, the IG shall provide to the Secretary of Defense, the service Secretary concerned, and the servicemember making the allegation a notice of that determination, including the reasons why the report may not be submitted within that time and an estimate of the time when the report will be submitted. DODIG considers its office to be in accordance with the statute as long as it either completes the investigation within 180 days or submits a letter to the servicemember within 180 days, according to a senior DODIG official. In February 2012, we found that DODIG officials acknowledged that they and the service IGs had not been making the required notifications, but that they were taking steps to ensure that they met statutory notification requirements. For example, in February 2012, DODIG issued policy guidance to the service IGs reemphasizing the statutory requirement to notify servicemembers if investigations are not completed within 180 days. Further, according to oversight investigators we spoke with, they are to determine whether the service IG sent the 180-day notification letter as part of DODIG's oversight review of service IG-investigated cases, and in fiscal year 2013, this check was included as an item on DODIG's oversight worksheet for oversight investigators to look for during their oversight review. DODIG officials stated that they have taken additional action to ensure they meet statutory notification requirements since fiscal year 2013, which was the time frame covered by our case-file review. Specifically, in fall 2013, DODIG assigned an oversight investigator to periodically reconcile 180-day notification letters with the service IGs to ensure that the service IGs have sent the required letters and that DODIG has received a copy, according to DODIG officials. In addition, DODIG developed a mechanism in its case management system to indicate which cases are older than 180 days. However, DOD officials told us they have not developed a tool, such as an automated alert, to proactively ensure that they are in compliance with the statutory 180-day notification requirement. On the basis of our file review of a stratified random sample of 124 cases closed by DODIG in fiscal year 2013, we found that DOD has made improvements related to these reporting requirements and that some case files that required letters contained evidence that DOD had sent the letters. However, we estimate that about 47 percent of the files for cases that DOD took longer than 180 days to close in fiscal year 2013 did not contain evidence that the investigating IG sent the required letters to servicemembers.
In addition, we found that in cases in which DODIG or the service IG sent the required letter, it typically did so after the case had reached the 180-day mark. Based on our file review, we estimate that for cases in which DODIG or the service IG sent a 180-day notification letter to the servicemember to explain the delays in the investigation, the median notification time was about 353 days after the servicemember filed the complaint. In some service investigations, the investigating IG did not send the required letter to the servicemember until it forwarded the report of investigation to DODIG for review, more than 1 year after the servicemember filed the complaint. Further, the letters that DOD sent provided general reasons for the delay but, on average, significantly underestimated the date by which it would complete the investigation. For example, reasons for the investigation delay included case complexity, case volume, and delays that the service IG experienced in coordinating information, witnesses, and testimony. Based on the results of our file review, we estimate that the median time for case completion stated by DODIG and the service IGs in the letters, which they sent, on average, around 353 days into the investigation, was about an additional 78 days. However, we estimate that for cases in which the investigating IG sent the required letter, the median time for case closure was actually 488 days, 57 days past the stated estimate for case completion. Service IG officials stated that, for most cases over 180 days, they provide a standard estimate for case completion because it is difficult to estimate the amount of time required for case completion due to the unique characteristics of each case and the number of layers of review prior to case closure. According to GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, Washington, D.C.: November 1999), an agency must have relevant, reliable, and timely communications relating to internal and external events in order to determine whether it is achieving compliance with applicable laws and regulations. On the basis of our file review, we estimate that the notifications present in the 53 percent of investigations closed in fiscal year 2013 in which they were required were, on average, untimely and contained unreliable estimates. Figure 2 shows the median notification time frames and estimates for case completion for fiscal year 2013 cases over 180 days. DOD rarely met internal timeliness requirements for completing military whistleblower reprisal investigations in fiscal years 2013 and 2014. According to DOD Directive 7050.06, which implements 10 U.S.C. § 1034 and establishes DOD policy, DODIG shall issue a whistleblower reprisal investigation report—containing a thorough review of the facts and circumstances, relevant documents acquired, and summaries or transcripts of interviews conducted—within 180 days of the receipt of the allegation of reprisal. We found that the average investigation time for all cases that DOD (that is, both DODIG and the service IGs) investigated and closed in fiscal years 2013 and 2014 was 526 days. The average length of DODIG-investigated cases closed in fiscal years 2013 and 2014 was 443 days. The average length of service IG-investigated cases during this time was 530 days, which is almost three times DOD's internal timeliness requirement.
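The timeliness figures above can be checked with simple arithmetic. The snippet below is a back-of-the-envelope recomputation of the reported medians and averages, not a reproduction of our estimation methodology (which used a stratified random sample):

    # Notification letters: median figures from our file review (days after filing).
    notification_day = 353      # median day the 180-day letter was sent
    estimated_more_days = 78    # median additional time to completion stated in the letters
    actual_closure_day = 488    # median closure day for cases with a letter

    estimated_closure_day = notification_day + estimated_more_days  # 431
    print(actual_closure_day - estimated_closure_day)               # 57 days past the estimate

    # Investigation length versus DOD's internal 180-day requirement.
    service_ig_average_days = 530
    print(round(service_ig_average_days / 180, 1))                  # 2.9, almost three times the requirement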
For cases DODIG dismissed after completing the complaint intake process, the average processing time was 48 days. See table 1 for details regarding case-processing times for cases closed by DODIG and the service IGs in fiscal years 2013 and 2014. In our total timeliness calculations for all DOD investigations, we did not include complaints that DODIG or the service IGs dismissed at intake because the IG determined that the complaint did not have sufficient evidence to warrant an investigation. While the statute requires the service IG receiving the reprisal allegation to promptly notify DODIG of the allegation, the services do not consistently notify DODIG when they receive complaints that do not contain a protected communication or personnel action, according to service IG officials and guidance. Specifically, the Air Force IG's guidance states that DODIG must be notified when a complaint contains an allegation of reprisal. However, the guidance states that a complaint does not contain a reprisal allegation unless the first two elements of reprisal—a protected communication and a personnel action—are present. DODIG officials stated that any service determination that a complaint does not meet the first two elements of reprisal must be submitted to DODIG for oversight. However, officials from the two service IGs that accounted for approximately 80 percent of the service IG reprisal investigative workload in fiscal years 2013 and 2014 told us that they do not track or report to DODIG complaints that they dismiss at intake because the complaints lacked a protected communication or personnel action. Because the services do not notify DODIG of the complaints they dismiss at intake, we did not have data on all complaints dismissed at intake. We therefore reported the timeliness of cases that DODIG dismissed at intake separately and did not include them in our overall timeliness calculations. In fiscal years 2013 and 2014, DODIG investigated and closed a total of 39 cases and dismissed another 375 complaints after completing the intake process. The service IGs closed a total of 674 cases during this period. See table 2 for the number of cases closed by each investigating organization in fiscal years 2013 and 2014. DOD received a total of 640 reprisal complaints in fiscal year 2013 and 584 reprisal complaints in fiscal year 2014. As of September 30, 2014, DODIG and the service IGs had a total of 822 open military whistleblower reprisal cases. While the majority of these open cases were filed from fiscal years 2012 through 2014, some of these cases had been open since fiscal year 2008. We found that almost 20 percent of DOD's open military reprisal cases were filed in fiscal year 2012 and had been open for at least 2 years. Further, approximately 33 percent of the open military reprisal cases were filed in fiscal year 2013 and had been open for at least 1 year. Table 3 provides additional information on DOD's open military reprisal cases and when the servicemembers filed their reprisal complaints. Appendix II provides information about substantiation rates and the general characteristics of military whistleblower reprisal cases. DOD officials described several factors affecting the timeliness of military reprisal investigations and stated that they are taking steps to improve investigation timeliness. For example, in addition to investigations, DODIG's workload includes completing the intake process for complaints filed with DODIG.
Intake requires staff to review complaints and determine whether there is sufficient evidence for those complaints to warrant an investigation. As we stated previously, DODIG dismissed 375 complaints after completing the intake process in fiscal years 2013 and 2014. Further, service IG officials indicated that the decentralized investigation structure is a factor that affects the timeliness of their investigations. For example, service IGs assign investigations to field-level investigators, which, according to officials, results in a multilayer review process as the investigation is reviewed by each organizational level of the service, with each layer of review adding to case-processing times. Additionally, all six field-level service investigators we interviewed stated that, in their opinion, 180 days was not a reasonable amount of time to complete all investigations unless an investigator has no competing responsibilities and is able to focus solely on one reprisal investigation at a time. Service IG investigators further stated that in addition to competing responsibilities, the complexity of cases, the volume of cases, and low staffing numbers all affect the timeliness of investigations. We found in February 2012 that DODIG also identified staffing shortages as a factor affecting the timely processing of cases and that staffing levels had not kept up with an increased reprisal caseload (GAO-12-362). DODIG officials stated that they have increased their personnel levels to accommodate the increased caseload. Specifically, DODIG's Whistleblower Reprisal Investigations directorate increased from 30 staff in January 2012 to 53 staff in March 2015. Further, DODIG officials stated that DOD leadership has made improving the timeliness of administrative investigations—which include both investigations of whistleblower reprisal and of other allegations made against senior officials—a priority. Specifically, in an effort to improve the timeliness of senior official investigations, including senior official whistleblower reprisal cases, DODIG convened a timeliness task force in coordination with the service IGs, which issued a report with recommendations in November 2014. In particular, the task force recommended that DODIG expand its case management system to track and manage the timeliness of senior official investigations. DODIG officials stated that they believe the expansion of the case management system will improve timeliness for all reprisal investigations. Following the issuance of the task force's report, in January 2015, the Deputy Secretary of Defense issued a memorandum endorsing the findings of the report, specifically stating that the service IGs should not impose any staffing reductions on the investigation offices because they must be adequately resourced when faced with multiple high-priority investigations. DODIG's case management system contains a metric that calculates the length of a case from the opening of the case record to the final outcome. DODIG officials stated that they use this metric to track timeliness for service IG reprisal investigations. However, according to officials, this calculation is inaccurate for cases opened prior to the case management system being implemented in December 2012, which account for approximately 24 percent of open investigations. Specifically, based on the results of our file review, we estimate that the timeliness metric in DODIG's case management system underestimates total case time for each case closed in fiscal year 2013 by at least 26 days on average, which limits DODIG's ability to monitor the timeliness of all service IG investigations.
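A plausible mechanism for the underestimate, consistent with the pattern described above, is that for cases migrated into the new system the metric's clock starts at the system record's creation rather than at the complaint's filing. The sketch below illustrates the pitfall with hypothetical dates; the field names and the migration behavior are our assumptions, not a description of DODIG's system internals:

    from datetime import date

    SYSTEM_GO_LIVE = date(2012, 12, 1)  # case management system implemented December 2012

    def system_case_age(record_opened: date, closed: date) -> int:
        """Age as a record-based metric computes it: from the system record's open date."""
        return (closed - record_opened).days

    def true_case_age(complaint_filed: date, closed: date) -> int:
        """Age measured from the date the servicemember filed the complaint."""
        return (closed - complaint_filed).days

    # Hypothetical case filed before the system existed and migrated at go-live.
    filed, closed = date(2012, 9, 1), date(2013, 6, 30)
    record_opened = max(filed, SYSTEM_GO_LIVE)  # migrated cases start the clock late

    print(system_case_age(record_opened, closed))  # 211 days (understated)
    print(true_case_age(filed, closed))            # 302 days (actual)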
DODIG officials stated that they are able to identify the cases that are affected by the inaccurate timeliness metric and that they have implemented processes to manually calculate the case age for these cases. Further, as we discuss later in this report, DOD has not implemented procedures to ensure accurate and complete recording of total case-processing time. DODIG collects timeliness information but cannot analyze the data to identify potential reforms because the case management system is under development and has limited reporting capabilities. In addition, the service IGs have separate case management systems; therefore the timeliness of all service investigative phases is not maintained in DODIG's case management system, which does not allow DODIG to consistently track all case-processing times. Finally, DODIG responds to ad hoc congressional requests related to investigation timeliness, but does not include overall timeliness information in its semiannual reports to Congress, as we recommended in February 2012. We continue to believe these recommendations are valid and should be implemented. DODIG implemented a new whistleblower reprisal investigation case management system to improve its monitoring of investigations; however, as of March 2015, the system is under development and has limited reporting capabilities. In addition, DODIG has provided its staff with limited user guidance on how to use and record information in the case management system. Further, DOD's use of multiple case management systems hinders its visibility over total workload and investigative activity at the service IG level, such as the number and status of military whistleblower reprisal investigations in process at the service IGs. DOD's planned expansion of its reprisal case management system to the service IGs may not result in improved visibility over its workload without further planning and guidance. In February 2012, we found that DOD's efforts to improve case-processing times had been hindered by unreliable and incomplete data, and, as previously discussed, we recommended that DOD implement policies and procedures to ensure accurate and complete recording of case-processing time. DOD concurred with this recommendation. In December 2012, DODIG took steps to improve its military whistleblower reprisal investigation data by implementing a new case management system to monitor its administrative investigations, including senior official and whistleblower reprisal investigations. We found the data from this case management system reliable for our purposes of reporting the average lengths of investigations for this report—an improvement since February 2012, when we reported that similar data from DODIG's previous system were not reliable for our reporting purposes. According to DODIG, the case management system is intended to streamline processing, investigations, and service IG oversight reviews by serving as an automated, real-time complaint tracking and investigative management tool that electronically stores all case-file documentation. However, as of March 2015, the case management system was under development and, according to officials, had limited reporting capabilities. According to a DODIG official, DOD selected an incremental process to develop the case management system in order to incorporate user feedback into each phase of development, and, in accordance with this approach, DODIG staff began using the case management system in December 2012, prior to the completion of the system.
DODIG officials stated that they had planned to finish the development of the case management system by February 2014; however, according to these officials, DODIG deferred funding for the final development phase until fiscal year 2015, which delayed the completion of the case management system. As a result of the delayed funding, DODIG has not been able to incorporate all user feedback to ensure that the case management system is fully functioning at the desired level, according to DODIG officials. For example, according to DODIG officials, the case management system's reporting capabilities are limited. The case management system contains the fields necessary to track the length of various investigative phases for DODIG investigations, as we recommended in February 2012 (such as the dates for the legal and internal review processes), but, according to a DODIG official, it cannot track this information for service IG investigations. In addition, the case management system contains dashboards for users to manage cases by the whistleblower statutes for which DODIG is responsible. For example, users can view the dashboards to determine the number of investigations or oversight cases assigned to a particular investigator and the number of DODIG investigations over 180 days, among other things. DODIG can also determine the length of time it took to complete these phases when users drill down to individual cases and review key dates for these phases in the investigation and oversight events tabs. However, according to DODIG officials, DODIG is not able to extract and aggregate these data from its case management system for analysis and reporting purposes, which would allow it to identify possible areas for implementing case-processing reforms, as we recommended in February 2012. DODIG officials stated that even though they have not completed the final development phase of the case management system, using the system has improved their ability to provide oversight of the service IG investigations, allowing them to track the corrective actions that services have taken in substantiated reprisal cases. Officials stated they can also calculate overall case age, the number of days to complete the intake phase, the number of days to complete the investigation phase, and the number of days in oversight, in response to findings in our previous report. However, DODIG can calculate these milestones only for cases that it investigates, which is a small portion of the military reprisal investigations on which this report focuses. In addition, as we previously stated, we found the case management system's field to calculate case age was inaccurate because it underestimates total case time for cases opened in the prior system and closed in fiscal year 2013 by at least 26 days on average. Further, according to DODIG officials, DODIG had spent approximately $2.22 million on the development of the case management system as of February 2015, and plans to spend approximately $1.4 million to further develop the case management system prior to the end of fiscal year 2015. DODIG officials stated that they plan to complete the final phase of case management system development, which includes improvements to reporting capabilities, by the end of fiscal year 2015. Other needed improvements include restrictions on which cases users can access and edit, as well as additional fields to better track specific types of case outcomes, such as cases withdrawn by servicemembers, according to DODIG officials.
However, DODIG officials stated that they are unsure of the extent to which they will be able to make improvements to the case management system during the next phase of development given their current funding levels. As a result, DODIG officials stated that they have initiated a process to prioritize the improvements based on necessary and desired changes. DODIG has provided users with limited guidance on how to populate case information in the new whistleblower reprisal case management system. DODIG investigators have been using the case management system to manage reprisal investigations since December 2012. As previously discussed, according to officials, DODIG planned to finish the final development phase for its case management system in February 2014, but changed that benchmark to September 2015. According to an official, when the case management system was implemented, DODIG internally developed and provided its staff with a user manual. According to oversight investigators, guidance on the case management system is limited and does not include detailed operating instructions, such as the type of information to enter into the case notes fields. Further, one oversight investigator stated that the guidance DODIG provided before the system was implemented was minimal and included features of the system that were not yet operable. DODIG officials provided documentation of two types of guidance: a draft user manual created by Whistleblower Reprisal directorate staff with screen captures of the system, and desk aids for various staff positions that describe the data fields the investigators are to complete. DODIG officials noted that they have issued several versions of the desk aids since they implemented the case management system. During our case file review, we found that DODIG investigators had incorrectly coded some cases in the case management system as fully investigated when the service IG had dismissed the case prior to a full investigation. Based on the results of our file review, we estimate that, in fiscal year 2013, about 43 percent of cases that DODIG investigators coded as fully investigated were incorrectly coded in this way. Due to these miscoded cases, we are unable to report on the number of military whistleblower reprisal complaints that DOD fully investigated in fiscal years 2013 and 2014. In its semiannual reports to Congress, DODIG reports on the number of military whistleblower reprisal investigations fully investigated by DODIG and the service IGs. DODIG officials stated that they use their case management system to compile information for these semiannual reports. Based on our estimate of the number of cases affected by the miscoding in fiscal year 2013, DODIG may have mischaracterized its investigative work in its fiscal year 2013 semiannual reports to Congress. DODIG officials stated that they were aware that DODIG staff had improperly coded some reprisal cases as fully investigated when they were dismissed prior to a full investigation, but that they were not aware of the extent of the miscoding. Further, DODIG officials stated that they are taking steps to ensure that future cases are coded properly. For example, DODIG officials said that once they realized that DODIG staff were coding cases incorrectly, they provided desk aids to users in March 2014 that describe how to code cases that were fully investigated and those that were dismissed prior to a full investigation.
However, during our case-file review we found that DODIG staff were still coding cases incorrectly as of April 2014. Further, in September 2013, DODIG assigned an Investigations Analyst to monitor its whistleblower reprisal investigations data. According to DODIG officials, the Investigations Analyst uses a dashboard in the case management system that helps identify missing data or entry errors, and then manually corrects them. As previously discussed, DODIG's case management system is to serve as a real-time complaint tracking and investigative management tool for investigators within its Administrative Investigations component. Further, DODIG's fiscal year 2014 performance plan for oversight investigators notes that investigators should ensure the case management system reflects current, real-time information on case activity. However, based on our file review of a sample of 124 cases closed in fiscal year 2013, we found that DODIG investigators were not using the case management system for real-time case management as intended by DODIG officials. Specifically, we estimate that in 77 percent of cases closed in fiscal year 2013, DODIG personnel uploaded key case documents to the case management system after DODIG had closed the case. For example, DODIG staff uploaded, among other things, reports of investigation, oversight worksheets, 180-day letters, and copies of the servicemembers' complaints after the case had already closed, indicating that the case management system was not being used for real-time case management at that time. Further, we estimate that, for 83 percent of cases closed in fiscal year 2013, DODIG staff made changes to the case variables in the case management system in 2014, at least 3 months after case closure. For cases where DODIG made changes to the data, we estimate that about 68 percent had significant changes, such as changes to the date the servicemember filed the complaint and the organization that conducted the investigation, as well as the result code, which indicates whether the case was fully investigated. In explaining why the changes were made, DODIG officials stated that leadership from DODIG's Whistleblower Reprisal Investigations directorate instructed oversight investigators and other DODIG staff to verify and correct the data as necessary for all cases closed in fiscal years 2013 and 2014 by comparing case management system data to case file documentation. DODIG officials stated that this was necessary to ensure the reliability of DODIG's investigative data because the case management system was new to investigators and they had not been consistently recording information. Further, officials stated that prior to the implementation of the case management system, investigators reviewed hard-copy case files of service IG investigations, and they did not immediately transition to reviewing case files electronically when the case management system was implemented in December 2012. The guidance DODIG has issued for the new case management system does not include instructions that staff are to use the system for real-time case management and investigation review, or which types of events to record, both of which could have helped guide the transition from hard-copy to electronic case file review. DODIG officials stated that they plan to further develop their draft manual for the case management system expansion to the service IGs, which they anticipate will be complete by the end of fiscal year 2016, as discussed later in the report.
Officials further stated they will continue to update internal desk aids, which contain only descriptions of the case management system's fields, as needed, but do not plan to issue additional internal guidance for DODIG staff on the case management system because they believe that the current guidance is sufficient. However, DODIG's draft user manual does not instruct users on how to access the system, troubleshoot errors they may encounter, or monitor their caseloads using the case management system's dashboards. Further, DODIG's Administrative Investigations manual, which provides guidance to the Whistleblower Reprisal Investigations directorate staff, is outdated because it refers only to DODIG's prior case management system, which was replaced in December 2012. According to CIGIE quality standards for investigations, accurate processing of information is essential to the mission of an investigative organization. It should begin with the orderly, systematic, accurate, and secure maintenance of a management information system. Written guidance should define the data elements to be recorded in the system. Further, management should have certain information available to perform its responsibilities, measure its accomplishments, and respond to requests by appropriate external customers. DODIG officials stated that they plan to develop a user manual when they expand the case management system to the service IGs, as discussed later in the report. Without updating and finalizing the internal user guidance from 2012 as necessary until the case management system is complete, including providing instructions on how to use the system as a real-time tracking tool in the meantime, DODIG will continue to face challenges in its ability to report on the military whistleblower reprisal program. For example, unless investigators update and upload case information during the course of an investigation, DODIG will be unable to report on the real-time status of investigations and therefore may not be able to respond to congressional requests for case information without significant effort. Further, DOD uses the case management system to compile information for reporting to Congress on its military reprisal investigation workload and thus may have inaccurately represented its workload—the number of cases fully investigated—to Congress in its semiannual reports. Without updating and finalizing internal guidance on how to correctly enter case information into the case management system, DODIG cannot ensure the reliability of its data without manually reviewing and correcting each case. Each service IG conducts and monitors the status of military whistleblower reprisal investigations in a different case management system. Although DODIG has access to one of the services' case management systems, according to officials DODIG does not have complete visibility over service investigations from complaint receipt to investigation determination. As a result, DODIG may not know that some servicemembers have filed reprisal complaints until the service IGs forward the completed reports of investigation to DODIG for review. Further, DODIG does not have knowledge of the real-time status of service-conducted investigations and is unable to anticipate when service IGs will send completed reports of investigation for review, according to officials.
DODIG is required to review all service IG determinations in military reprisal investigations, in addition to its responsibility for conducting investigations of some military reprisal complaints. Without a common system to share data, DODIG's oversight of the timeliness of service investigations and visibility over its own future workload are limited. Our analysis indicates that DODIG's case management system did not have a record of at least 22 percent of service investigations, counting both those open as of September 30, 2014, and those closed in fiscal years 2013 and 2014. According to DOD officials, DOD's decentralized structure for military reprisal investigations, paired with the fact that servicemembers can submit complaints to DOD, their respective service IG, or their chain of command, contributes to the possibility of duplicate complaints or of one IG failing to notify another of an ongoing reprisal investigation. According to DOD Directive 7050.06, when the service IGs receive reprisal complaints from servicemembers, those offices are required to notify DODIG within 10 days; however, based on our file review, we estimate that there was no evidence of this required notification in 30 percent of cases closed in fiscal year 2013 where the servicemember filed the complaint with the service IG. In response, DODIG officials noted that one of their oversight investigators was assigned in fall 2013 to reconcile DODIG's open military reprisal investigations with the services' open reprisal investigations. According to service IG officials, this reconciliation is conducted at various points throughout the year by manually comparing lists of investigations from each IG's respective case management system. Through our analysis we identified challenges reconciling DODIG and service IG cases because the investigating organizations do not share a common case identifier. In addition, in fiscal years 2013 and 2014, the investigating organizations did not consistently track each other's unique case identifiers. DODIG officials stated that they have since taken steps to ensure that DODIG tracks the service IGs' case identifiers in its case management system. For example, the oversight investigator that DODIG assigned to reconcile cases updates service case identifiers in DODIG's case management system as part of the reconciliation process. Further, service IG officials stated that there have been instances where DODIG did not notify them that it was investigating a reprisal complaint from one of their servicemembers, and they did not find out about the investigation until after DODIG had conducted it. Standards for internal control in the federal government state that, for an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal and external events. DOD is taking steps to improve its visibility over service investigations. In November 2014, a DODIG task force that focused on improving the timeliness of DOD's senior official investigations recommended that DOD expand the case management system to the service IGs as a way to improve investigation timeliness. According to DODIG officials, expanding the case management system is also an effort to improve DODIG's visibility of administrative investigations conducted by the service IGs.
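The reconciliation difficulty described above, in which the organizations lack a common case identifier, can be illustrated with a toy record match. The field names and values below are invented for illustration; the point is that a join on an inconsistently populated cross-reference field silently drops cases:

    # Each office tracks its own case number; the cross-reference field is
    # only sometimes populated, so a simple join misses cases.
    dodig_cases = [
        {"dodig_id": "D-101", "service_id": "X-9"},
        {"dodig_id": "D-102", "service_id": None},  # cross-reference never recorded
    ]
    service_cases = [
        {"service_id": "X-9"},
        {"service_id": "X-10"},  # complaint never reported to DODIG
    ]

    known_to_dodig = {c["service_id"] for c in dodig_cases if c["service_id"]}
    unmatched = [c for c in service_cases if c["service_id"] not in known_to_dodig]
    print(unmatched)  # [{'service_id': 'X-10'}] -- invisible to DODIG's system

A manual reconciliation must then fall back on other attributes, such as the servicemember's name and filing date, which is the labor-intensive comparison the assigned oversight investigator performs.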
In January 2015, the Deputy Secretary of Defense endorsed the recommendation to expand the case management system to the service IGs, stating that an enterprise data system is essential to achieving more seamless and efficient processing of complaints and investigations across the department (Deputy Secretary of Defense Memorandum, Report on Task Force to Improve Timeliness of Senior Official Investigations). DODIG officials stated that they plan to expand the case management system to the service IGs by the end of fiscal year 2016. However, DODIG does not have an implementation plan for the expansion and has not yet taken steps to develop one. According to DODIG officials, they are in the process of developing a strategy to expand the case management system and are in the early stages of the planning process. DODIG officials stated they have set an aggressive time frame for the expansion because leadership has made investigation timeliness a priority and they believe a common case management system is part of the solution. Officials stated that they completed the process of classifying the case management system as a defense business system in April 2014 and that DODIG has been using the system to process all whistleblower reprisal investigations since December 2012. Further, officials stated that they formed a working group comprising representatives of each of the service IGs to facilitate planning for the expansion. The working group held its first meeting in February 2015, and plans to meet bimonthly until the expansion is complete. A DODIG official tasked with leading the expansion of the case management system stated that he intends to refer to best practices for project management to help facilitate the planning process for this expansion project. The Project Management Institute's Guide to the Project Management Body of Knowledge (PMBOK® Guide) provides guidelines for managing individual projects, including developing a project management plan. A project management plan defines the basis of all project work, including how the project is executed, monitored and controlled, and closed. According to the PMBOK® Guide, project management plans should include a scope—to describe major deliverables, assumptions, and project constraints—as well as project requirements, schedules, costs, stakeholder roles and responsibilities, and stakeholder communication techniques, among other things. Further, project management plans are to be updated when issues are found during the course of the project, which may modify project policies or procedures, and when actions are needed to forestall negative effects on the project. Project management plans also include methods to define and document stakeholder needs. According to the Project Management Institute, detailed requirements documentation is essential for stakeholders to understand what needs to be (1) done to deliver the project and (2) produced as the result of the project. DODIG officials stated that, in coordination with the service IGs, they will review and incorporate some needs of each service IG prior to expanding the case management system, but they do not plan to fully customize the case management system for each service IG, such as by developing a different interface for each service. Service IG officials expressed concern that they have requirements, such as specific data fields and reporting capabilities needed to meet leadership needs, that must be incorporated into the case management system prior to expansion.
For example, service IG officials stated that it is important that case management system user roles are defined in a way that reflects how their organizations operate and that case access is restricted according to the organizational level of the user. Some service IG officials stated that they are concerned that DODIG will expand the case management system without incorporating all of their needs and that they will not be able to meet their respective service leaderships’ reporting requirements as a result. These officials stated that if DODIG’s case management system does not meet their needs they may need to continue to use their current case management systems, which would be duplicative. Given DOD’s stated plans to expand the case management system to the service IGs by the end of fiscal year 2016, doing so without developing an implementation plan that addresses the needs of DODIG and the service IGs, and defines project goals, schedules, costs, stakeholder roles and responsibilities, and stakeholder communication techniques, puts DODIG at risk of creating a system that will not improve its visibility over total workload or investigation timeliness. Further, without such a plan, DODIG may not be well-positioned to monitor the expansion and measure project success. In addition, without developing a plan in coordination with the service IGs that defines the roles and responsibilities of all stakeholders, and sets expectations for communication, DODIG may not be able to balance all stakeholder needs and interests. Further, as previously discussed, DODIG has not completed the development of the case management system and it does not meet DODIG user needs. Finally, in the absence of an implementation plan that adequately addresses the requirements of the service IGs, the service IGs may not know whether or when their needs will be met and as a result they may unnecessarily continue to use their own systems, which could be duplicative. In 2011, DOD designated a team in DODIG’s Directorate for Whistleblower Reprisal Investigations to review service-conducted investigations on a full-time basis; however, DODIG has not formalized the process for the review of military whistleblower reprisal investigations. For example, it is unclear to what extent DODIG has incorporated the relevant investigative standards into its process. Several factors affect the quality of DOD’s oversight of service-conducted military whistleblower reprisal investigations, including the absence of standardized guidance and DODIG feedback to the service investigators. Finally, DOD does not have a tool for investigators to certify their independence to ensure its military whistleblower reprisal investigations are objective in fact and appearance. In September 2011, DODIG took steps to improve its oversight of service IG investigations by establishing an investigator team that is solely dedicated to the oversight review of service IG-conducted military reprisal investigations, according to officials, but it has not formalized its process by providing detailed guidance to its oversight team. DODIG is responsible for reviewing and approving service determinations regarding whistleblower reprisal complaints, including both (1) service determinations that an investigation into a reprisal complaint is not warranted, and (2) the results of completed service reprisal investigations. To improve oversight, DODIG officials said that they staffed the team with investigators who had experience at either DOD or service IGs. 
The oversight investigators are to document their review using an oversight worksheet, which captures information about how the service investigation was conducted as well as the investigation's findings and conclusions. DODIG has used various versions of this oversight worksheet since it established the oversight team. Our case-file review included case files closed in fiscal year 2013, and during this period DODIG's oversight worksheet was designed to capture information about (1) the servicemember's allegations of reprisal, (2) the personnel action or actions taken against the servicemember, (3) service investigation thoroughness, (4) documentation, (5) timeliness, (6) objectivity, and (7) whether there were any deficiencies or inconsistencies in the service investigation report, among other things. DODIG adheres to CIGIE standards, but the extent to which it incorporates these standards into its oversight process is unclear, and the service IGs are not members of CIGIE. CIGIE's Quality Standards for Investigations provide a framework to help ensure that member IG offices conduct high-quality investigations. CIGIE's general standards apply to investigative organizations and include investigator qualifications, independence, and due professional care. CIGIE's qualitative standards relate to how the investigation is planned, executed, and reported, as well as how the investigative information is managed. As a CIGIE member, DODIG is expected to incorporate CIGIE's quality standards into its operations manuals or handbooks. Table 4 highlights some of the CIGIE standards that DODIG has incorporated into the oversight worksheet that investigators use to review service IG investigations. We found that DODIG's attestation to CIGIE standards, which is part of its oversight review, was inconsistent. For example, DODIG changed the language on versions of its oversight worksheet between 2012 and 2014, and DODIG oversight investigators did not always attest to whether the investigations in our fiscal year 2013 sample were conducted in accordance with CIGIE standards. As a member of CIGIE, DODIG must develop and document its quality-control policies and procedures in accordance with its agency requirements, then communicate those policies and procedures to its personnel, according to CIGIE standards. The oversight worksheet that DODIG was using as of March 2015 did not contain a block for attesting whether the investigation was conducted in accordance with CIGIE standards, but the worksheet asks whether the investigator gathered all relevant evidence and whether the investigator demonstrated IG impartiality during interviews. In contrast, the oversight worksheet that DODIG oversight investigators used during the fiscal year 2013 time frame of our sample contained template language for the oversight investigator to indicate whether the service conducted the investigation in accordance with CIGIE standards, but the worksheets in our sample did not consistently indicate whether the approved investigation adhered to CIGIE standards, and the basis for the determination was unclear. Specifically, of the 89 service IG investigations in our sample, DODIG oversight investigators attested on the oversight worksheet that 55 percent were conducted in accordance with CIGIE standards. Further, the service IGs are not members of CIGIE, and the service IG investigators are not subject to CIGIE standards and are not consistently trained on them.
In 2012 DODIG hired a training officer, and in 2013 it developed a basic whistleblower reprisal investigations course for DODIG and service IG investigators. DODIG officials stated that they incorporated some CIGIE standards into this and other training courses, as well as into their semiannual symposiums, but service IG officials stated that these DODIG-offered courses do not reach all field-level investigators. A senior DODIG official stated that even though the service IGs are not subject to CIGIE standards, DODIG would not approve a service IG investigation that did not appear to adhere to CIGIE standards. Also, while DODIG's Administrative Investigations manual directs DODIG investigators to follow CIGIE standards, none of the DODIG-conducted military reprisal investigations in our sample included an attestation, similar to the statement on the oversight worksheet for service IG cases, stating that they adhered to CIGIE standards. DODIG officials stated that the attestation is not necessary for its own reprisal investigations because, as a CIGIE member, all of its investigations adhere to CIGIE standards. DODIG provided the oversight team with limited instructions on how to review service IG cases. We interviewed each member of DODIG's oversight team to discuss their procedures for investigation review and found that they have different approaches for how they review investigations prior to completing the oversight worksheet. For example, some read the allegation of reprisal first, while others begin their oversight review by reading the service investigator's report of investigation. According to the oversight investigators we spoke to, once they review the investigation documentation and complete the oversight worksheet, they are to forward the package to their supervisors for discussion and review. For some cases, before final approval, oversight investigators discuss the oversight review during regular meetings with other oversight investigators, and with Whistleblower Reprisal Investigations management, according to officials. Finally, officials stated that management reviews some case files before DODIG issues the approval memo back to the service IG. DODIG officials stated that they have informal weekly meetings with the oversight team to discuss cases and oversight processes; however, some of the oversight investigators we spoke with noted that they had not received any detailed guidance that was specifically focused on how to conduct oversight of service IG military reprisal cases. For the 89 oversight files in our sample, DODIG rarely disagreed with the service IG's final determination of whether to substantiate the reprisal allegation(s), even if the oversight investigator noted deficiencies in the investigation documentation. We estimate that DODIG sent the case back to the service IG for additional work in about 8 percent of service cases closed in fiscal year 2013. DODIG disagreed with the service determination of whether to substantiate the complaint, and took over the investigation, in 2 of the cases in our sample. DODIG officials stated that oversight investigators are in regular contact with the service IG headquarters to correct inadequacies in service investigations, but that these communications may not be documented in the case files. During our case file review, we identified examples of DODIG oversight investigators not consistently completing the oversight worksheet.
Specifically, from the results of our case file review, we estimate that for about 45 percent of service investigations closed in fiscal year 2013, DODIG oversight worksheets were missing narrative indicating that the investigator had thoroughly documented all case deficiencies or inconsistencies, as required on the oversight worksheet. In those 45 percent of cases, we noted issues that include the following:

Case deficiencies were not consistently documented: Some service investigation case files did not contain all DODIG-required elements, such as required letters, interview transcripts or summaries, legal reviews, and other supporting documentation, but the oversight investigators did not note the missing documentation on the oversight worksheet. Specifically, we estimate that in 19 percent of service-investigated cases, the oversight investigator indicated that there were adequate transcripts or summaries of testimony; however, documentation of those interview transcripts was not included in the case file.

DODIG did not always note deficiencies that service IG headquarters identified: We found instances in which DODIG investigators did not document deficiencies that the service IGs had identified. For example, a service IG-completed oversight worksheet, included in the investigation case file the service IG forwarded to DODIG for review, noted that the investigators did not appear fair and impartial in the servicemember interview transcript. In this interview transcript the investigator stated that in the military nothing is unbiased because there is a chain of command; however, DODIG oversight investigators attested on the oversight worksheet that the investigative file did not contain evidence of bias. DODIG officials stated that there is no written requirement for oversight investigators to note deficiencies identified by the service IGs; however, on oversight worksheets for other cases, the oversight investigators did note service IG-identified deficiencies. Service IG officials also highlighted inconsistencies between the oversight investigators. For example, service IG officials stated that they prefer to work with certain DODIG oversight investigators because they know what to expect from those oversight investigators, and this speeds up the oversight review. In contrast, these officials stated that they receive more questions about cases from oversight investigators with whom they work less frequently.

DODIG did not always explain why deficiencies did not affect the outcome of the service investigation: In instances when the DODIG oversight investigator identified deficiencies in the service IG investigation, the oversight investigator typically included a statement indicating that the noted deficiencies did not have a material effect on the outcome of the investigation. However, the oversight investigators did not always explain why the deficiencies did not affect the outcome of the investigation. For example, on some oversight worksheets that we reviewed, the oversight investigators noted that the service IG investigator did not analyze a protected communication or a personnel action as part of the investigation, but that these items did not affect the outcome of the investigation. We also found that the files in these cases lacked documentation of the oversight investigators' analysis of the effect of noted deficiencies on the outcome of the investigation.
Oversight investigators stated that when they note any deficiencies in investigations, they typically discuss those deficiencies with their supervisors in order to determine whether to approve the case. DODIG officials stated that there are several gray areas in reprisal investigations and that these types of discussions are common practice when DODIG is deciding whether to approve a case; however, we found in our case-file review that the results of these conversations are not always documented on the oversight worksheet. CIGIE standards state that reasonable steps should be taken to ensure that pertinent issues are sufficiently resolved and that the results of investigative activities should be accurately and completely documented in the case file. Further, Standards for Internal Control in the Federal Government provide that internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. Ensuring that oversight investigators document the basis for their determinations enables reviewers to confirm that such determinations are appropriate. Moreover, DODIG does not have detailed guidance that specifies the steps and documentation requirements of the DODIG oversight investigators' review of service reprisal investigations, or that explains whether and how any noted investigation deficiencies would affect the outcome of the investigation. DODIG has focused on its October 2014 issuance of the updated Guide to Investigating Military Whistleblower Reprisal and Restriction Complaints, which details best practices for reprisal investigations but does not specify the steps DODIG investigators are to follow when conducting oversight of service IG investigations. In addition, DODIG's Administrative Investigations manual includes a 5-page overview of oversight reviews. However, the manual is not specific to oversight reviews of military whistleblower reprisal investigations (it also encompasses investigations of senior officials) and does not state which deficiencies are substantive and would affect the outcome of an investigation. Further, part one of DODIG's Administrative Investigations manual refers investigators to a forthcoming third portion of the manual, which has not been developed, for detailed guidance on conducting oversight of military whistleblower reprisal investigations. However, as of January 2015, DODIG officials stated that they were no longer planning to issue the third part of the manual and that they plan to incorporate some additional oversight procedures into the existing manual. Officials did not provide details on what procedures they plan to incorporate or when they plan to make the changes. Without additional guidance for its oversight investigator team, which would help formalize the oversight process, DODIG will continue to face inconsistency in both its oversight documentation and its review of service IG investigation outcomes. CIGIE standards state that to facilitate due professional care, organizations should establish written investigative policies and procedures. The complexity of reprisal investigations, paired with the decentralized service IG structure, underscores the importance of clear and consistent oversight review procedures and documentation requirements to ensure consistency across the department and that each reprisal complaint receives due professional care.
Senior DODIG officials stated that DODIG's Administrative Investigations component is taking steps to implement quality-assurance processes and that these processes will help prepare the component for an eventual peer review. For example, a senior DODIG official said that on a quarterly basis, DODIG completes an internal control checklist for 20 DODIG whistleblower reprisal investigations and 20 oversight reviews of service IG military whistleblower reprisal investigations to assess the thoroughness of the case files and the completeness of the information in the case management system, among other things. This official also stated that they brief DODIG leadership on the results of these quarterly quality-assurance checks. The Whistleblower Reprisal Investigations directorate has undergone external reviews, but CIGIE has not established peer-review criteria for administrative investigations, such as whistleblower reprisal investigations, according to DODIG officials. Senior DODIG officials stated that, if such criteria were established, they would like to participate in an eventual administrative peer review of their whistleblower reprisal investigations. However, without documentation of the steps it took to reach its case determinations and why any noted case deficiencies did not affect the outcome of the investigation, as well as consistent attestation of adherence to CIGIE standards, a third-party reviewer may find it difficult to assess the quality of DODIG's oversight process for military whistleblower reprisal investigations. DODIG and the service IGs use different terms in their guidance to refer to their investigation stages. DODIG took a step to improve guidance by issuing an updated guide for military reprisal investigations for both DODIG and service IG investigators in October 2014. The guide discusses DODIG's four questions that investigators use to determine whether the four elements of reprisal are present, describes various investigative steps, and provides sample interview questions, among other things. However, DODIG describes the guide as best practices for conducting military reprisal intakes and investigations, and, according to DODIG officials, it does not explicitly direct the services to follow DODIG's preferred investigation process and stages. DODIG officials stated that they have no role in the development of service IG regulations. DODIG guidance describes two investigation stages: (1) intake and (2) full investigation. During the intake process, the investigator is to determine whether the servicemember made a protected communication and a responsible management official took a personnel action against the servicemember. In addition, if the investigator determines that the allegation supports an inference that the responsible management official had knowledge of the protected communication as well as a causal connection between the protected communication and the personnel action, and the servicemember reported the alleged reprisal within 1 year, the case is to proceed to a full investigation. According to DODIG's investigation guide, during the intake process an investigator is to review the complaint, personnel action, and timeline, and interview the servicemember to clarify the allegation.
During a full investigation, investigators are to formally interview the servicemember (and provide a written record of the interview), obtain relevant documentation (of the protected communication and personnel action, among other things), interview knowledgeable witnesses, interview the responsible management official who took the personnel action, and obtain a legal review of the report of investigation. Each of the service IGs has a stage between intake and full investigation, commonly referred to as a preliminary inquiry or a reprisal complaint analysis. DODIG does not have a similar in-between investigation stage, and therefore DODIG officials stated that oversight investigators should classify preliminary inquiries conducted by the service IGs as intakes in the case management system, but there is no written guidance for reviewing preliminary inquiries. We found that the service investigators typically complete much more investigative work, such as interviewing witnesses, when conducting a preliminary inquiry than DODIG requires during the intake process. Based on our case file review, we found that DODIG oversight investigators were not consistently classifying the preliminary inquiries as intakes, and classified many preliminary inquiries as full investigations in the case management system. DODIG oversight investigators approved cases as full investigations when those cases did not contain all elements required for full investigations, and approved the dismissal of preliminary inquiries coded as full investigations on grounds that can be determined only by conducting a full investigation. Specifically, we estimate that in 38 percent of preliminary inquiries closed in fiscal year 2013, service IGs dismissed cases because they determined that the responsible management official would have taken the personnel action absent the protected communication. In contrast, DODIG guidance states that an investigator answers the question of whether the responsible management official would have taken the personnel action absent the protected communication during a full investigation, which requires an interview with the responsible official to determine his or her reasons and motive for taking the personnel action. In addition, a senior DODIG official stated that an investigator must interview the responsible management official to determine whether the personnel action would have occurred absent the protected communication. However, there was no evidence in these case files that the investigators interviewed the responsible management officials; instead, the investigators determined that the responsible management officials took the personnel actions as a result of the servicemembers' performance histories. Further, we found through our file review that the service IGs' preliminary inquiry case files were less complete than the service IGs' full investigation case files, although DODIG oversight investigators approved preliminary inquiries as full investigations. For example, based on our sample results, we estimate that at least 79 percent of service preliminary inquiries closed in fiscal year 2013 were missing at least one key element, such as an interview with the servicemember. We estimate that at least 23 percent of service full investigations closed in fiscal year 2013 were missing at least one element.
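One way to see why the stage coding matters: the elements that distinguish a full investigation from a preliminary inquiry can be checked mechanically against a case file. The checklist below paraphrases the elements of a full investigation as described above; the data structure and labels are our own construction, not DODIG's:

    # Elements DODIG guidance requires for a full investigation (simplified labels).
    FULL_INVESTIGATION_ELEMENTS = {
        "servicemember_interview",
        "relevant_documentation",
        "witness_interviews",
        "responsible_official_interview",
        "legal_review",
    }

    def missing_elements(case_file: set) -> set:
        """Return the full-investigation elements the case file lacks."""
        return FULL_INVESTIGATION_ELEMENTS - case_file

    # A typical preliminary inquiry: substantial work completed, but no interview
    # of the responsible management official and no legal review.
    preliminary_inquiry = {
        "servicemember_interview", "relevant_documentation", "witness_interviews",
    }
    print(sorted(missing_elements(preliminary_inquiry)))
    # ['legal_review', 'responsible_official_interview'] -> should not be
    # coded as fully investigated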
Further, as previously discussed, DODIG's guidance requires investigators to interview the servicemember for all complaints, both during the intake process and if the case proceeds to a full investigation; however, we estimate that 59 percent of service preliminary inquiry case files, compared with 10 percent of service full investigation case files, were missing evidence of a servicemember interview. CIGIE quality standards for investigations state that to facilitate due professional care, organizations should establish written investigative policies and procedures that are revised regularly according to evolving laws, regulations, and executive orders. DODIG's investigation guide does not discuss preliminary inquiries or define any requirements for this stage of investigation. DODIG officials have stated that they would like the service IGs to stop preparing preliminary inquiries and to use DODIG's preferred investigation stages—intake and full investigation; however, DODIG guidance does not explicitly direct the services to use its preferred terms and stages. Additionally, a DODIG oversight investigator stated that the service IGs' varying interpretations of DOD policy and inconsistent application of DODIG guidance make it difficult for oversight investigators to systematically review reprisal cases. The oversight investigator also stated that DODIG should explicitly direct the services to follow certain procedures currently included in DODIG guidance, but DODIG officials stated the office does not have a role in the development of service IG regulations. Further, in the absence of standardized investigation stages, DODIG investigators miscoded investigations in fiscal year 2013. We estimate that about 43 percent of the cases that DODIG closed in fiscal year 2013 and coded as full investigations were not fully investigated; they were instead preliminary inquiries, as indicated in the service reports of investigation. DODIG officials stated that this miscoding was likely the result of oversight investigators wanting to recognize the amount of work that service IG investigators completed, since those investigators typically complete the steps of a full investigation, except for an interview with the responsible management official and a legal review. Without directing the service IGs to follow standardized investigation stages and issuing guidance clarifying how the stages are defined, it will be difficult for DODIG to ensure consistent program implementation. For example, the service IGs may do more investigative work than DODIG requires by conducting a preliminary inquiry, when DODIG would dismiss the case at intake. On the other hand, the service IGs may dismiss cases after conducting a preliminary inquiry when a DODIG investigator would conduct a full investigation and collect additional testimonial evidence. The amount of investigative work is thus inconsistent across DOD and depends on which IG investigates the complaint, which could lead to the perception that not all servicemember complaints are treated equally. In addition, without standardized investigation stages and corresponding guidance, investigators may be unclear about what elements are required at each stage of investigation, resulting in incomplete reprisal case files. Finally, without standardized investigative stages and agreement among DODIG oversight investigators about how to classify preliminary inquiries, DODIG may continue to miscode service preliminary inquiries in its case management system.
Since this system is the basis for DODIG’s semiannual reports to Congress, DODIG may mischaracterize the number of fully investigated complaints in these reports. DODIG has developed tools to assess service IG investigation quality and to note any case deficiencies, but DODIG does not consistently provide the service IGs with this feedback. As previously discussed, DODIG oversight investigators are to document their reviews of service IG investigations by completing an oversight worksheet. The worksheet contains the criteria against which the reports of investigation are to be evaluated to ensure that the investigations adhered to CIGIE professional standards, such as independence and thoroughness. The worksheet also includes spaces where the oversight investigator can include comments regarding any criteria the investigation did or did not meet. According to DODIG’s Administrative Investigations manual, which guides how DODIG investigators conduct and perform oversight of reprisal investigations, upon completion of the oversight review process, investigators are to provide the service IGs with copies of the oversight worksheet. The manual further states that this affords a good mechanism for feedback to the services on the quality of individual cases, in addition to valuable information on trends in systemic deficiencies in investigations within their service. However, according to DODIG officials, in 2012 DODIG stopped providing the service IGs with completed oversight worksheets. Instead, these officials stated that they provide summarized feedback in the closure memorandums that they send to the service IGs once they approve a case. According to DODIG officials, the oversight investigators complete the oversight worksheet when reviewing service IG cases, but the worksheet is now used as an internal tool for review. Service IG officials stated that the primary feedback they receive is DODIG’s summarized case analysis on the closure memorandum, which discusses why it agreed with the service IG’s determination; however, the closure memorandum, unlike the worksheet, does not include the criteria against which the investigations are assessed. Further, service IG officials stated that they upload DODIG’s closure memorandums to their respective case management system, but they do not require the investigating officers to go into the case management system to review the closure memorandum. A DODIG oversight investigator noted that the feedback oversight investigators provide on the worksheet is more constructive than what they include on the closure memorandum, and a service IG official stated that what investigators need is constructive feedback, not just statements about what they did not do correctly. A senior service IG official stated that receiving copies of the oversight worksheets was beneficial to service investigators because the worksheets helped investigators understand what DODIG was looking for in its reviews of service investigations. Additionally, according to service IG officials, DODIG rarely sends cases back to them for additional work and rarely asks questions regarding cases they have sent to DODIG for review. Service IG officials indicated that this lack of case-specific feedback from DODIG is confirmation to them that they are meeting DODIG’s expectations for investigations; however, DODIG oversight investigators noted that the quality of service IG investigations could be improved. 
Further, through our review of cases closed in fiscal year 2013, after DODIG stopped providing copies of the oversight worksheets, we found examples where oversight investigators were providing case-specific feedback intended for the service IG investigators. For example, on some oversight worksheets the oversight investigator noted that the feedback provided on the worksheet was intended to be a teach-and-train vehicle to improve the quality and thoroughness of future reports; however, per DODIG's new practice, it is unclear whether DODIG provided these oversight worksheets to the service IG investigators. DOD officials have noted that feedback to service IG investigators is important for various reasons. First, the DOD investigative process is decentralized and lacks continuity. Many offices at various levels of the service IGs investigate reprisal complaints. Further, in the Army and Air Force—which accounted for approximately 80 percent of the service investigative workload in fiscal years 2013 and 2014—military investigators typically rotate every 3 years, according to service IG officials. As such, these service IG military investigators may conduct few reprisal investigations and may not have the opportunity to develop experience, which according to DOD officials is essential to conducting high-quality reprisal investigations. The service IGs have taken steps to provide feedback to field-level investigators. For example, one service IG holds quarterly video-teleconferences with field-level investigators to share updates to reprisal policies and address any investigation trends. Second, according to service IG investigators, they receive some required training that is specific to conducting reprisal investigations when they are assigned to the IG, but there is no additional mandatory reprisal-specific training that investigators complete during the course of their careers. Additionally, these investigators may not have opportunities to apply lessons learned from that training immediately, and according to DOD officials there is often a gap of over a year between training and reprisal investigation assignment. According to CIGIE quality standards for investigations, organizations should establish appropriate avenues for investigators to acquire and maintain the necessary knowledge, skills, and abilities. Service IG investigators noted that in addition to offered training, case-specific feedback is a good way to learn skills for conducting reprisal investigations; however, three of six field-level investigators we interviewed stated that they had never received feedback from DODIG on their reprisal investigations. If the service IG investigators do not receive copies of the oversight worksheet, they may not have knowledge of the criteria that DODIG uses to conduct its oversight reviews and whether their investigative reports are meeting the specific CIGIE standards that DODIG has incorporated into its oversight review. For example, three of six field-level investigators we interviewed had not seen a DODIG oversight worksheet, and two of those three investigators did not know that DODIG used a worksheet to conduct oversight. DODIG's October 2014 guide for investigating reprisal complaints includes a quality-assurance review checklist, modeled after the DODIG oversight review worksheet, that investigators can use to perform a quality-assurance review of their investigation.
However, as previously discussed, service IG investigators are not subject to, or consistently trained to, CIGIE standards and therefore may not know how to assess their investigations according to these standards. Without case-specific feedback that relates to the CIGIE standards against which DODIG assessed the investigation and notes any deficiencies, service investigators may not be able to assess their own subsequent investigations. Further, without coordination with the service IGs to ensure that service investigators are receiving case-specific feedback from DODIG, DODIG efforts to improve investigation quality may continue to face challenges. Finally, without case-specific feedback, service IGs may not be able to identify trends in systemic deficiencies or specific CIGIE standards not being met, which otherwise might be corrected in future investigations and incorporated into their feedback to field-level investigators. DODIG and the service IGs have processes for investigators to recuse themselves from investigations, but there is no process for investigators to document whether the investigation they conducted was independent and outside of the chain of command. CIGIE standards state that in all matters relating to investigative work, the investigative organization must be free, both in fact and appearance, from impairments to independence. Impairments to independence include professional or personal relationships that might weaken the investigative work in any way, and preconceived opinions of individuals or groups that could bias the investigation, among others. In the absence of a process for investigators to certify their independence, DODIG has incorporated various questions into its oversight review in order to document the independence of the investigator and to determine whether the investigation was conducted in accordance with CIGIE standards. For example, DODIG oversight investigators indicate whether the investigator was outside the chain of command of the servicemember and responsible management official, which is statutorily required. DODIG's oversight investigators—all but one of whom have prior military experience—stated that they use their experience and knowledge of the services' organizational structures to determine whether the investigator was outside the chain of command. Oversight investigators further determine, on the current version of the oversight worksheet, whether the investigator maintained professionalism and demonstrated IG impartiality during interviews. Oversight investigators stated that they can determine whether the investigator was impartial during interviews only if the case has interview transcripts, which the Administrative Investigations manual instructs them to read if necessary; however, DODIG will accept summarized interviews and does not require that the service IGs provide verbatim transcripts for all interviews. Based on our sample, we estimate that 43 percent of cases closed in fiscal year 2013 have transcripts of interviews with the servicemember alleging reprisal and 26 percent of cases have transcripts of responsible management official interviews. In the absence of interview transcripts, oversight investigators have limited tools to determine whether the investigator demonstrated IG impartiality during interviews.
DOD officials stated that their recusal policies and decentralized investigation structure, which removes the investigator from the chain of command, adequately address independence and that no further documentation of independence is needed. However, during our case-file review we identified oversight worksheets on which DODIG oversight investigators had noted potential impairments to investigator objectivity in the report of investigation. For example, on one oversight worksheet, the oversight investigator stated that the report gave the appearance of service investigator bias, and further clarified that the report should state whether the responsible management official's actions were reasonable and supported by facts, not whether the investigator would have taken the same actions. In addition, on another oversight worksheet the DODIG investigator stated that the investigator's narrative in the report of investigation contained comments that would bring into question whether the analysis was impartial and unbiased, further noting that there was evidence of bias. Further, one oversight worksheet stated that the investigator was not outside the chain of command, as statutorily required, but that this had no effect on the investigation. DODIG approved these cases without documenting how it reconciled these case deficiencies. We are not questioning DODIG's judgment in these cases. We noted that the files in these cases did not address the issues identified by the oversight investigator beyond the final approval of the case. However, Standards for Internal Control in the Federal Government provides that internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. Without documenting the basis for its determinations regarding independent decision making, DODIG cannot ensure that such determinations are appropriate. One oversight investigator we interviewed stated that DODIG has received investigations from the service IGs where the investigations show clear signs of bias, even though the investigator was outside the chain of command. According to this investigator, in these instances, DODIG's options include returning the case for additional investigation, appointing a new investigator, or preparing additional case analysis addressing the bias. Further, some service IG reviews of investigations also noted potential impairments to objectivity. For example, a service IG forwarded a completed investigation to DODIG for approval, noting that the investigators did not appear fair and impartial in a servicemember interview transcript; however, the oversight investigator stated that there was no evidence of bias by the investigating officer. Service IG officials stated that their review of field-level investigations is important because they have received investigations that contain personal opinion and statements that make it appear that the investigator was not impartial. These officials stated that, through their review, they attempt to identify and correct these statements, and that DODIG's subsequent review of the case should also catch any instances where an investigator did not appear impartial. Service IG officials noted that, because investigators are so close to the investigation, they can become invested in it, and that this investment is sometimes evident in reports of investigation.
Guidance for documenting independence is included in generally accepted government auditing standards (GAGAS). While these standards apply to audits, they can also serve the service IGs as a best practice for documenting decisions regarding independence when conducting reprisal investigations. Documentation of independence considerations provides evidence of the judgments made in forming conclusions regarding compliance with independence requirements. Further, GAGAS notes that an organization should establish policies and procedures in its system of quality control that address independence. While GAGAS states that insufficient documentation of compliance with the independence standard does not itself impair independence, documentation of independence in a reprisal investigation could improve the quality of DODIG's investigations. Without a process for investigators to document that the investigation was independent and outside the chain of command, DODIG and the service IGs will be hindered in their efforts to monitor the independence of investigations. DODIG oversight investigators are responsible for assessing the independence of the investigator and the investigation. Absent direction from DODIG to the service IGs to provide certifications that the investigator was independent and outside of the chain of command, DODIG oversight investigators have few mechanisms during the oversight process to determine whether the investigation was independent. With the pending expansion of DODIG's case management system to the service IGs, a certification process could, for example, be incorporated into that system. Further, such a certification process would serve as an accountability mechanism for service IG investigators, should an oversight investigator or service IG official note any potential impairments to objectivity during their reviews of investigations. Finally, certification of investigator independence could decrease the potential for bias in military reprisal investigations and better ensure that servicemembers receive the whistleblower protections provided by law. Whistleblowers play an important role in safeguarding the federal government against waste, fraud, and abuse, and their willingness to come forward can contribute to improvements in government operations. As a result, it is important that DOD have a process for investigating whistleblower reprisal complaints that affected parties are confident is timely, effective, and impartial. One way in which such confidence can be undermined is if investigations and related communications with servicemembers are not timely and accurate. Reducing delays in investigations, and notifying servicemembers when the process will take longer than 180 days, would provide servicemembers with information that may affect their immediate work environment or pending personnel actions, which are typically halted during an active investigation; servicemembers generally do not receive relief from reprisal until DODIG has approved a substantiated investigation. Ultimately, the absence of regular status updates, such as revised case-completion estimates when time frames shift, may discourage servicemembers from coming forward to report wrongdoing. Another area in which DODIG's processes are lacking is data collection and monitoring for oversight of investigations at the service IG level.
DODIG has made progress in this regard since our February 2012 report by implementing a new case management system, but the system remains under development and, as of March 2015, does not yet meet DODIG's full reporting needs. Without additional internal guidance to staff on how to use the case management system for real-time case processing, DODIG cannot ensure efficient reporting or that the data it collects are up to date and accurate. Absent these actions, along with developing an implementation plan for expansion of the case management system to the service IGs, DODIG will not have complete visibility of service IG workload and timeliness. DODIG also cannot ensure that all military whistleblower reprisal investigations adhere to quality standards. For instance, the complexity of reprisal investigations underscores the need for clear and consistent oversight review procedures and documentation requirements. DODIG took a positive step by establishing a team of investigators that is solely dedicated to the review of service IG investigations. However, without additional guidance regarding how to review service IG investigations, which would help to formalize the oversight process, DODIG cannot ensure that it treats reprisal complaints consistently and with due professional care. In addition, consistency across DODIG and service IG investigations, especially in regard to investigation stages, will be limited without guidance that clarifies the amount of investigative work an investigator is to conduct at each stage; this inconsistency could lead to the perception that not all servicemember complaints are treated equally. Additionally, without case-specific feedback that includes the criteria DODIG oversight investigators use to assess service investigations, service investigators may be limited in their ability to improve the quality of subsequent investigations. Finally, DOD may not be able to enhance the perception of fairness and increase accountability without taking steps to develop and implement a process for investigators to certify their independence when conducting investigations. Absent these actions, DODIG will be limited in its ability to enhance the effectiveness of its oversight, prepare for the eventual peer review in which senior leadership would like to participate, and ensure that servicemembers receive the whistleblower protections provided by law.
To improve the military whistleblower reprisal investigation process and oversight of such investigations, we recommend that the Secretary of Defense work in coordination with the Department of Defense Inspector General (DODIG) to take the following seven actions:
• develop an automated tool to help ensure compliance with the statutory 180-day notification requirement by providing servicemembers with accurate information regarding the status of their reprisal investigations within 180 days of receipt of an allegation of reprisal;
• issue additional guidance to investigators on how to use the case management system as a real-time management tool, and update and finalize the draft internal user guidance from 2012 as necessary until the case management system is complete;
• working in coordination with the service IGs, develop an implementation plan that addresses the needs of DODIG and the service IGs and defines project goals, schedules, costs, stakeholder roles and responsibilities, and stakeholder communication techniques for expansion of the case management system;
• issue additional guidance to formalize the DODIG oversight process;
• direct the services to follow standardized investigation stages and issue guidance clarifying how the stages are defined;
• ensure that the mechanism it uses for feedback to service investigators includes the criteria against which the investigation was assessed and any deficiencies, and work with the service IG headquarters to ensure that feedback is shared with the service investigators; and
• develop and implement a process for investigators to document whether the investigation was independent and outside of the chain of command, and direct the service IGs to provide such documentation for review during the oversight process.
In commenting on a draft of this report, DODIG concurred with each of our seven recommendations. However, DODIG did not agree with the manner in which we presented the findings in the report and raised concerns that we did not include information relating to significant progress made by DODIG since our February 2012 report. DODIG's comments are reprinted in appendix III. DODIG also provided technical comments, which we considered and incorporated where appropriate. We disagree with DODIG's characterization of our report's findings because we included discussion of the improvements cited by DODIG throughout our report. For example, we noted increases in staff levels, DODIG's development of a new case management system, DODIG's October 2014 issuance of a military whistleblower reprisal investigations guide, and policy guidance to the service IGs regarding 180-day notification requirements, among others. Further, in its comments, DODIG stated that it takes its role in leading DOD's whistleblower protection program seriously and has invested significant resources, more so than other federal agencies, to improve the timeliness and quality of its investigations. In addition, DODIG highlighted the volume of complaints that it processes. We agree that DOD's program is large, and we believe that our current recommendations are critical to aid DODIG in attaining its goal of being the model whistleblower protection program in the federal government. Our responses to additional comments made by DODIG on our report's findings are included at the end of appendix III.
In concurring with our first recommendation that DODIG develop an automated tool to help ensure DOD compliance with the statutory 180-day notification requirement, DODIG stated it had already implemented a dashboard in its case management system that identifies investigations pending for 180 days and that it would work toward an even more automated notification process in the future. We believe that an automated tool to help ensure DOD compliance with statutory requirements is needed and that the dashboard alone does not serve this intended purpose. Based on our case file review, we estimate that DOD sent the required 180-day notification letters in 53 percent of applicable cases closed in fiscal year 2013, and that the notifications DOD did provide were sent after the 180-day mark. Specifically, we estimated that DOD's median notification time was 353 days after the servicemember filed the complaint, almost twice as long as the 180-day requirement. The dashboard that DODIG uses to track cases does not proactively alert DOD to send the 180-day letter. Importantly, as we stated in our report, DODIG's case management system did not have record of at least 22 percent of service investigations both open as of September 30, 2014, and closed in fiscal years 2013 and 2014. Without knowledge of these cases, DODIG cannot ensure that the service IGs sent 180-day notification letters for cases taking over 180 days to complete. We believe that an automated tool that proactively alerts DOD to send the required 180-day notification letter for all cases taking longer than 180 days could help to ensure DOD's full compliance with statutory notification requirements. In concurring with our second recommendation that DODIG issue additional guidance to investigators on how to use the case management system as a real-time management tool, DODIG stated that we misrepresented DODIG's focused effort to migrate paper-based 2013 data into a new electronic system and correct data deficiencies in order to ensure data reliability. We disagree. In our report, we note that DODIG officials told us that the case management system is to serve as a real-time complaint tracking and investigative management tool for investigators. Further, in its comments, DODIG highlights the guidance and training it has implemented related to its case management system. During our case file review, we found that personnel uploaded key case documents to the case management system after DODIG had closed the case in 77 percent of cases closed in fiscal year 2013, and that personnel made changes to case variables in 83 percent of cases in 2014. DODIG staff made these changes at least 3 months after case closure and at least a year after DODIG implemented the database in December 2012, indicating that the system was not being used for real-time case tracking for cases closed in fiscal year 2013. Further, despite DODIG's stated efforts to train investigators and ensure data consistency, we found significant instances of coding errors where DODIG personnel were coding partially completed service investigations as full investigations. Specifically, we estimate that for cases closed in fiscal year 2013, 43 percent of cases that DODIG investigators coded as fully investigated were only partially investigated. As a result, we believe that additional guidance on how to use the case management system may help ensure that DODIG has awareness of the real-time status of cases and the reliability of DODIG's data.
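The proactive alert envisioned in the first recommendation above can be reduced to a simple date comparison. The sketch below is a minimal illustration, assuming hypothetical case fields; it does not represent DODIG's dashboard or case management system.

    # Minimal sketch of a proactive 180-day notification alert; the case fields
    # ("case_id", "received_date", "letter_sent") are illustrative assumptions.
    from datetime import date

    def cases_needing_notification(cases, today, threshold_days=180):
        """Yield open cases at or past the notification threshold that have no
        record of a 180-day letter having been sent."""
        for case in cases:
            days_open = (today - case["received_date"]).days
            if days_open >= threshold_days and not case["letter_sent"]:
                yield case["case_id"], days_open

    open_cases = [
        {"case_id": "FY15-001", "received_date": date(2014, 9, 2), "letter_sent": False},
        {"case_id": "FY15-002", "received_date": date(2015, 1, 15), "letter_sent": False},
    ]
    for case_id, days_open in cases_needing_notification(open_cases, date(2015, 3, 31)):
        print(f"{case_id}: open {days_open} days; 180-day notification letter due")

Run daily against all open cases, such a check would flag each case as it crosses the statutory deadline rather than relying on staff to consult a dashboard.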
In concurring with our third recommendation that DODIG work in coordination with the service IGs to develop an implementation plan for the expansion of the case management system, DODIG stated that we did not acknowledge the steps it has already taken to develop an implementation plan. We disagree. As we note in the report, DODIG officials stated during our review that they were developing an implementation strategy for the expansion of the case management system, but that they did not have an implementation plan. DODIG stated that it has taken additional actions since January 2015 to plan for the expansion of the case management system, such as developing a demonstration environment to define the requirement gaps. We believe that these actions are positive steps and that they will provide a strong foundation for the development of an implementation plan, which could help position DODIG to monitor the case management system expansion and measure project success. In concurring with our fourth recommendation that DODIG issue additional guidance to formalize the DODIG oversight process, DODIG stated that its investigations manual already provides formal guidance to DODIG investigators for conducting oversight reviews of service IG military reprisal investigations and that within the next 90 days it will develop additional guidance on conducting oversight reviews, such as how to evaluate and document deficiencies, including those that did not affect the overall outcome of the investigation. We disagree that DODIG’s investigations manual already provides formal oversight guidance. We reviewed the 5-page chapter in DODIG’s manual on oversight of service IG investigations, and we found that it does not detail the steps and documentation requirements of an oversight review, is not specific to military whistleblower reprisal investigations, and does not state what deficiencies are substantive and would affect the outcome of an investigation. We believe that DODIG’s stated plan to develop additional guidance, including how to evaluate and document deficiencies, could better ensure the consistency of DODIG’s oversight reviews and that all reprisal complaints receive due professional care. In concurring with our fifth recommendation that DODIG direct the services to follow standardized investigation stages and issue guidance clarifying how the stages are defined, DODIG stated that its October 2014 military whistleblower reprisal investigations guide describes DODIG’s intake process and that its Directive 7050.06, which was reissued in April 2015, establishes a timeline for completing the intake process in 30 days. We disagree that the guidance provides the needed instructions for investigators. We acknowledged in the report that DOD’s issuance of updated guidance is a positive step; however, DODIG describes its guide as a best practice for conducting military reprisal intakes and investigations and does not explicitly direct the services to follow DODIG’s preferred stages. In addition, it does not discuss the service IGs’ use of preliminary inquiries to dismiss cases after only a partial investigation, a practice DODIG stated it ended 3 years ago. We believe that standardized investigative stages may better ensure consistent program implementation and that all servicemember complaints are treated equally. 
In concurring with our sixth recommendation that DODIG ensure that feedback to service investigators includes the criteria against which the investigation was assessed and any deficiencies, and that feedback is shared with the service investigators, DODIG stated that within the next 60 days it will resume its prior practice of sending oversight worksheets to the service IGs. Those worksheets will include the criteria against which the service's intake or investigation was reviewed as well as clear explanations of deficiencies and whether they affected the outcome of the case. DODIG also stated that it will work with the services to develop a mechanism by which results will be shared with service investigators. We believe that the steps DODIG noted in its response could improve the quality of future service IG investigations and better ensure that investigative reports meet the CIGIE standards that DODIG has incorporated into its oversight review. In concurring with our seventh recommendation that DODIG develop and implement a process for investigators to document whether the investigation was independent and outside of the chain of command, DODIG stated that within the next 60 days it will develop and implement such a process. Specifically, it stated that the process will require service investigators to attest in writing that they are outside the immediate chain of command of both the servicemember alleging reprisal and the alleged responsible management officials. Although such an attestation is a positive step, we believe that the service investigators should also attest to whether the investigation was independent. DODIG oversight worksheets we reviewed noted impairments to investigator objectivity in reports of investigation even though the service investigator was outside of the chain of command. We believe that an attestation that the investigation is both independent and outside of the chain of command could help serve as an accountability mechanism for service IG investigators and decrease the potential for bias in military whistleblower reprisal investigations. We are sending copies of this report to the Secretary of Defense; the Department of Defense Inspector General (DODIG); the Inspectors General (IG) of the Air Force, the Army, the Marine Corps, and the Navy; and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To address our objectives, we used two primary sources of data: (1) closed military whistleblower reprisal case data from the Department of Defense Office of Inspector General's (DODIG) case management system and (2) a randomly selected sample of DODIG's closed military whistleblower reprisal case files. DODIG provided us with information for all military whistleblower reprisal cases closed from October 1, 2011, through September 30, 2014, and all cases open as of October 1, 2014. We were unable to report the fiscal year 2012 data because DODIG transitioned to a new case management system in December 2012, and data from fiscal year 2012 were not reliable as a result of the data migration, according to DODIG officials.
In addition, DODIG officials told us that they verified and corrected data as necessary for all cases closed in fiscal years 2013 and 2014 because the case management system was new and investigators had not been consistently recording information. We assessed the reliability of DODIG's fiscal years 2013 and 2014 data—by reviewing related documentation, interviewing knowledgeable officials, and comparing selected fields, such as open and closed dates, with case file records from our sample—and concluded that the data were sufficiently reliable for reporting the average lengths of investigations. Further, we used the data for cases closed in fiscal year 2013 to select the sample for our case-file review, discussed below. We chose cases from this period for the file review because of DODIG's case management system transition in December 2012 and statements from DODIG officials that data from cases closed in the old case management system were not as complete as data from cases closed in the new case management system. We also chose this period because the National Defense Authorization Act for Fiscal Year 2014, effective December 26, 2013, expanded the amount of time a servicemember has to report a reprisal allegation from 60 days to 365 days. We selected a stratified random sample of 135 cases from the 538 cases closed in fiscal year 2013. We stratified the population into six strata by combining three categories of case status and two categories of investigation status (see table 5 below). We calculated the sample sizes to achieve a desired precision of plus or minus 10 percentage points or fewer for a percentage estimate of the total population (N=538) at the 95 percent confidence level. We then adjusted the sample sizes to achieve a desired precision of plus or minus 10 percentage points or fewer for a percentage estimate at the 95 percent confidence level for DODIG Oversight cases (N=344, strata 3 and 4) and Fully Investigated cases (N=203, strata 1, 3, and 5). During the course of our review, we removed 11 out-of-scope cases, which reduced the original sample size from 135 to 124; we found that 2 of the cases were open, 1 case was classified and had limited documentation to review, and 8 cases were investigations of improper mental health examinations rather than reprisal. This reduced sample of 124 cases is generalizable to the estimated population of in-scope cases. We generalized the results of our sample to the estimated population of 498 cases DODIG closed in fiscal year 2013. All estimates of percentages presented in this report have a margin of error of plus or minus 10 percentage points or fewer, unless otherwise noted. Further, all estimates of medians and averages presented in this report have a relative error of plus or minus 20 percent of the estimate, unless otherwise noted. To determine the extent to which the Department of Defense (DOD) has met statutory notification requirements and internal timeliness requirements for completing military whistleblower reprisal investigations, we calculated the timeliness of cases using case data from DODIG's case management system for military whistleblower reprisal cases closed in fiscal years 2013 and 2014 and compared the average timeliness to the regulatory 180-day requirement. We removed one closed case from the timeliness calculations because the record produced a negative case processing time; the closed date preceded the open date.
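The precision target described above can be sanity-checked with a short calculation. The sketch below treats the 124-case sample as simple random sampling with a finite population correction; the actual estimates used stratified estimators, so this is an approximation of the stated design, not a replication of it.

    # Approximate check of the stated sampling precision, treating the sample as
    # simple random sampling with a finite population correction; GAO's actual
    # estimates were stratified, so this is an illustration only.
    import math

    def margin_of_error(n, N, p=0.5, z=1.96):
        """Half-width of the 95 percent confidence interval for an estimated
        proportion; p=0.5 gives the worst case."""
        se = math.sqrt(p * (1 - p) / n * (N - n) / (N - 1))
        return z * se

    # 124 in-scope sample cases from an estimated population of 498 closed cases
    moe = margin_of_error(124, 498)
    print(f"+/- {100 * moe:.1f} percentage points")  # about +/- 7.6

The result falls within the plus or minus 10 percentage point target stated above.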
In addition, for all cases that were open as of September 30, 2014, we analyzed how long the cases had been open, according to the fiscal year in which the complaints were received. To determine the extent to which DOD met the statutory requirement to notify servicemembers in cases lasting longer than 180 days about delays in the investigation in fiscal year 2013, we reviewed the 124 case files in our sample for evidence that DOD had sent the required letter in cases lasting longer than 180 days. For cases where there was evidence that DOD had sent the required letter, we recorded the reasons provided for the delay as well as the estimated completion date. We calculated the median estimated time frame in the letters and compared this to the median completion date for these cases to determine the accuracy of DOD's estimated time frames. In order to assess the reliability of DODIG's data, we used case file documentation to determine the open and close dates of the 124 cases in our sample and calculated total case time for each case. We then compared the total case time we recorded for the sample cases to the total case time for those cases in DODIG's data, and we found a mean difference of 2 days. We further assessed the data through discussions with officials responsible for the data and concluded that the data were sufficiently reliable for reporting the average lengths of investigations. Further, we reviewed relevant documents including 10 U.S.C. § 1034, as amended, and its implementing directive on military whistleblower protections, DOD Directive 7050.06, Military Whistleblower Protection (July 23, 2007). After we sent our draft report for comment, DODIG issued an updated directive on April 17, 2015, which we also reviewed. Finally, we interviewed officials about methods for tracking investigations and processes for sending required notifications to servicemembers that allege reprisal. We also collected relevant documentation, such as standard operating procedures and investigative guidance, from DODIG and the service Inspectors General (IG) for the Air Force, the Army, the Marine Corps, and the Navy. We also spoke with officials from DODIG's Information Systems directorate to determine which variables to request from DODIG's case management system. To determine the extent to which DODIG's whistleblower case management system supports oversight of the military whistleblower reprisal program, we obtained and analyzed closed case data from each of the service IGs for cases closed from fiscal year 2012 through fiscal year 2014. We assessed the reliability of service IG data from fiscal years 2013 and 2014—by reviewing related documentation and interviewing knowledgeable officials—and concluded that the data were sufficiently reliable for our purposes. We compared selected variables for all cases by matching DODIG's data to the service IG data to identify duplicate cases and missing information, and to determine whether DODIG has visibility of all ongoing and closed military whistleblower reprisal cases. We selected the variables present in both DODIG's and the service IGs' data to compare for matching cases in consultation with DODIG and service officials; those variables include servicemember name, case identifiers, open date, and closed date. Further, we interviewed DODIG officials responsible for the development of the case management system and the proposed expansion of the case management system to the service IGs and collected supporting documentation.
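The matching approach described above is essentially a join on shared identifying variables. The following sketch uses pandas with illustrative column names; the actual field names and matching rules were those agreed on with DODIG and service officials.

    # Illustrative sketch of matching service IG case data to DODIG data to find
    # cases missing from DODIG's system; column names are our own assumptions.
    import pandas as pd

    dodig = pd.DataFrame({"case_id": ["A1", "A2"], "open_date": ["2013-01-10", "2013-02-01"]})
    service = pd.DataFrame({"case_id": ["A1", "A3"], "open_date": ["2013-01-10", "2013-03-05"]})

    merged = service.merge(dodig, on="case_id", how="left",
                           suffixes=("_svc", "_dodig"), indicator=True)

    # Service cases with no matching DODIG record indicate gaps in DODIG visibility
    print(merged.loc[merged["_merge"] == "left_only", "case_id"].tolist())  # ['A3']

    # Duplicate case identifiers within a dataset can be flagged the same way
    # (none exist in this toy data)
    print(service[service.duplicated("case_id", keep=False)])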
We also reviewed DOD memorandums regarding the case management system expansion and cost information for the next phase of case management system development and compared these documents to relevant program management criteria. In addition, we interviewed officials from DODIG's Administrative Investigations component as well as its Whistleblower Reprisal Investigations and Investigations of Senior Officials directorates, and service IG officials regarding the case management system expansion. To determine the extent to which DOD has processes to ensure oversight of military whistleblower reprisal investigations conducted by the service IGs, we used our stratified random sample of 124 case files retained by DODIG for military whistleblower reprisal cases that DODIG closed from October 1, 2012, through September 30, 2013. Based on our review of whistleblower reprisal investigation policies and procedures and quality standards for investigations established by the Council of the Inspectors General on Integrity and Efficiency (CIGIE), we created a data-collection instrument to identify the key characteristics of whistleblower reprisal cases, determine the reliability of various fields in the case management system, and assess the completeness and quality of files. We also developed a standard approach to electronically review files, using DODIG's new case management system, to ensure we reviewed all cases consistently. For example, for all cases, we reviewed the original complaint followed by the report of investigation and interview transcripts, among other things. We refined this data-collection instrument and our approach by first reviewing 12 pilot case files selected by DODIG that were not part of the 135 originally identified in the sample. Specifically, the pilot consisted of cases that DODIG approved in the first three quarters of fiscal year 2014, including 3 cases investigated by DODIG, 3 cases investigated by the Army, 2 cases investigated by the Air Force, 2 cases investigated by the Navy, and 2 cases investigated by the Marine Corps. Of those 12 cases, 11 were fully investigated and 6 were substantiated. After the pilot, our methodology for reviewing the randomly sampled cases required each case to be reviewed first by one analyst and then by a second analyst, who noted any disagreement with the first analyst's assessment. Analysts discussed the areas of disagreement and resolved any disagreement by identifying and reviewing supporting documentation in the case files. Further, two GAO investigators with professional investigative experience reviewed a portion of the sample and concurred with the analysts' assessment of the cases, in accordance with CIGIE guidelines for quality-assurance reviews. We did not question DODIG's judgment in these cases. To assess case-file completeness, we reviewed DODIG's process, 10 U.S.C. § 1034 and its implementing directive, and other guidance, and consulted with DODIG officials, and we identified 13 elements to include in our case-file review. These 13 elements support the conclusions reached in the case, indicate compliance with the law or directive, or manage the internal communication not specifically outlined by law or directive. The 13 elements we included for our case-file review are the following:
1. notification to DODIG from the service IG that received the complaint,
2. evidence supporting the recommended outcome,
4. report of investigation or other written product,
6. interview with servicemember,
7. interview with responsible management official,
8. DODIG oversight worksheet,
9. correspondence between DODIG and the servicemember regarding investigations taking longer than 180 days,
10. correspondence between DODIG and the Secretary of Defense regarding investigations taking longer than 180 days,
11. record of corrective action taken,
12. correspondence between DODIG and the service IGs regarding the final case outcome, and
13. correspondence between DOD and the servicemember regarding the final outcome of the case.
Some of these elements included specific documents. For example, the DODIG oversight worksheet (item 8 above) was a specific document. Other elements could be reflected in multiple documents. For example, the evidence supporting the recommended outcome (item 2 above) could be in a larger report, in a summary, or in its own document. We determined the completeness of each case file selected in our sample individually, since not all 13 elements were necessary in every case. For example, some of the 13 elements would need to be present in a file only if an investigation was conducted by a service IG or was a full investigation. We adjusted the required number of elements based on the specific circumstances of each case and calculated completeness based on that adjusted baseline. We categorized the case files by the average number of elements missing for each type of case: dismissed DODIG intakes, service IG preliminary inquiries, service IG full investigations, and DODIG full investigations. We also interviewed investigators and supervisors on DODIG's oversight team and officials at each of the service headquarters IGs. In addition, we interviewed six field-level investigators from the Army, the Navy, and the Air Force IGs regarding required training, available guidance, and investigative processes, including assessing independence. We used data provided by each of the services for cases closed in fiscal year 2014 and a simple random sampling technique to select investigators for interviews. We selected 12 investigators from the 216 investigations closed by the Army, 10 investigators from the 35 investigations closed by the Navy, and 10 investigators from the 110 investigations closed by the Air Force. Because field-level service IG investigators typically rotate every 2 to 3 years, we were only able to contact and speak with two investigators from each service IG. In addition, we reviewed training materials, guidance, and requirements for investigators from DODIG and each of the service IGs as well as their processes for assessing investigator independence. We also attended training sessions related to conducting military whistleblower reprisal investigations at DODIG and the Army IG as well as two DODIG Administrative Investigations training symposia, which contained sessions on whistleblower reprisal investigations, and interviewed an official from CIGIE's Advanced Training Institute. Additionally, we compared DOD's independence processes to CIGIE quality standards for investigations and generally accepted government auditing standards (GAGAS). We conducted this performance audit from April 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
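The adjusted-baseline completeness scoring described above can be expressed as a short routine. The sketch below is illustrative only; the applicability rules shown (which elements drop out for which case types) are simplified assumptions, not GAO's full decision rules.

    # Illustrative sketch of adjusted-baseline completeness scoring; the
    # applicability rules below are simplified assumptions for demonstration.
    ALL_ELEMENTS = set(range(1, 14))  # the 13 case-file elements

    def required_elements(case):
        """Drop elements not applicable to this case type; for example, the
        DODIG oversight worksheet applies only to service IG investigations."""
        required = set(ALL_ELEMENTS)
        if case["investigated_by"] != "service IG":
            required.discard(8)   # DODIG oversight worksheet
        if not case["full_investigation"]:
            required.discard(7)   # responsible management official interview
        return required

    def completeness(case):
        required = required_elements(case)
        return len(required & set(case["elements_present"])) / len(required)

    case = {"investigated_by": "service IG", "full_investigation": False,
            "elements_present": [1, 2, 4, 6, 8]}
    print(f"{completeness(case):.0%} of required elements present")  # 42%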
This appendix provides information on the characteristics of military whistleblower reprisal cases based on our case file review of 124 cases closed from October 1, 2012, through September 30, 2013, as well as data from the Department of Defense Office of Inspector General's (DODIG) case management system for cases closed in fiscal years 2013 and 2014. Generally, the service affiliations of the servicemembers that alleged reprisal did not match the overall proportions in the military population. See figure 3 for a comparison of each service's share of the servicemember population with its share of closed reprisal cases. Through our file review of cases closed in fiscal year 2013, we estimate that the majority of servicemembers filed reprisal complaints with a service Inspector General (IG) (70 percent). Servicemembers also filed reprisal complaints with the DODIG Hotline (23 percent) and through Members of Congress (6 percent). According to Department of Defense (DOD) Directive 7050.06, a servicemember who makes or prepares to make a protected communication is a whistleblower. Based on our review of case files closed in 2013, we estimate that the primary reasons for making a protected communication were to report allegations of a violation of law or regulation (49 percent), abuse of authority (39 percent), or a general communication to the IG (23 percent). Other reasons for making a protected communication included waste of funds or resources (14 percent), danger to public health or safety (11 percent), and sexual assault (8 percent), among others. DOD officials told us that regulations cover virtually every aspect of military life, including how to conduct personnel ratings, so servicemembers often cite violations of regulations in their complaints. About 40 percent of cases in our sample included a protected communication regarding a personnel regulation violation. Figure 4 shows the reasons servicemembers made protected communications by frequency. Further, based on our case file review, we estimate that the primary authorized recipients of protected communications for cases closed in fiscal year 2013 were the chain of command (62 percent), Inspectors General (53 percent), and Members of Congress (18 percent). DOD officials told us that the inclusion of the chain of command in the list of authorized protected communication recipients has resulted in an increase in the number of servicemembers that qualify as whistleblowers because reporting issues to the chain of command is a standard military procedure. Figure 5 shows the authorized recipients to whom servicemembers made protected communications by frequency. A whistleblower reprisal complaint must also include an allegation that an action was taken in reprisal against a servicemember. DOD Directive 7050.06 defines reprisal as taking or threatening to take an unfavorable personnel action, or withholding or threatening to withhold a favorable personnel action, for making or preparing to make a protected communication. Based on our file review of cases closed in fiscal year 2013, we estimate that the most common forms of reprisal alleged by servicemembers were that they received a poor performance evaluation (44 percent), disciplinary action (39 percent), or an unfavorable assignment or reassignment (27 percent). Figure 6 shows the frequency of the various types of personnel actions.
DODIG evaluates cases and generally closes them based on the answers to four questions, which investigators use to determine whether a case has all of the elements of reprisal. Specifically: (1) Did the servicemember make or prepare to make a protected communication, or was the servicemember perceived as having made or prepared to make a protected communication? (2) Was an unfavorable personnel action taken or threatened against the servicemember, or was a favorable personnel action withheld or threatened to be withheld following the protected communication? (3) Did the responsible management official have knowledge of the servicemember's protected communication or perceive the servicemember as making or preparing to make a protected communication? (4) Would the same personnel action have been taken, withheld, or threatened absent the protected communication? Based on our review of randomly selected case files closed in fiscal year 2013, we estimate that the most common reason for closing a case was that DODIG determined that the responsible management official would have taken the personnel action absent the protected communication (question 4—37 percent), which means that the servicemember's protected communication did not have an effect on the responsible official's decision to take the personnel action. DODIG also closed cases because the servicemember did not make a protected communication (question 1—4 percent), there was no personnel action (question 2—9 percent), or the responsible management official who took the personnel action had no knowledge that the servicemember made or prepared to make a protected communication (question 3—3 percent). Additional reasons DODIG closed cases included timeliness—the servicemember did not file a reprisal complaint within 60 days of gaining knowledge of the personnel action—nonresponsive servicemembers, and withdrawals, among other reasons. See figure 7 for DODIG's reasons for closing military reprisal cases by frequency. Further, based on our case-file review of cases closed in fiscal year 2013, we estimate that the service IGs closed the majority of cases in fiscal year 2013 (70 percent) after conducting a preliminary inquiry and prior to a full investigation. Our analysis of DODIG data on military whistleblower reprisal cases closed in fiscal year 2014 shows that DODIG substantiated 9 percent of the cases that were fully investigated by DODIG investigators. In addition, our analysis shows that the service IGs substantiated 6 percent of cases that proceeded past the intake phase. DODIG officials stated that they calculate substantiation rates by the number of cases substantiated out of the number of cases fully investigated; however, as discussed in the report, we are unable to report on the total number of cases fully investigated by the service IGs because DODIG's data were not reliable for this purpose. As such, we report the service IGs' substantiation rates out of the number of cases that proceeded to further investigation after meeting the general intake requirements—a personnel action following a protected communication. See table 6 for fiscal year 2013 and 2014 substantiation rates. The following are GAO's comments on the Department of Defense Inspector General's (DODIG) letter dated May 1, 2015, in addition to our evaluation of agency comments on page 50.
1. We disagree with DODIG's statement comparing the timeliness of its intake process because we were not able to compare the timeliness of cases by case type in our 2012 report with this report due to DODIG data limitations. Specifically, in our 2012 report, we found that DODIG's data were not reliable for the purposes of reporting investigation lengths and therefore used sample data to report the timeliness of cases DODIG closed between January 1, 2009, and March 31, 2011. We reported on the number of cases closed before full investigation and cases that were full investigations, which we determined by reviewing the case file documentation. In this report, we found DODIG's timeliness data reliable for our purposes of reporting the average lengths of investigations; however, we were not able, using DODIG's data, to distinguish between the number of cases that were fully investigated by the service IGs and the number of cases that the services closed with some investigative work, but prior to a full investigation. DODIG's data were not reliable for these purposes due to DODIG coding errors.

2. We disagree with DODIG's statement that it met statutory notification requirements in the majority of closed cases because it did not always meet those requirements. Specifically, in 2012, we found that DOD had stopped providing any notifications to servicemembers. In 2015, we found that DOD notified servicemembers about the status of investigations that took longer than 180 days in an estimated 53 percent of the cases that required notification. In those instances where the letters were provided to servicemembers, we estimated that DOD's median notification time was 353 days after the servicemember filed the complaint, almost twice as long as the 180-day requirement. We acknowledge that DOD's decision to reestablish the practice of sending 180-day notification letters is a positive step; however, we continue to believe that notifying servicemembers about half of the time is not in accordance with statutory requirements and that DOD should send the letters within 180 days of receipt of an allegation of reprisal, not a median of 353 days after receipt.

3. We disagree with DODIG's statements regarding our characterization of its case management system, because we concluded that DODIG does not have complete oversight of all service reprisal investigations. Specifically, a large amount of detailed information about the cases, such as investigative events, resides in the services' case management systems. Further, we found that DODIG's system did not have a record of at least 22 percent of service investigations both open as of September 30, 2014, and closed in fiscal years 2013 and 2014. DODIG is responsible for the oversight of these cases. In addition, we believe that DODIG's agile development of the case management system—and the large gaps between development phases—may be the cause of some of the issues we found. DODIG officials told us that the length between phases of development was longer than originally intended by DODIG, and the system still needs to refine some of its capabilities, such as aggregating and extracting data for reporting purposes. DODIG intended to complete the system in February 2014 and still had not done so more than a year later. Further, we found that DODIG made changes to its data in March and April of 2014, after it was notified of our audit. We believe that DODIG should have been making sure its data were reliable on an ongoing basis.
DODIG also stated that we did not address its approaches for ensuring data reliability; however, we did include a discussion of some of these approaches in our report, such as its dashboards to identify errors and its quarterly quality assurance processes, on pages 24 and 37. Finally, DODIG listed system capabilities, such as the ability to track overall case age, which we incorporated into the report and about which we noted limitations where relevant.

4. We disagree with DODIG's statements regarding feedback it provides to service IG investigators because DODIG's Council of the Inspectors General on Integrity and Efficiency (CIGIE) trainings do not reach all field-level investigators, as we stated in our report. In addition, the sample case-closure memorandum that DODIG provided to us did not contain such criteria. Further, in our report, we define the criteria against which DODIG oversight investigators assess service IG investigator independence. However, we found that in the absence of interview transcripts, which were present for servicemember interviews in only 43 percent of cases closed in fiscal year 2013, oversight investigators have limited tools to determine whether the investigator demonstrated IG impartiality during interviews.

5. We disagree with DODIG's comment that we did not include information related to DODIG's progress since 2012 because we addressed DODIG's stated improvements on the following pages in our report: (1) DODIG's staffing increases, p. 19; (2) new case management system, p. 21; (3) data clean-up to ensure data reliability, p. 25; (4) issuance of policy guidance to the service IGs regarding the 180-day notification requirements, p. 11; (5) Administrative Investigations manual, p. 42; (6) issuance of October 2014 military whistleblower reprisal investigations guide, p. 38; and (7) reissuance of DOD Directive 7050.06. The directive was issued on April 17, 2015, after we sent our draft report to DOD for agency comments, and we incorporated it into our final report as necessary, p. 52. However, the directive dated July 2007 was in place during the scope of our review and, as such, we used it for criteria where applicable.

In addition to the contact named above, Lori Atkinson (Assistant Director), James Ashley, Tracy Barnes, Gary Bianchi, Molly Callaghan, Sara Cradic, Cynthia Grant, Robert Graves, Christopher Hayes, Erica Reyes, Mike Silver, Amie Steele, and Erik Wilkins-McKee made significant contributions to this report.

Whistleblower Protection: Additional Actions Needed to Improve DOJ's Handling of FBI Retaliation Complaints. GAO-15-112. Washington, D.C.: January 23, 2015.
Whistleblower Protection Program: Opportunities Exist for OSHA and DOT to Strengthen Collaborative Mechanisms. GAO-14-286. Washington, D.C.: March 19, 2014.
Whistleblower Protection: Actions Needed to Improve DOD's Military Whistleblower Reprisal Program. GAO-12-362. Washington, D.C.: February 22, 2012.
Tax Whistleblowers: Incomplete Data Hinders IRS's Ability to Manage Claim Processing Time and Enhance External Communication. GAO-11-683. Washington, D.C.: August 10, 2011.
Criminal Cartel Enforcement: Stakeholder Views on Impact of 2004 Antitrust Reform Are Mixed, but Support Whistleblower Protection. GAO-11-619. Washington, D.C.: July 25, 2011.
Whistleblower Protection: Sustained Management Attention Needed to Address Long-Standing Program Weaknesses. GAO-10-722. Washington, D.C.: August 17, 2010.
Defense Contracting Integrity: Opportunities Exist to Improve DOD’s Oversight of Contractor Ethics Programs. GAO-09-591. Washington, D.C.: September 22, 2009. Whistleblower Protection Program: Better Data and Improved Oversight Would Help Ensure Program Quality and Consistency. GAO-09-106. Washington, D.C.: January 27, 2009. Justice and Law Enforcement: Office of Special Counsel Needs to Follow Structured Life Cycle Management Practices for Its Case Tracking System. GAO-07-318R. Washington, D.C.: February 16, 2007. U.S. Office of Special Counsel: Strategy for Reducing Persistent Backlog of Cases Should Be Provided to Congress. GAO-04-36. Washington, D.C.: March 8, 2004. The Federal Workforce: Observations on Protections From Discrimination and Reprisal for Whistleblowing. GAO-01-715T. Washington, D.C.: May 9, 2001. Whistleblower Protection: VA Did Little Until Recently to Inform Employees About Their Rights. GAO/GGD-00-70. Washington, D.C.: April 14, 2000.
Whistleblowers play an important role in safeguarding the federal government against waste, fraud, and abuse. However, reporting wrongdoing outside the chain of command conflicts with military guidance, which emphasizes using the chain of command to resolve problems. Whistleblowers who make a report risk reprisal from their unit, such as being demoted or separated. DODIG is responsible for conducting and overseeing military whistleblower reprisal investigations.

GAO was asked to examine DOD's oversight of military whistleblower reprisal investigations. This report examines the extent to which (1) DOD met statutory notification and internal timeliness requirements for completing military whistleblower reprisal investigations, (2) DODIG's whistleblower case management system supports oversight of reprisal investigations, and (3) DOD has processes to ensure oversight of service IG-conducted reprisal investigations. GAO analyzed DODIG and service IG data for cases closed in fiscal years 2013 and 2014 and cases open as of September 30, 2014, and reviewed a generalizable random sample of 124 military reprisal cases closed in fiscal year 2013.

The Department of Defense (DOD) did not meet statutory military whistleblower reprisal 180-day notification requirements in about half of reprisal investigations closed in fiscal year 2013, and DOD's average investigation time for closed cases in fiscal years 2013 and 2014 was 526 days, almost three times DOD's internal 180-day requirement. In 2012, GAO made recommendations to improve investigation timeliness, and DOD has taken some actions to address those recommendations. However, based on a random sample of 124 cases, GAO estimated that for about 47 percent of the cases that DOD took longer than 180 days to close in fiscal year 2013, there was no evidence that DOD sent the required notification letters. For cases in which DOD sent the required letter, GAO estimated that the median notification time was about 353 days after the servicemember filed the complaint, and on average the letters significantly underestimated the expected investigation completion date. DOD does not have a tool, such as an automated alert, to help ensure compliance with the statutory requirement to notify servicemembers within 180 days about delays in investigations. Without a tool for DOD to ensure that servicemembers receive reliable, accurate, and timely information about their investigations, servicemembers may be discouraged from reporting wrongdoing.

DOD's Office of Inspector General's (DODIG) newly developed case management system, which it established to improve monitoring, is separate from the service IGs' systems, limiting DODIG's ability to provide oversight of all military reprisal investigations. GAO found that DODIG's system did not have a record of at least 22 percent of service-conducted reprisal investigations that were closed in fiscal years 2013 and 2014 and investigations open as of September 30, 2014. DODIG officials stated that they plan to expand DODIG's case management system to the service IGs by the end of fiscal year 2016 to improve DODIG's visibility over investigations. However, DODIG does not have an implementation plan for the expansion, and service IG officials stated that they have unique requirements that they would like to have incorporated into the system prior to expansion.
Expanding the case management system to the service IGs without developing an implementation plan that, among other things, addresses the needs of both DODIG and the service IGs, puts DOD at risk of creating a system that will not strengthen its oversight of reprisal investigations. DOD does not have formalized processes to help ensure effective oversight of military whistleblower reprisal investigations conducted by service IGs. DODIG established an oversight investigator team to review service IG investigations, but it has provided oversight investigators with limited guidance on how to review or document service IG investigations. Specifically, GAO estimated that for about 45 percent of service investigations closed in fiscal year 2013, the oversight worksheets were missing narrative to demonstrate that the oversight investigator had thoroughly documented all case deficiencies or inconsistencies. GAO also found that these files did not include documentation of DOD's analysis of the effect of noted deficiencies on the investigation's outcome because DOD has provided limited instruction on how to review service IG cases. Without additional guidance on oversight review procedures and documentation requirements to formalize the oversight process, it will be difficult for DOD to ensure that reprisal complaints are investigated and documented consistently. GAO recommends that DOD develop a tool to help ensure compliance with the statutory notification requirement, develop an implementation plan for expanding DODIG's case management system, and issue guidance governing the oversight process, among other things. DOD concurred, but raised issues with GAO's presentation of its findings. GAO disagrees and addresses these issues in this report.
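GAO's recommendation calls for a tool, such as an automated alert, to track the 180-day notification requirement, but the report does not prescribe a design. The Python sketch below shows one minimal way such a check could work against a case-tracking system; the case IDs and dates are entirely hypothetical.

```python
from datetime import date, timedelta

NOTIFICATION_DEADLINE_DAYS = 180  # statutory notification threshold

def cases_needing_notification(open_cases, today):
    """Flag open reprisal cases whose 180-day notification letter is
    overdue. `open_cases` maps a case ID to a (complaint_received,
    letter_sent) pair, where letter_sent is None if no letter was sent."""
    overdue = []
    for case_id, (received, letter_sent) in open_cases.items():
        deadline = received + timedelta(days=NOTIFICATION_DEADLINE_DAYS)
        if letter_sent is None and today >= deadline:
            overdue.append((case_id, (today - deadline).days))
    return overdue

# Hypothetical data: the first case is past its deadline, the second is
# still within the 180-day window.
cases = {
    "2013-0412": (date(2013, 1, 10), None),
    "2013-0977": (date(2013, 6, 2), None),
}
for case_id, days_over in cases_needing_notification(cases, date(2013, 9, 30)):
    print(f"case {case_id}: notification letter {days_over} days overdue")
```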
The U.S. surface and maritime transportation systems facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the systems. The systems include 3.9 million miles of public roads, 121,000 miles of major private railroad networks, and 25,000 miles of commercially navigable waterways. They also include over 500 major urban public transit operators in addition to numerous private transit operators, and more than 300 ports on the coasts, Great Lakes, and inland waterways.

Maintaining the transportation system is critical to sustaining America's economic growth. Efficient mobility systems are essential facilitators of economic development—cities could not exist and global trade could not occur without systems to transport people and goods. DOT has adopted improved mobility—to "shape an accessible, affordable, reliable transportation system for all people, goods, and regions"—as one of its strategic goals. To achieve this goal, it has identified several desired outcomes, including (1) improving the physical condition of the transportation system, (2) reducing transportation time from origin to destination, (3) increasing the reliability of trip times, (4) increasing access to transportation systems, and (5) reducing the cost of transportation services.

The relative roles, responsibilities, and revenue sources of each sector involved in surface and maritime transportation activities—including the federal government, other levels of government, and the private sector—vary across modes. For public roads, ownership is divided among federal, state, and local governments—over 77 percent of the roads are owned by local governments; 20 percent are owned by the states, including most of the Interstate Highway System; and 3 percent are owned by the federal government. While the federal government owns few roads, it has played a major role in funding the nation's highways. For example, from 1954 through 2001, the federal government invested over $370 billion (in constant 2001 dollars) in the Interstate Highway System. With the completion of the interstate system in the 1980s—and continuing with passage of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and its successor legislation, TEA-21, in 1998—the federal government shifted its focus toward preserving and enhancing the capacity of the system.

Under the Federal Aid Highway Program, the Federal Highway Administration (FHWA) provides funds to states to construct, improve, and maintain the interstate highway system and other parts of the U.S. road network and to replace and rehabilitate bridges. TEA-21 established, among other things, a mechanism for ensuring that the level of federal highway program funds distributed to the states would be more closely linked than before to the highway user tax receipts credited to the Highway Account of the Highway Trust Fund. These user taxes include excise taxes on motor fuels (gasoline, gasohol, diesel, and special fuels) and truck-related taxes on truck tires, sales of trucks and trailers, and the use of heavy vehicles. FHWA distributes highway program funds to the states through annual apportionments according to statutory formulas that consider a variety of factors, including vehicle miles traveled on the interstate system, motor fuel usage by each state's highway users, and other factors.
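The statutory apportionment formulas are considerably more detailed than can be shown here, but the general mechanics (converting each state's share of national factor totals into a weighted share of program funds) can be sketched as follows. The states, factor values, and weights in this Python example are invented for illustration and do not reflect the actual statutory factors or their weights.

```python
def apportion(total_funds, state_factors, weights):
    """Distribute funds by each state's weighted share of the national
    totals for each factor; a sketch of formula apportionment only."""
    totals = {f: sum(s[f] for s in state_factors.values()) for f in weights}
    shares = {
        state: sum(w * s[f] / totals[f] for f, w in weights.items())
        for state, s in state_factors.items()
    }
    return {state: total_funds * share for state, share in shares.items()}

# Invented two-state example; the weights must sum to 1.
factors = {
    "State A": {"interstate_vmt": 60e9, "fuel_use": 3.0e9},
    "State B": {"interstate_vmt": 40e9, "fuel_use": 2.0e9},
}
weights = {"interstate_vmt": 0.5, "fuel_use": 0.5}
print(apportion(1_000_000_000, factors, weights))
# {'State A': 600000000.0, 'State B': 400000000.0}
```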
The federal share for project funding is usually 80 percent but can vary among programs, road types, and states. State and local governments then “match” federal funds with funds from other sources, such as state or local revenues. While the federal government’s primary role has been to provide capital funding for the interstate system and other highway projects, state and local governments provide the bulk of the funding for public roads in the United States and are responsible for operating and maintaining all nonfederal roads including the interstate system. The sources of state highway revenues include user charges, such as taxes on motor fuels and motor vehicles and tolls; proceeds of bond issues; General Fund appropriations; and other taxes and investment income. The sources of local highway revenues include many of the user charges and other sources used by state governments, as well as property taxes and assessments. The U.S. transit system includes a variety of multiple-occupancy vehicle services designed to transport passengers on local and regional routes. Capital funding for transit came from the following sources in 2000: 47 percent of the total came from the federal government, 27 percent from transit agencies and other nongovernmental sources, 15 percent from local governments, and 11 percent from states. In that same year, the sources of operating funds for transit included passenger fares (36 percent of operating funds); state governments (20 percent); local governments (22 percent); other funds directly generated by transit agencies and local governments through taxes, advertising, and other sources (17 percent); and the federal government (5 percent). The Federal Transit Administration (FTA) provides financial assistance to states and local transit operators to develop new transit systems and improve, maintain, and operate existing systems. This assistance includes (1) formula grants to provide capital and operating assistance to urbanized and nonurbanized areas and to organizations that provide specialized transit services to the elderly and disabled persons; (2) competitive capital investment grants for constructing new fixed guideway systems and extensions to existing ones, modernizing fixed guideway systems, and investing in buses and bus-related facilities; (3) assistance for transit planning and research; and (4) grants to local governments and nonprofit organizations to connect low-income persons and welfare recipients to jobs and support services. Funding for federal transit programs is generally provided on an 80 percent/20 percent federal to local match basis. Federal support for transit projects comes from the Highway Trust Fund’s highway and transit accounts and from the General Fund of the U.S. Treasury. The respective roles of the public and private sector and the revenue sources vary for passenger as compared with freight railroads. With regard to passengers, the Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. Since its founding, Amtrak has rebuilt rail equipment and benefited from significant public investment in track and stations, especially in the Northeast corridor, which runs between Boston, Mass., and Washington, D.C. The federal government, through the Federal Railroad Administration (FRA), has provided Amtrak with $39 billion (in 2000 dollars) for capital and operating expenses from 1971 through 2002. 
Federal payments are a significant revenue source for Amtrak’s capital budget, but not its operating budget. In fiscal year 2001, for example, the sources of Amtrak’s capital funding were private sector debt financing (59 percent of total revenues), the federal government (36 percent), and state and local transportation agencies (5 percent). In that same year, the sources of funding for Amtrak’s operating budget were passenger fares (59 percent of total revenues), other business activities and commuter railroads (34 percent), and the federal government and state governments (7 percent). The role of the federal government in providing financial support to Amtrak is currently under review amid concerns about the corporation’s financial viability and discussions about the future direction of federal policy toward intercity rail service. With regard to freight, the private sector owns, operates, and provides almost all of the financing for freight railroads. Since the 1970s, the railroad industry has experienced many changes including deregulation and industry consolidation. Currently, the federal government plays a relatively small role in financing freight railroad infrastructure by offering some credit assistance to state and local governments and railroads for capital improvements. The U.S. maritime transportation system primarily consists of waterways, ports, the intermodal connections (e.g., inland rail and roadways) that permit passengers and cargo to reach marine facilities, and the vessels and vehicles that move cargo and people within the system. The maritime infrastructure is owned and operated by an aggregation of state and local agencies and private companies, with some federal funding provided by the Corps of Engineers, the U.S. Coast Guard, and DOT’s Maritime Administration. The Corps of Engineers provides funding for projects to deepen or otherwise improve navigation channels, maintain existing waterways, and construct and rehabilitate inland waterway infrastructure, primarily locks and dams. Funding for channel operations and maintenance generally comes from the Harbor Maintenance Trust Fund supported by a tax on imports, domestic commodities, and other types of port usage. The costs of deepening federal channels are shared by the federal government and nonfederal entities. The Inland Waterways Trust Fund, supported by a fuel tax, funds one-half of the inland and intra-coastal capital investments. Coast Guard funding promotes (1) mobility by providing aids to navigation, icebreaking services, bridge administration, and traffic management activities; (2) security through law enforcement and border control activities; and (3) safety through programs for prevention, response, and investigation. DOT’s Maritime Administration provides loan guarantees for the construction, reconstruction, or reconditioning of eligible export vessels and for shipyard modernization and improvement. It also subsidizes the operating costs of some companies that provide maritime services and provides technical assistance to state and local port authorities, terminal operators, the private maritime industry, and others on a variety of topics (e.g., port, intermodal, and advanced cargo handling technologies; environmental compliance; and planning, management, and operations of ports). Public sector spending (in 1999 dollars) has increased for public roads and transit between fiscal years 1991 and 1999, but stayed constant for waterways and decreased for rail, as shown in figure 1. 
Total public sector spending for public roads increased by 18.4 percent between fiscal years 1991 and 1999, from $80.6 billion to $95.5 billion (in 1999 dollars). Of those totals, the relative shares contributed by the federal government and by state and local governments remained constant from 1991 to 1999, as shown in figure 2. Contributions from state and local governments' own funds—that is, independent of federal grants to state and local governments—were approximately 75 percent, with the federal government contributing the remaining 25 percent. The increases in total public spending for roads reflect federal programmatic spending increases resulting from ISTEA in 1992 and TEA-21 in 1998, as well as increases in total state and local spending. In particular, since the passage of TEA-21, the federal government's contribution to total public expenditures on roads increased by 26.8 percent (in 1999 dollars), from $21.2 billion in fiscal year 1998 to $26.9 billion in fiscal year 2000, the latest year for which federal expenditure data are available. Although data on federal expenditures are not currently available for fiscal years after 2000, federal appropriations for fiscal years 2001 and 2002 reached $32.1 billion and $33.3 billion, respectively. Federal funding increases in those years largely resulted from adjustments required by the Revenue Aligned Budget Authority (RABA) provisions in TEA-21. Since TEA-21, the federal government has shifted its focus toward preserving and enhancing the capacity of public roads, while state and local government expenditures have been focused on maintaining and operating public roads. Appendix I contains additional information on the levels of capital investment and maintenance spending by the public sector.

Total public spending for transit increased by 14.8 percent between fiscal years 1991 and 1999 to just over $29 billion (in 1999 dollars). This mainly reflects increases in state and local expenditures, as federal expenditures for transit actually decreased slightly over this period to $4.3 billion in 1999. In fiscal year 2000, however, federal spending on transit increased by 21.5 percent, from $4.3 billion to $5.2 billion (in 1999 dollars). Although federal data on expenditures are not currently available for fiscal years after 2000, appropriations for fiscal years 2001 and 2002 reached $6.3 billion and $6.8 billion, respectively. State and local expenditures, independent of federal grants, increased to over $24 billion in 1999, accounting for over 85 percent of total public sector expenditures for transit, a share that has increased somewhat since 1991, as shown in figure 3.

Public sector spending on ports and waterways remained between $7.2 billion and $7.9 billion (in 1999 dollars) between fiscal years 1991 and 1999. This spending pattern reflects fairly steady levels of federal spending by the Corps of Engineers, the Coast Guard, and the Maritime Administration for water transportation expenditures. Expenditures by the Corps of Engineers and the Coast Guard comprise the bulk of federal spending for water transportation, and have remained at about $1.5 billion and $2 billion (in 1999 dollars) per year, respectively.
State and local expenditures, however, increased by 27.7 percent, from $2.4 billion in fiscal year 1991 to $3.1 billion in fiscal year 1999, and accounted for about 41 percent of total public water transportation expenditures in fiscal year 1999, having grown from about 34 percent of the total in fiscal year 1991, as shown in figure 4.

The public sector's role in the funding of freight railroads is limited since the private sector owns, operates, and provides almost all of the financing for freight railroads. In addition, since public sector expenditures for commuter rail and subways are considered public transit expenditures, public expenditures discussed here for passenger rail are limited to funding for Amtrak. Federal support for Amtrak has fluctuated somewhat throughout the 1990s, but has dropped off substantially in recent years, with fiscal years 2001 and 2002 appropriations of $520 million and $521 million, respectively. Sufficient data are not currently available to characterize trends in state and local governments' spending for intercity passenger rail.

The private sector plays an important role in the provision of transportation services in each mode. For example, while the private sector does not invest heavily in providing roads, it purchases and operates most of the vehicles for use on publicly provided roads. For freight rail, the private sector owns and operates most of the tracks as well as the freight trains that run on the tracks. In the maritime sector, many ports on the inland waterways are privately owned, as are freight vessels and towboats. Data on private sector expenditures on a national level are limited. However, available data show that private expenditures for transportation on roads, rail, and waterways rose throughout the 1990s. According to the U.S. Bureau of Economic Analysis' Survey of Current Business, individuals and businesses spent about $397 billion in 2000 for the purchase of new cars, buses, trucks, and other motor vehicles, a 57-percent increase from 1993 levels (in 2000 dollars). In addition to the purchase of vehicles, the private sector also invests in and operates toll roads and lanes; however, data on these investments are not currently available on a national level. According to the Survey of Current Business, freight railroads and other businesses spent over $11 billion for railroad infrastructure and rail cars in 2000, a 66-percent increase from 1991 (in 2000 dollars). In addition, private sector investment on ships and boats more than doubled between 1991 and 2000, to about $3.7 billion (in 2000 dollars). However, private investment in waterways also includes port facilities for loading and unloading ships and for warehousing goods. Data on these investments are also currently not available on a national level.

Federal projections show passenger and freight travel increasing over the next 10 years on all modes, due to population growth, increasing affluence, economic growth, and other factors. Passenger vehicle travel on public roads is expected to grow by 24.7 percent from 2000 to 2010. Passenger travel on transit systems is expected to increase by 17.2 percent over the same period. Intercity passenger rail ridership is expected to increase by 26 percent from 2001 to 2010. Finally, preliminary estimates by DOT also indicate that tons of freight moved on all surface and maritime modes—truck, rail, and water—are expected to increase by about 43 percent from 1998 through 2010, with the largest increase expected to be in tons moved by truck.
However, several factors in the forecast methodologies limit their ability to capture the effects of changes in travel levels on the surface and maritime transportation systems as a whole (see app. II for more information about the travel forecast methodologies). For example, a key assumption underlying most of the national travel projections we obtained is that capacity will increase as levels of travel increase; that is, the projections are not limited by possible future constraints on capacity such as increasing congestion. On the other hand, if capacity does not increase, future travel levels may be lower than projected. In addition, differences in travel measurements hinder direct comparisons between modes and types of travel. For example, intercity highway travel is not differentiated from local travel in FHWA's projections of travel on public roads, so projections of intercity highway travel cannot be directly compared to intercity passenger travel projections for other modes, such as rail. For freight travel, FHWA produces projections of future tonnage shipped on each mode; however, tonnage is only one measure of freight travel and does not capture important aspects of freight mobility, such as the distances over which freight moves or the value of the freight being moved.

As shown in figure 5, vehicle miles traveled for passenger vehicles on public roads are projected to grow fairly steadily through 2010, by 24.7 percent over the 10-year period from 2000 through 2010, with an average annual increase of 2.2 percent. This is similar to the actual average annual rate of growth from 1991 to 2000, which was 2.5 percent. At the projected rate of growth, vehicle miles traveled would reach 3.2 trillion by 2010. The 20-year annual growth rate forecasts produced by individual states ranged from a low of 0.39 percent for Maine to a high of 3.43 percent for Utah. (See app. II for more detailed information on state forecasts.)

In addition to passenger vehicles, trucks carrying freight contribute to the overall levels of travel on public roads. Vehicle miles traveled by freight trucks are also projected to increase by 2010, but such traffic makes up a relatively small share of total vehicle miles traveled. According to forecasts by FHWA, freight truck vehicle miles are expected to grow by 32.5 percent from 2000 to 2010, but will constitute less than 10 percent of total vehicle miles traveled nationwide in 2010. However, within certain corridors, trucks may account for a more substantial portion of total traffic. The projected average annual growth rate for truck travel is 2.9 percent for 2000 to 2010, compared to an actual average annual growth rate of 3.9 percent from 1991 to 2000. We discuss freight travel in more detail later in this report, after the discussion of passenger travel.

For transit, FTA projects that the growth in passenger miles traveled between 2000 and 2010 will average 1.6 percent annually, for a total growth of 17.2 percent. Actual growth from 1991 through 2000 averaged 2.1 percent annually. (See fig. 6.) At the projected growth rate, annual passenger miles traveled on the nation's transit systems would be approximately 52.9 billion by 2010. The transit forecast is a national weighted average, and the individual forecasts upon which it is based vary widely by metropolitan area. For example, transit forecasts for specific urbanized areas range from an average annual decrease of 0.05 percent in Philadelphia to an average annual increase of 3.56 percent in San Diego.
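The cumulative and average annual growth figures cited above are related by compounding, as the short Python sketch below makes explicit. Because the published annual rates are rounded, recompounding them reproduces the cumulative figures only approximately.

```python
def cumulative_growth(annual_rate, years):
    """Cumulative growth implied by an average annual rate, assuming
    the rate compounds each year."""
    return (1 + annual_rate) ** years - 1

# FTA transit projection: 1.6 percent per year over 2000-2010.
print(f"{cumulative_growth(0.016, 10):.1%}")  # 17.2%

# FHWA passenger-vehicle projection: the rounded 2.2 percent annual
# rate compounds to about 24.3 percent, close to the report's
# 24.7 percent cumulative figure.
print(f"{cumulative_growth(0.022, 10):.1%}")
```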
Both DOT and Amtrak project future increases in intercity passenger travel. Although automobiles dominate intercity travel, FHWA's projections of vehicle miles traveled do not separately report long-distance travel in cars on public roads. After automobiles, airplanes and intercity buses are the next most used modes, and intercity passenger rail is the least used. However, we do not report on air travel, which is outside the scope of this report. Nor do we report on bus travel: although FHWA projected increases in the number of miles traveled by all types of buses, we were unable to obtain specific projections of intercity bus ridership. For intercity passenger rail, Amtrak predicts a cumulative increase in total ridership of 25.9 percent, from 23.5 million passengers in 2001 to 29.6 million passengers in 2010, a contrast with the relatively flat ridership of recent years, which has remained between 20 and 23 million passengers per year (see app. II for further details about Amtrak's projections).

According to FHWA, FTA, and many of our panelists, a number of factors are likely to influence not only the amount of travel that will occur in the future, but also the modes travelers choose. First, the U.S. Census Bureau predicts that the country's population will reach almost 300 million by 2010, which will result in more travelers on all modes. This population growth, and the areas in which it is expected to occur, could have a variety of effects on mode choices. In particular, the population growth that is expected in suburban areas could lead to a larger increase in travel by private vehicles than by transit because suburban areas generally have lower population densities than inner cities, and also have more dispersed travel patterns, making them harder to serve through conventional public transit. Rural areas are also expected to experience high rates of population growth, and persons living there, like suburban residents, are more reliant on private vehicles and are not easily served by conventional public transit. While these demographic trends tend to decrease transit's share of total passenger travel as compared to travel by private vehicle, the overall growth in population is expected to result in absolute increases in the level of travel on transit systems as well as by private vehicle. Another important factor that could affect mode choice is that the population aged 85 and over will increase 30 percent by 2010, according to data from the Census Bureau. The aging of the population might increase the market for demand-responsive transit services and improved road safety features, such as enhanced signage.

Second, DOT officials and our panelists believed that the increasing affluence of the U.S. population would play a key role in future travel, both in overall levels and in the modes travelers choose. They noted that, as income rises, people tend to take more and longer trips, private vehicle ownership tends to increase, and public transit use generally decreases. Third, communication technology could affect local and intercity travel, but the direction and extent of the effect is uncertain. For example, telecommuting and videoconferencing are becoming more common, but are not expected to significantly replace face-to-face meetings unless the technology improves substantially. Finally, changes in the price (or perceived price), condition, and reliability of one modal choice as compared to another are also likely to affect levels of travel and mode choices.
For example, changes in the petroleum market that affect fuel prices, or changes in government policy that affect the cost of driving or transit prices could result in shifts between personal vehicles and transit; however, it is difficult to predict the extent to which these changes would occur. Also, if road congestion increases, there could be a shift to transit or a decrease in overall travel. See appendix III for a more detailed discussion of these factors. Trucks move the majority of freight tonnage and are expected to continue moving the bulk of freight into the future. FHWA’s preliminary forecasts of international and domestic freight tonnage across all surface and maritime modes project that total freight moved will increase 43 percent, from 13.5 billion tons in 1998 to 19.3 billion tons in 2010. According to the forecasts, by 2010, 14.8 billion tons are projected to move by truck, a 47.6-percent increase; 3 billion tons by rail, a 31.8-percent increase; and 1.5 billion tons by water, a 26.6-percent increase, as shown in figure 7. Trucks are expected to remain the dominant mode, in terms of tonnage, because production of the commodities that typically move by truck, such as manufactured goods, is expected to grow faster than the main commodities moved by rail or on water, such as coal and grain. Tonnage is only one measure of freight travel and does not capture important aspects of freight mobility, such as the distances over which freight moves or the value of the freight being moved. Ton-miles measure the amount of freight moved as well as the distance over which it moves, and historically, rail has been the dominant mode in terms of ton-miles for domestic freight. In 1998, the base year of FHWA’s projections, domestic rail ton-miles totaled over 1.4 trillion, while intercity truck ton-miles totaled just over one trillion, and domestic ton-miles on the waterways totaled 672.8 billion. Air is the dominant mode in terms of value per ton according to DOT’s Transportation Statistics Annual Report 2000, at $51,000 per ton (in 1997 dollars). However, in terms of total value, trucks are the dominant mode. According to the Annual Report, trucks moved nearly $5 trillion (in 1997 dollars) in domestic goods, as opposed to $320 billion by rail and less than $100 billion by inland waterway. International freight is an increasingly important aspect of the U.S. economy. For international freight, water is the dominant mode in terms of tonnage. According to a DOT report, more than 95 percent of all overseas products and materials that enter or leave the country move through ports and waterways. More specifically, containers, which generally carry manufactured commodities such as consumer goods and electrical equipment and can be easily transferred to rail or truck, dominate in terms of value, accounting for 55 percent of total imports and exports, while only accounting for 12 percent of foreign tonnage. Containers are the fastest growing segment of the maritime sector. While FHWA predicts that total maritime freight tonnage will grow by 26.6 percent, the Corps of Engineers projects that volumes of freight moving in containers will increase by nearly 70 percent by 2010. In addition, ships designed to carry containers are the fastest growing segment of the maritime shipping fleet and are also increasing in size. 
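The distinction between tonnage and ton-miles drawn above can be made concrete with a small example: a mode that moves far more tons can still trail in ton-miles if its average haul is short. The tonnages and haul lengths in this Python sketch are invented for illustration and are not drawn from FHWA data.

```python
def ton_miles(tons, avg_haul_miles):
    """Ton-miles weight the volume of freight moved by the distance it
    travels, so long hauls count for more than short ones."""
    return tons * avg_haul_miles

# Invented shipments: the first mode moves five times the tonnage but,
# with a much shorter average haul, accumulates fewer ton-miles.
print(ton_miles(tons=10.0, avg_haul_miles=100))  # 1000.0 ton-miles
print(ton_miles(tons=2.0, avg_haul_miles=800))   # 1600.0 ton-miles
```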
Although freight vessels designed to carry bulk freight (e.g., coal, grain, or oil) are the largest sector of the freight vessel fleet, the number of containerships is increasing by 8.8 percent annually, which is double the growth rate of any other type of vessel, according to the Corps of Engineers. Also, most of the overall capacity of the containership fleet is now found in larger containerships, with a capacity of more than 3,000 twenty-foot containers, and ships with capacities of three times that amount are currently on order.

According to reports by the Transportation Research Board and the Bureau of Transportation Statistics, increasing international trade and economic growth are expected to influence volumes of future freight travel. In addition, the increasing value of cargo shipped and changes in policies affecting certain commodities can affect overall levels of freight traffic as well as the choice of mode for that traffic. The North American Free Trade Agreement has contributed to the increases in tonnage of imports by rail (24-percent increase) and by truck (20-percent increase) from Mexico and Canada between 1996 and 2000, while expanding trade with the Pacific Rim has increased maritime traffic at west coast container ports. With increasing affluence, economic growth often results in a greater volume of goods produced and consumed, leading to more freight moved, particularly higher-value cargo. In addition, the increasing value of cargo affects the modes on which that cargo is shipped. High-value cargo, such as electronics and office equipment, tends to be shipped by air or truck, while rail and barges generally carry lower-value bulk items like coal and grains. Changes in environmental regulations and other policies also affect the amount, cost, and mode choice for moving freight. For example, a change in demand for coal due to stricter environmental controls could affect rail and water transportation, the primary modes for shipping coal. See appendix III for a more detailed discussion of the factors that influence freight travel.

To identify key mobility challenges and the strategies for addressing those challenges that are discussed later in this report, we relied upon the results of two panels of surface and maritime transportation experts that we convened in April 2002, as well as reports prepared by federal and other government agencies, academics, and industry groups. According to our expert panelists and other sources, with increasing passenger and freight travel, the surface and maritime transportation systems face a number of challenges that involve ensuring continued mobility while maintaining a balance with other social goals, such as environmental preservation. Ensuring continued mobility involves preventing congestion from overwhelming the transportation system and ensuring access to transportation for certain underserved populations. In particular, more travel can lead to growing congestion at bottlenecks and at peak travel times on public roads, transit systems, freight rail lines, and at freight hubs such as ports and borders where freight is transferred from one mode to another. In addition, settlement patterns and dependence on the automobile limit access to transportation systems for some elderly people and low-income households, and in rural areas where populations are expected to expand. Increasing travel levels can also negatively affect the environment and communities by increasing the levels of air, water, and noise pollution.
Many panelists explained that congestion is generally growing for passenger and freight travel and will continue to increase at localized bottlenecks (places where the capacity of the transportation system is most limited), at peak travel times, and on all surface and maritime transportation modes to some extent. However, panelists pointed out that transportation systems as a whole have excess capacity and that communities may have different views on what constitutes congestion. Residents of small cities and towns may perceive significant congestion on their streets that may be considered insignificant to residents in major metropolitan areas. In addition, because of the relative nature of congestion, its severity is difficult to determine or to measure and while one measure may be appropriate for some situations, it may be inadequate for describing others. For local urban travel, a study by the Texas Transportation Institute showed that the amount of traffic experiencing congestion in peak travel periods doubled from 33 percent in 1982 to 66 percent in 2000 in the 75 metropolitan areas studied. In addition, the average time per day that roads were congested increased over this period, from about 4.5 hours in 1982 to about 7 hours in 2000. Increased road congestion can also affect public bus and other transit systems that operate on roads. Some transit systems are also experiencing increasing rail congestion at peak travel times. For example, the Washington Metropolitan Area Transit Authority’s (WMATA) recent studies on crowding found that rail travel demand has reached and, in some cases, exceeded scheduled capacity—an average of 140 passengers per car—during the peak morning and afternoon hours. Of the more than 200 peak morning rail trips that WMATA observed over a recent 6-month period, on average, 15 percent were considered “uncomfortably crowded” (125 to 149 passengers per car) and 8 percent had “crush loads” (150 or more passengers per car). In addition to local travel, concerns have been raised about how intercity and tourist travel interacts with local traffic in metropolitan areas and in smaller towns and rural areas, and how this interaction will evolve in the future. According to a report sponsored by the World Business Council for Sustainable Development, Mobility 2001, capacity problems for intercity travelers are generally not severe outside of large cities, except in certain heavily traveled corridors, such as the Northeast corridor, which links Washington, D.C., New York, and Boston. However, at the beginning and end of trips, intercity bus and automobile traffic contribute to and suffer from urban congestion. In addition, the study said that intercity travel may constitute a substantial proportion of total traffic passing through smaller towns and rural areas. Also, according to a GAO survey of all states, state officials are increasingly concerned about traffic volumes on interstate highways in rural areas, and high levels of rural congestion are expected in 18 states within 10 years. Congestion is also expected to increase on major freight transportation networks at specific bottlenecks, particularly where intermodal connections occur, and at peak travel times, according to the panelists. They expressed concern regarding interactions between freight and passenger travel and how increases in both types of travel will affect mobility in the future. 
Trucks contribute to congestion in metropolitan areas where they generally move on the same roads and highways as personal vehicles, particularly during peak periods of congestion. In addition, high demand for freight, particularly freight moved on trucks, exists in metropolitan areas where overall congestion tends to be the worst. With international trade an increasing part of the economy and with larger containerships being built, some panelists indicated that more pressure will be placed on the already congested road and rail connections to major U.S. seaports and at the border crossings with Canada and Mexico. For example, according to a DOT report, more than one-half of the ports responding to a 1997 survey of port access issues identified traffic impediments on local truck routes as the major infrastructure problem. According to one panelist from the freight rail industry, there is ample capacity on most of the freight rail network. However, railroads are beginning to experience more severe capacity constraints in particular heavily used corridors, such as the Northeast corridor, and within major metropolitan areas, especially where commuter and intercity passenger rail services share tracks with freight railroads. Capacity constraints at these bottlenecks are expected to worsen in the future. The panelist explained that congestion on some freight rail segments where the tracks are also used for passenger rail service—for which there is growing demand— reduces the ability of freight railroads to expand service on the existing tracks to meet the growing demand for freight movements on those segments. On the inland waterways, according to two panelists from that industry, there is sufficient capacity on most of the inland waterway network, although congestion is increasing at small, aging, and increasingly unreliable locks. According to the Corps of Engineers, the number of hours that locks were unavailable due to lock failures increased in recent years, from about 35,000 hours in 1991 to 55,000 hours in 1999, occurring primarily on the upper Mississippi and Illinois rivers. In addition, according to a Corps of Engineers analysis of congestion on the inland waterways, with expected growth in freight travel, 15 locks would exceed 80 percent of their capacity by 2020, as compared to 4 that had reached that level in 1999. According to our expert panelists, while increasing passenger and freight travel contribute to increasing congestion at bottlenecks and at peak travel times, other systemic factors contribute to congestion, including barriers to building enough capacity to accommodate growing levels of travel, challenges to effectively managing and operating transportation systems, and barriers in effectively managing how, and the extent to which, transportation systems are used. At bottlenecks and at peak travel times, there is insufficient capacity to accommodate the levels of traffic attempting to use the infrastructure. One reason for the insufficient capacity is that transportation infrastructure, which is generally publicly provided (with the major exception of freight railroads), can take a long time to plan and build, and it may not be possible to build fast enough to keep pace with increasing and shifting travel patterns. In addition, constructing new capacity is often costly and can conflict with other social goals such as environmental preservation and community maintenance. 
As a result, approval of projects to build new capacity, which requires environmental impact statements and community outreach, generally takes a long time, if it is obtained at all. In addition, a number of panelists indicated that funding and planning rigidities in the public institutions responsible for providing transportation infrastructure tend to promote one mode of transportation, rather than a set of balanced transportation choices. Focus on a single mode can result in difficulties dealing effectively with congestion. For example, as suburban expressways enable community developments to grow and move farther out from city centers, jobs and goods follow these developments. This results in increasing passenger and freight travel on the expressways, and a shifting of traffic flows that may not easily be accommodated by existing transportation choices. One panelist indicated that suburban expressways are among the least reliable in terms of travel times because, if congestion occurs, there are fewer feasible alternative routes or modes of transportation. In addition, some bottlenecks occur where modes connect, because funding is generally mode-specific, and congestion at these intermodal connections is not easily addressed. According to FHWA, public sector funding programs are generally focused on a primary mode of transportation, such as highways, or a primary purpose, such as improving air quality. This means that intermodal projects may require a broader range of funding than might be available under a single program.

Panelists also noted that the types of congestion problems that are expected to worsen in the future involve interactions between long-distance and local traffic and between passengers and freight, and existing institutions may not have the capacity or the authority to address them. For example, some local bottlenecks may hinder traffic that has regional or national significance, such as national freight flows from major coastal ports, or can affect the economies and traffic in more than one state. Current state and local planning organizations may have difficulty considering all the costs and benefits related to national or international traffic flows that affect other jurisdictions as well as their own.

The concept of capacity is broader than just the physical characteristics of the transportation network (e.g., the number of lane-miles of road). The capacity of transportation systems is also determined by how well they are managed and operated (particularly publicly owned and operated systems), and how the use of those systems is managed. Many factors related to the management and operation of transportation systems can contribute to increasing congestion. Many panelists said that congestion on highways was in part due to poor management of traffic flows on the connectors between highways and poor management in clearing roads that are blocked due to accidents, inclement weather, or construction. For example, in the 75 metropolitan areas studied by the Texas Transportation Institute, 54 percent of annual vehicle delays in 2000 were due to incidents such as breakdowns or crashes. In addition, the Oak Ridge National Laboratory reported that, nationwide, significant delays are caused by work zones on highways; poorly timed traffic signals; and snow, ice, and fog.
In addition, according to a number of panelists, congestion on transportation systems is also in part due to inefficient pricing of the infrastructure because users—whether they are drivers on a highway or barge operators moving through a lock—do not pay the full costs they impose on the system and on other users for their use of the system. They further argued that if travelers and freight carriers had to pay a higher cost for using transportation systems during peak periods to reflect the full costs they impose, they would have an incentive to avoid or reschedule some trips and to load vehicles more fully, resulting in less congestion.

Congestion affects travel times and the reliability of transportation systems. As discussed earlier in this report, the Texas Transportation Institute found that 66 percent of peak period travel on roadways was congested in 2000, compared to 33 percent in 1982 in the 75 metropolitan areas studied. According to the study, this means that two of every three vehicles experience congestion in their morning or evening commute. In the aggregate, congestion results in thousands of hours of delay every day, which can translate into costs such as lost productivity and increased fuel consumption. In addition, a decrease in travel reliability imposes costs on travelers, who may arrive late to work or other appointments, and raises the cost of moving goods, resulting in higher prices for consumers.

Some panelists noted that congestion, in some sense, reflects full use of transportation infrastructure, and is therefore not a problem. In addition, they explained that travelers adjust to congestion and adapt their travel routes and times, as well as housing and work choices, to avoid congestion. For example, according to the Transportation Statistics Annual Report 2000, median commute times increased about 2 minutes between 1985 and 1999, despite increases in the percentage of people driving to work alone and the average commuting distance. For freight travel, one panelist made a similar argument, citing that transportation costs related to managing business operations have decreased as a percentage of gross national product, indicating that producers and manufacturers adjust to transportation supply by switching modes or altering delivery schedules to avoid delays and resulting cost increases. However, the Mobility 2001 report describes these adaptations by individuals and businesses as economic inefficiencies that can be very costly. According to the report, increasing congestion can cause avoidance of a substantial number of trips, resulting in a corresponding loss of the benefits of those trips. In addition to negative economic effects, travelers' adaptation to congested conditions can also have a number of negative social effects on other people. For example, according to researchers from the Texas Transportation Institute, traffic cutting through neighborhoods to avoid congestion can cause community disruptions, and "road rage" can be partly attributed to increasing congestion.

The FHWA and FTA's 1999 Conditions and Performance report states that significant accessibility barriers persist for some elderly people and low-income households. In addition, several panelists stated that rural populations also face accessibility difficulties.
According to the Conditions and Performance report, the elderly have different mobility challenges than other populations because they are less likely to have drivers’ licenses, have more serious health problems, and may require special services and facilities. According to 1995 data, 45 percent of women and 16 percent of men over age 75 did not have drivers’ licenses, which may limit their ability to travel by car. Many of the elderly also may have difficulty using public transportation due to physical ailments. People who cannot drive themselves tend to rely on family, other caregivers, or friends to drive them, or find alternative means of transportation. As a result, according to the 1999 Conditions and Performance report and a 1998 report about mobility for older drivers, they experience increased waiting times, uncertainty, and inconvenience, and they are required to do more advance trip planning. These factors can lead to fewer trips taken for necessary business and for recreation, as well as restrictions on times and places that health care can be obtained. Access to more flexible, demand-responsive forms of transit could enhance the mobility of the elderly, particularly in rural areas, which are difficult to serve through transit systems; however, some barriers to providing these types of services exist. For example, according to one of our panelists, some paratransit services are not permitted to carry able-bodied people, even if those people are on the route and are willing to pay for the service. As the elderly population increases over the next 10 years, issues pertaining to access are expected to become more prominent in society. Lower income levels can also be a significant barrier to transportation access. The cost of purchasing, insuring, and maintaining a car is prohibitive to some households, and 26 percent of low-income households do not own a car, compared with 4 percent of other households, according to the 1999 Conditions and Performance report. Among all low-income households, about 8 percent of trips are made in cars that are owned by others as compared to 1 percent for other income groups. Furthermore, the same uncertainties and inconveniences apply to this group as to the elderly regarding relying on others for transportation. Transportation access is important for employment opportunities to help increase income, yet this access is not always available. This is because growth in employment opportunities tends to occur in the suburbs and outlying areas, while many low-income populations are concentrated in the inner cities or in rural areas. In case studies of access to jobs for low-income populations, FTA researchers found that transportation barriers to job access included gaps in transit service, lack of knowledge of where transit services are provided, and high transportation costs resulting from multiple transfers and long distances traveled. Another problem they noted was the difficulty in coordinating certain types of work shifts with the availability of public transportation service. Without sufficient access to jobs, families face more obstacles to achieving the goal of independence from government assistance. Limited transportation access can also reduce opportunities for affordable housing and restrict choices for shopping and other services. Rural populations, which according to the 2000 Census grew by 10 percent over the last 10 years, also face access problems. 
Access to some form of transportation is necessary to connect rural populations to jobs and other amenities in city centers or, increasingly, in the suburbs. The Mobility 2001 report states that automobiles offer greater flexibility in schedule and choice of destinations than other modes of transportation, and often also provide shorter travel times with lower out-of-pocket costs. The report also notes that conventional transit systems are best equipped to serve high levels of travel demand that is concentrated in a relatively limited area or along well-defined corridors, such as inner cities and corridors between those areas and suburbs. Trips by rural residents tend to be long due to low population densities and the relative isolation of small communities. Therefore, transportation can be a challenge to provide in rural areas, especially for persons without access to private automobiles. A report prepared for the FTA in 2001 found that 1 in 13 rural residents lives in a household without a personal vehicle. In addition, the elderly made 31 percent of all rural transit trips in 2000 and persons with disabilities made 23 percent. However, according to a report by the Coordinating Council on Access and Mobility, while almost 60 percent of all nonmetropolitan counties had some public transportation services in 2000, many of these operations were small and offered services to limited geographic areas during limited times. While ISTEA and TEA-21 provided funds aimed at mitigating adverse effects of transportation, concerns persist about such effects on the environment and communities. As a result of the negative consequences of transportation, tradeoffs must be made between facilitating increased mobility and giving due regard to environmental and other social goals. For example, transportation vehicles are major sources of local, urban, and regional air pollution because they depend on fossil fuels to operate. Emissions from vehicles include sulfur dioxide, lead, carbon monoxide, volatile organic compounds, particulate matter, and nitrogen oxides. In addition, emissions of greenhouse gases such as carbon dioxide, methane, and nitrous oxide are increasing, and greenhouse gases have been linked to reductions in atmospheric ozone and to climate change. According to Mobility 2001, improved technologies can help reduce per-vehicle emissions, but the increasing number of vehicles traveling and the total miles traveled may offset these gains. In addition, congested conditions on highways tend to exacerbate the problem because extra fuel is consumed due to increased acceleration, deceleration, and idling. Vehicle emissions in congested areas can trigger respiratory and other illnesses, and runoff from impervious surfaces can carry lawn chemicals and other pollutants into lakes, streams, and rivers, thus threatening aquatic environments. Freight transportation also has significant environmental effects. Trucks are significant contributors to air pollution. According to the American Trucking Association, trucks were responsible for 18.5 percent of nitrogen oxide emissions and 27.5 percent of particulate emissions from mobile sources in the United States. The Mobility 2001 report states that freight trains also contribute to emissions of hydrocarbons, carbon monoxide, and nitrogen oxides, although generally at levels considerably lower than trucks.
In addition, while large shipping vessels are more energy efficient than trucks or trains, they are also major sources of nitrogen oxide, sulfur dioxide, and diesel particulate emissions. According to the International Maritime Organization, ocean shipping is responsible for 22 percent of the wastes dumped into the sea on an annual basis. Barges moving freight on the inland waterway system are among the most energy efficient forms of freight transportation, contributing relatively lower amounts of noxious emissions compared with trucks and freight trains, according to the Corps of Engineers. However, the dredging and damming required to make rivers and harbors navigable can cause significant disruption to ecosystems. Noise pollution is another problem exacerbated by increasing levels of transportation. While FHWA, FTA, and many cities have established criteria for different land uses close to highways and rail lines to protect against physically damaging noise levels, average noise levels caused by road traffic in some areas can still have adverse consequences for people’s hearing. In addition, several studies have found that residential property values decrease as average noise levels rise above a certain threshold. Freight also contributes to noise pollution. According to Mobility 2001, shipping is the largest source of low-frequency, underwater noise, which may have adverse effects on marine life, although these effects are not yet fully understood. These noise levels are particularly serious on highly trafficked shipping routes. In addition, dredging also contributes to noise pollution. Growing awareness of the environmental and social costs of transportation projects is making it more difficult to pursue major transportation improvements. According to a number of panelists, the difficulty in quantifying and measuring the costs and benefits of increased mobility also hinders the ability of transportation planners to make a strong case to local decisionmakers for mobility improvements. In addition, transportation planning and funding are mode-specific and oriented toward passenger travel, which hinders transportation planners’ ability to recognize systemwide and multimodal strategies for addressing mobility needs and other social concerns. The panelists presented numerous approaches for addressing the types of challenges discussed throughout this report, but they emphasized that no single strategy would be sufficient. From these discussions and our other research, we have identified three key strategies that may aid transportation decisionmakers at all levels of government in addressing mobility challenges and the institutional barriers that contribute to them. These strategies include the following:

1. Focus on the entire surface and maritime transportation system rather than on specific modes or types of travel to achieve desired mobility outcomes. A systemwide approach to transportation planning and funding, as opposed to a focus on a single mode or type of travel, could improve focus on outcomes related to customer or community needs.

2. Use a full range of tools to achieve those desired outcomes. Controlling congestion and improving access will require a strategic mix of construction, corrective and preventive maintenance, rehabilitation, operations and system management, and managing system use through pricing and other techniques.

3. Provide more options for financing mobility improvements and consider additional sources of revenue.
Targeting financing to transportation projects that will achieve desired mobility outcomes might require more options for raising and distributing funds for surface and maritime transportation. However, using revenue sources that are not directly tied to the use of transportation systems could allow decisionmakers to bypass transportation planning requirements, which, in turn, could limit the ability of transportation agencies to focus on and achieve desired outcomes. Some panelists said that mobility should be viewed on a systemwide basis across all modes and types of travel. Addressing the types of mobility challenges discussed earlier in this report can require a scope beyond a local jurisdiction or a state line and across more than one mode or type of travel. For example, congestion challenges often occur where modes connect or should connect—such as ports or freight hubs where freight is transferred from one mode to another, or airports that passengers need to access by car, bus, or rail. These connections require coordination of more than one mode of transportation and cooperation among multiple transportation providers and planners, such as port authorities, metropolitan planning organizations (MPO), and private freight railroads. Some panelists therefore advocated shifting the focus of government transportation agencies at the federal, state, and local levels to consider all modes and types of travel in addressing mobility challenges—as opposed to focusing on a specific mode or type of travel in planning and implementing mobility improvements. Some panelists said that current transportation planning institutions, such as state transportation departments, MPOs, or Corps of Engineers regional offices, may not have sufficient expertise or, in some cases, authority to effectively identify and implement mobility improvements across modes or types of travel. They suggested that transportation planning by all entities focus more closely on regional issues, and they highlighted the importance of cooperation and coordination among modal agencies at the federal, state, and local levels; between public and private transportation providers; and between transportation planning organizations and other government and community agencies to address transportation issues. For example, several panelists said that the Alameda Corridor in Los Angeles is a good example of successful cooperation and coordination among agencies. This corridor is designed to improve freight mobility for cargo coming into the ports of Los Angeles and Long Beach and out to the rest of the country. Planning, financing, and building this corridor required cooperation among private railroads, the local port authorities, the cities of Los Angeles and Long Beach, community groups along the entire corridor, the state of California, and the federal government. Several panelists said that a greater understanding of the full life-cycle costs and benefits of various mobility improvements is needed to take a more systemwide approach to transportation planning and funding. The panelists said the cost-benefit frameworks that transportation agencies currently use to evaluate transportation projects could be more comprehensive in considering a wider array of social and economic costs and benefits, recognizing transportation systems’ links to each other and to other social and financial systems.
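As a rough sketch of what a more comprehensive comparison might look like, the following calculation discounts multimodal costs and benefits to a single net present value. Every category name and dollar figure is a hypothetical assumption for illustration, not an evaluation of any actual project from this report.

    # Sketch of a broader benefit-cost comparison across project types.
    def npv(annual_net_benefit, discount_rate=0.07, years=20):
        # Present value of a constant annual net-benefit stream.
        return sum(annual_net_benefit / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    # Hypothetical annual effects, in millions of dollars (negative = cost).
    highway_only = {"travel time saved": 40, "emissions": -6, "noise": -2,
                    "freight reliability": 5}
    intermodal_package = {"travel time saved": 30, "emissions": -2, "noise": -1,
                          "freight reliability": 18}

    for name, effects in (("highway only", highway_only),
                          ("intermodal package", intermodal_package)):
        annual = sum(effects.values())
        print(f"{name}: ${annual}M per year, 20-year NPV ${npv(annual):,.0f}M")

In this invented example, counting freight reliability and environmental effects alongside travel time is what tips the comparison toward the intermodal package; a framework that counted travel time alone would rank the projects the other way.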
Many panelists advocated a systemwide, rather than mode-specific, approach to transportation planning and funding that could also improve focus on outcomes that users and communities desire from the transportation system. For example, one panelist described a performance-oriented funding system, in which the federal government would first define certain national interests of the transportation system—such as maintaining the entire interstate highway system or identifying freight corridors of importance to the national economy—and then set national performance standards for those systems that states and localities must meet. Federal funds would be distributed to those entities that are addressing national interests and meeting the established standards. Any federal funds remaining after meeting the performance standards could then be used for whatever transportation purpose the state or locality deems most appropriate to achieve state or local mobility goals. Another panelist expanded the notion of setting national performance standards to include a recognition of the interactions between transportation goals and local economic development and quality of life goals, and to allow localities to modify national performance goals given local conditions. For example, a national performance standard, such as average speeds of 45 miles per hour for highways, might be unattainable for some locations given local conditions, and might run contrary to other local goals related to economic development. Some panelists described several other types of systems that could focus on outcomes. For example, one panelist suggested a system in which federal support would reward those states or localities that apply federal money to gain efficiencies in their transportation systems, or that tie transportation projects to land use and other local policies to achieve community and environmental goals, as well as mobility goals. Another panelist described a system in which different federal matching criteria for different types of expenditures might reflect federal priorities. For example, if infrastructure preservation became a higher national priority than building new capacity, matching requirements could be changed to a 50 percent federal share for building new physical capacity and an 80 percent federal share for preservation. Other panelists suggested that requiring state and local governments to pay for a larger share of transportation projects might provide them with incentives to invest in more cost-effective projects. If cost savings resulted, these entities might have more funds available to address other mobility challenges. Some of the panelists suggested reducing the federal match for projects in all modes to give states and localities more fiscal responsibility for the projects they are planning. Other panelists also suggested that federal matching requirements should be equal for all modes to avoid creating incentives to pursue projects in one mode that might be less effective than projects in other modes.
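Simple arithmetic shows how matching shares change state and local cost exposure. The $100 million project cost below is an assumption for illustration; the 50 and 80 percent shares echo the panelist's example above.

    # Hypothetical illustration of federal matching shares and local exposure.
    def state_local_share(project_cost_millions, federal_share):
        # The non-federal partners pay whatever the federal match does not cover.
        return project_cost_millions * (1 - federal_share)

    cost = 100  # $ millions, assumed
    print(f"New capacity at a 50% federal share: state/local pays "
          f"${state_local_share(cost, 0.50):.0f}M")
    print(f"Preservation at an 80% federal share: state/local pays "
          f"${state_local_share(cost, 0.80):.0f}M")

At the lower match, states and localities bear $50 million rather than $20 million of an identical project cost, 2.5 times as much, which is the incentive effect the panelist described for weighing preservation against expansion.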
Many panelists emphasized that using a range of tools to address mobility challenges may help control congestion and improve access. This involves a strategic mix of construction, corrective and preventive maintenance, rehabilitation, operations and system management, and managing system use through pricing or other techniques. Many of the panelists said that no one type of technique would be sufficient to address mobility challenges. Although these techniques are currently in use, panelists indicated that planners should more consistently consider a full range of techniques. Building additional infrastructure is perhaps the most familiar technique for addressing congestion and improving access to surface and maritime transportation. Several panelists expressed the view that although there is a lot of unused capacity in the transportation system, certain bottlenecks and key corridors require new infrastructure. However, building new infrastructure cannot completely eliminate congestion. For example, according to the Texas Transportation Institute, it would require at least twice the level of current road expansion to keep traffic congestion levels constant, if that were the only strategy pursued. In addition, while adding lanes may be a useful tool to deal with highway congestion for states with relatively low population densities, this option may not be as useful or possible for states with relatively high population densities—particularly in urban areas, where the ability to add lanes is limited by a shortage of available space. Furthermore, investments in additional transportation capacity can stimulate increases in travel demand, sometimes leading to congestion and slower travel speeds on the new or improved infrastructure. Other panelists said that an emphasis on enhancing capacity from existing infrastructure through increased corrective and preventive maintenance and rehabilitation is an important supplement to, and sometimes a substitute for, building new infrastructure. In 1999, the President’s Commission to Study Capital Budgeting reported that, because infrastructure maintenance requires more rapid budgetary spending than new construction and has lower visibility, it is less likely to be funded at a sufficient level. However, one panelist said that for public roads, every dollar spent on preventive maintenance while roads are still in good condition saves the $4 to $5 that would otherwise be spent to maintain roads in fair condition, or the $10 needed to maintain roads once they are in poor condition. Maintaining and rehabilitating transportation systems can improve the speed and reliability of passenger and freight travel, thereby optimizing capital investments. Better management and operation of existing surface and maritime transportation infrastructure is another technique for enhancing mobility advocated by some panelists. Improving management and operations may allow the existing transportation system to accommodate additional travel without having to add new infrastructure. For example, the Texas Transportation Institute reported that coordinating traffic signal timing with changing traffic conditions could improve flow on congested roadways. In addition, according to an FHWA survey, better management of work zones—which includes accelerating construction activities to minimize their effects on the public, coordinating planned and ongoing construction activities, and using more durable construction materials—can reduce traffic delays caused by work zones and improve traveler satisfaction. Also, according to one panelist, automating the operation of locks and dams on the inland waterways could reduce congestion at these bottlenecks.
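Returning to the maintenance economics cited above, a stylized lifecycle comparison shows why the panelist's dollar figures favor early treatment. The treatment schedule (preventive work every 5 years versus a catch-up rehabilitation at year 20) and the cost units are our assumptions for illustration, not the panelist's data.

    # Stylized lifecycle-cost comparison for a road segment (assumed schedule).
    def lifecycle_cost(strategy, horizon=20):
        total = 0
        for year in range(1, horizon + 1):
            if strategy == "preventive" and year % 5 == 0:
                total += 1                        # treat while still in good condition
            elif strategy == "deferred" and year == horizon:
                total += 10 * (horizon // 5)      # poor-condition cost for each skipped treatment
        return total

    print("preventive:", lifecycle_cost("preventive"))  # 4 cost units over 20 years
    print("deferred:  ", lifecycle_cost("deferred"))    # 40 cost units over 20 years

Under these assumptions the deferred strategy costs ten times as much, matching the panelist's poor-condition multiplier, even before counting the user delay that rehabilitation work itself imposes.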
Another panelist, in an article that he authored, noted that shifting the focus of transportation planning from building capital facilities to an “operations mindset” will require a cultural shift in many transportation institutions, particularly in the public sector, so that the organizational structure, hierarchy, and rewards and incentives are all focused on improving transportation management and operations. He also commented on the need to improve performance measures related to operations and management so that both the quality and the reliability of transportation services are measured. Several panelists suggested that contracting out a greater portion of operations and maintenance activities could allow public transportation agencies to focus their attention on improving overall management and developing policies to address mobility challenges. This practice could involve outsourcing operations and maintenance to private entities through competitive bidding, as is currently done for roads in the United Kingdom. In addition, by relieving public agencies of these functions, contracting could reduce the cost of operating transportation infrastructure and improve the level of service for each dollar invested in publicly owned transportation systems, according to one panelist. Developing comprehensive strategies for reducing congestion caused by incidents is another way to improve the management and operation of surface and maritime transportation modes. According to the Texas Transportation Institute, incidents such as traffic accidents and breakdowns cause significant delays on roadways. One panelist said that some local jurisdictions are developing common protocols for handling incidents that affect more than one mode and transportation agency, such as state transportation departments and state and local law enforcement, resulting in improved communications and coordination among police, firefighters, medical personnel, and operators of transportation systems. Examples of improvements to incident management include employing roving crews to quickly move accidents and other impediments off roads and rails and implementing technological improvements that can help barges on the inland waterways navigate locks in inclement weather, thereby reducing delays on that system. Several panelists also suggested that increasing public sector investment in technologies—known as Intelligent Transportation Systems (ITS)—that are designed to enhance the safety, efficiency, and effectiveness of the transportation network can serve as a way of increasing capacity and mobility without making major capital investments. DOT’s ITS program has two major areas of emphasis: (1) deploying and integrating intelligent infrastructure and (2) testing and evaluating intelligent vehicles. ITS includes technologies that improve traffic flow by adjusting signals, facilitating traffic flow at toll plazas, alerting emergency management services to the locations of crashes, increasing the efficiency of transit fare payment systems, and other actions. Appendix IV describes the different systems that are part of DOT’s ITS program. Other technological improvements suggested by panelists included increasing the information available to users of the transportation system to help people avoid congested areas and to improve customer satisfaction with the system.
For example, up-to-the-minute traffic updates posted on electronic road signs or over the Internet help give drivers the information necessary to make choices about when and where to travel. Panelists suggested that the federal government could play a key role in facilitating the development and sharing of such innovations through training programs and research centers, such as the National Cooperative Highway Research Program, the Transit Cooperative Research Program, and possible similar programs for waterborne transportation. However, panelists cautioned that the federal government might need to deal with some barriers to investing in technology development and implementation. One panelist said that there are few incentives for agencies to take risks on new technologies. If an agency improves its efficiency, it may receive reduced funding rather than being able to reinvest the savings. Finally, another approach to reducing congestion without making major capital investments is to use demand management techniques to reduce the number of vehicles traveling at the most congested times and on the most congested routes. For public roads, demand management generally means reducing the number of cars traveling on particularly congested routes toward downtown during the morning commuting period and away from downtown during the late afternoon commuting period. One panelist, in a book that he authored, said that “the most effective means of reducing peak-hour congestion would be to persuade solo drivers to share vehicles.” One type of demand management for travel on public roads is to make greater use of pricing incentives. In particular, many economists have proposed congestion pricing, which involves charging surcharges or tolls to drivers who choose to travel during peak periods when their use of the roads increases congestion. Economists generally believe that such surcharges or tolls enhance economic efficiency by making drivers take into account the external costs they impose on others in deciding when and where to drive. These costs include congestion, as well as pollution and other external effects. The goal of congestion pricing would be to charge a toll for travel during congested periods that would make the cost (including the toll) that a driver pays for such a trip equal or close to the total cost of that trip, including external costs. These surcharges could help reduce congestion by providing incentives for travelers to share rides, use transit, travel at less congested (generally off-peak) times and on less congested routes, or make other adjustments—and at the same time, generate more revenues that can be targeted to alleviating congestion in those specific corridors. According to a report issued by the Transportation Research Board, technologies that are currently used at some toll facilities to automatically charge users could also be used to electronically collect congestion surcharges without establishing additional toll booths that would cause delays. Peak-period pricing also has applicability for other modes of transportation. Amtrak and some transit systems use peak-period pricing, which gives travelers incentives to make their trips at less congested times.
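A minimal sketch of the marginal-cost tolling logic economists describe is shown below: the efficient toll equals the delay cost a marginal driver imposes on everyone else. The volume-delay curve (a standard BPR form), the capacity, the free-flow time, and the value of time are all assumptions for illustration, not parameters from any study cited in this report.

    # Sketch of marginal-cost congestion tolling (all parameters assumed).
    CAPACITY = 2000          # vehicles per hour, assumed
    FREE_FLOW_MIN = 20.0     # free-flow travel time in minutes, assumed
    VALUE_OF_TIME = 0.25     # dollars per minute per vehicle, assumed

    def travel_time(volume):
        # BPR-style volume-delay function: delay rises steeply near capacity.
        return FREE_FLOW_MIN * (1 + 0.15 * (volume / CAPACITY) ** 4)

    def marginal_external_cost(volume, dv=1.0):
        # Extra delay cost the next vehicle imposes on all vehicles already present.
        extra_minutes = travel_time(volume + dv) - travel_time(volume)
        return volume * extra_minutes * VALUE_OF_TIME

    for v in (1000, 2000, 2400):  # off-peak, at capacity, peak overload
        print(f"volume {v}: efficient toll of about ${marginal_external_cost(v):.2f}")

Under these assumptions the efficient toll is a few cents off peak but several dollars at and above capacity, which is why congestion pricing concentrates its charges on peak-period travel.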
In addition to pricing incentives, other demand management techniques that encourage ride-sharing can be useful in reducing congestion. Ride-sharing can be encouraged by establishing carpool and vanpool staging areas, providing free or preferred parking for carpools and vanpools, subsidizing transit fares, and designating certain highway lanes as high occupancy vehicle (HOV) lanes that can only be used by vehicles with a specified number of people in them (two or more). HOV lanes can provide an incentive for sharing rides because they reduce the travel time for a group traveling together relative to the time required to travel alone. This incentive is likely to be particularly strong when the regular lanes are heavily congested. Several panelists also recommended use of high occupancy toll (HOT) lanes, which combine pricing techniques with the HOV concept. Experiments with HOT lanes, which allow lower occupancy vehicles or solo drivers to pay a fee to use HOV lanes during peak traffic periods, are currently taking place in California. HOT lanes can provide motorists with a choice: if they are in a hurry, they may elect to pay to have less delay and an improved level of service compared to the regular lanes. When HOT lanes run parallel to regular lanes, congestion in the regular lanes may be reduced more than would be achieved by HOV lanes alone. Demand management techniques on roads, particularly those involving pricing, often provoke strong political opposition. Several panelists said that instituting charges to use roads that have been available “free” is particularly unpopular because many travelers believe that they have already paid for the roads through gasoline and other taxes and should not have to pay “twice.” Other concerns about congestion pricing include equity issues because of the potentially regressive nature of these charges (i.e., the surcharges constitute a larger portion of the earnings of lower-income households and therefore impose a greater financial burden on them). In addition, some people find the concept of restricting lanes or roads to people who pay to use them to be elitist because that approach allows people who can afford to pay the tolls to avoid congestion that others must endure. Several of the panelists suggested that tolls might become more acceptable to the public if they were applied to new roads or lanes as demonstration projects so that the tolls’ effectiveness in reducing congestion and increasing commuter choices could be evaluated. Several panelists indicated that targeting the financing of transportation to achieving desired mobility outcomes, and addressing those segments of transportation systems that are most congested, would require more options for financing surface and maritime transportation projects than are currently available, and might also require more sources of revenue in the future. According to many panelists, the current system of financing surface and maritime transportation projects limits options for addressing mobility challenges. For example, several panelists said that separate funding for each mode at the federal, state, and local levels can make it difficult to consider possible efficient and effective ways of enhancing mobility, and providing more flexibility in funding across modes could help address this limitation. In addition, some panelists argued that “earmarking,” or designation by the Congress of federal funds for particular transportation projects, bypasses traditional planning processes used to identify the highest priority projects, thus potentially limiting transportation agencies’ options for addressing the most severe mobility challenges.
According to one panelist, bypassing transportation planning processes can also result in logical connections or interconnections between projects being overlooked. Several panelists acknowledged that the public sector could expand its financial support for alternative financing mechanisms to access new sources of capital and stimulate additional investment in surface and maritime transportation infrastructure. These mechanisms include both newly emerging and existing financing techniques, such as providing credit assistance to state and local governments for capital projects and using tax policy to provide incentives to the private sector for investing in surface and maritime transportation infrastructure (see app. V for a description of alternative financing methods). The panelists emphasized, however, that these mechanisms currently provide only a small portion of the total funding that is needed for capital investment and are not, by themselves, a major strategy for addressing mobility challenges. Furthermore, they cautioned that some of these mechanisms, such as Grant Anticipation Revenue Vehicles, could make it more difficult for state and local agencies to address future transportation problems, because the agencies would rely on future federal revenues to repay the bonds. Many panelists stated that a possible future shortage of revenues presents a fundamental limitation to addressing mobility challenges. Some panelists said that, because of the increasing use of alternative fuels, revenues from the gas tax are expected to decrease in the future, possibly hindering the public sector’s ability to finance future transportation projects. In addition, one panelist explained that MPOs are required to produce financially constrained long-range plans, and that in the panelist’s organization, projected future revenues do not cover the rising costs of planned transportation projects. One method of raising revenue is for counties and other regional authorities to impose sales taxes to fund transportation projects. A number of counties have already passed such taxes, and more are being considered nationwide. However, several panelists expressed concerns that this method might not be the best option for addressing mobility challenges. For example, one panelist stated that moving away from transportation user charges to sales taxes that are not directly tied to the use of transportation systems weakens the ties between transportation planning and finance. Counties and other authorities may be able to bypass traditional state and metropolitan planning processes because these sales taxes provide them with their own sources of funding for transportation. A number of panelists suggested increasing current federal fuel taxes to raise additional revenue for surface transportation projects. In contrast, other panelists argued that the federal gas tax could be reduced. They said that, under the current system, states are receiving most of the revenue raised by the federal gas tax within their state lines, and therefore there is little need for the federal government to be involved in collecting this revenue, except for projects that affect more than one state or are of national significance. However, other panelists said that this approach might lead to a decrease in gas tax revenues available for transportation, because states may have incentives to use this revenue for purposes other than transportation or may not collect as much as is currently collected.
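The panelists' caution about Grant Anticipation Revenue Vehicles can be illustrated with simple debt-service arithmetic. All of the figures below (the grant level, bond size, interest rate, and term) are assumptions for illustration, not data about any actual bond program.

    # Hypothetical grant anticipation borrowing: debt service paid from future
    # federal grants reduces the grant money left for new projects.
    annual_federal_grant = 50.0   # $ millions per year, assumed
    bond_proceeds = 200.0         # $ millions borrowed today, assumed
    rate, years = 0.05, 15        # interest rate and term, assumed

    # Level annual payment on a fully amortizing bond.
    debt_service = bond_proceeds * rate / (1 - (1 + rate) ** -years)
    print(f"annual debt service: ${debt_service:.1f}M")
    print(f"grant funds left for future projects: "
          f"${annual_federal_grant - debt_service:.1f}M per year for {years} years")

Under these assumptions, roughly $19 million of a $50 million annual grant would be committed to bondholders for 15 years, which is the loss of future flexibility the panelists cautioned about.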
Given that freight tonnage moved across all modes is expected to increase by 43 percent during the period from 1998 to 2010, new or increased taxes or other fees imposed on the freight sector could also help fund mobility improvements. For example, one panelist from the rail industry suggested modeling more projects on the Alameda Corridor in Los Angeles, where private rail freight carriers pay a fee to use infrastructure built with public financing. Another way to raise revenue for funding mobility improvements would be to increase taxes on freight trucking. According to FHWA, heavy trucks (weighing over 55,000 pounds) cause a disproportionate amount of damage to the nation’s highways and have not paid a corresponding share of the cost of the pavement damage they cause. This situation will only be compounded by the large expected increases in freight tonnage moved by truck over the next 10 years. The Joint Committee on Taxation estimated that raising the ceiling on the tax paid by heavy vehicles to $1,900 could generate about $100 million per year. Another revenue-raising strategy is to dedicate more of the revenues from taxes on alternative fuels, such as gasohol, to the Highway Trust Fund rather than to the U.S. Treasury’s General Fund, as currently happens. Finally, panelists also said that pricing strategies, mentioned earlier in this report as a tool to reduce congestion, are also possible additional sources of revenue for transportation purposes. We provided DOT, the Corps of Engineers, and Amtrak with draft copies of this report for their review and comment. We obtained oral comments from officials at DOT and the Corps of Engineers. These officials generally agreed with the report and provided technical comments that we incorporated as appropriate. In addition, officials from the Federal Railroad Administration within DOT commented that the report was timely and would be vital to the dialogue that occurs as the Congress considers the reauthorization of surface transportation legislation. Amtrak had no comments on the report. Our work was primarily performed at the headquarters of DOT and the Corps of Engineers (see app. VI for a detailed description of our scope and methodology). We conducted our work from September 2001 through August 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the congressional committees with responsibilities for surface and maritime transportation programs; DOT officials, including the Secretary of Transportation, the administrators of the Federal Highway Administration, Federal Railroad Administration, Federal Transit Administration, and Maritime Administration, the Director of the Bureau of Transportation Statistics, and the Commandant of the U.S. Coast Guard; the Commander and Chief of Engineers, U.S. Army Corps of Engineers; the President of Amtrak; and the Director of the Office of Management and Budget. We will make copies available to others on request. This report will also be available on our home page at no charge at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or Kate Siggerud at [email protected]. Alternatively, we can be reached at (202) 512-2834. GAO contacts and acknowledgments are listed in appendix VII.
Comparing the proportion of public spending devoted to various purposes across modes is difficult due to differences in the level of public sector involvement and in the definition of what constitutes capital versus operations and maintenance expenses in each mode. For example, the operation of public roads is essentially a function of private citizens operating their own vehicles, while operations for mass transit include spending for bus drivers and subway operators, among other items. In addition, maintenance expenditures can differ greatly from one mode to another in their definition and scope. For example, maintenance for a public road involves activities such as patching, filling potholes, and fixing signage, while maintenance for channels and harbors involves routine dredging of built-up sediment and disposal or storage of the dredged material. Given these significant differences in scope, different modes classify and report on maintenance expenses in different ways. For public roads, capital expenditures (which include new construction, resurfacing, rehabilitation, restoration, and reconstruction of roads) constituted about one-half of total annual public sector expenditures over the last 10 years, with small increases in recent years. Of total capital expenditures in fiscal year 2000, 52 percent was used for system preservation, such as resurfacing and rehabilitation, while 40 percent was used for construction of new roads and bridges and other system expansions. These percentages have fluctuated somewhat throughout the 1990s. However, as shown in figure 8, the percentage of capital outlays spent on system preservation increased from 45 percent to 52 percent between fiscal years 1993 and 2000, while the percentage spent on construction of new roads and bridges and other system expansions declined from 49 percent to 40 percent over the same period. For transit, capital expenditures accounted for about 26 percent of total annual public sector expenditures in 1999. The federal government spends more heavily on capital than on operations for transit. The federal share of capital expenditures fluctuated throughout the 1990s but in fiscal year 2000 stood at about 50 percent, the same as it was in fiscal year 1991. The federal share of total operating expenses declined from about 5 percent in fiscal year 1991 to about 2 percent in fiscal year 2000. Federal government support to Amtrak for operating expenses and capital expenditures has fluctuated throughout the 1990s. Annual operating grants fluctuated between $300 million and $600 million, and capital grants between $300 million and $500 million. In addition to these grants, the Taxpayer Relief Act of 1997 provided Amtrak with $2.2 billion for capital and operating purposes in fiscal years 1998 and 1999. Federal support declined in fiscal years 2000 and 2001, however, with the federal government providing grants to Amtrak of $571 million and $521 million, respectively. For water transportation, spending by the U.S. Army Corps of Engineers (Corps of Engineers) for construction of locks and dams for inland waterway navigation fell, while expenditures for operations and maintenance remained at around $350 million to $400 million, as shown in figure 9. By contrast, Corps of Engineers expenditures for the construction, operations, and maintenance of federal channels and harbors have increased over the past decade.
During fiscal years 1991 through 2000, construction expenditures increased from $112 million to $252 million (in 2000 dollars), while operations and maintenance expenditures increased from $631 million to $671 million (in 2000 dollars). In addition to the Corps of Engineers, the U.S. Coast Guard and the Maritime Administration also spend significant amounts for water transportation, although these agencies have limited responsibility for construction or maintenance of water transportation infrastructure. Demographic factors and economic growth are the primary variables influencing national travel projections for both passenger and freight travel. However, the key assumption underlying most of these travel projections is that the capacity of the transportation system is unconstrained; that is, capacity is assumed to expand as needed in order to accommodate future traffic flows. As a result, national travel projections need to be used carefully in evaluating how capacity improvements or increasing congestion in one mode of transportation might affect travel across other modes and the entire transportation system. Future travel growth will be influenced by demographic factors. A travel forecast study conducted for the Federal Highway Administration (FHWA) used economic and demographic variables such as per capita income and population to project a 24.7 percent national cumulative increase in vehicle miles traveled for passenger vehicles on public roads between 2000 and 2010. The study estimated that for every 1-percent increase in per capita income or population, vehicle miles traveled would increase nearly 1 percent. This forecast is unconstrained, however, in that it does not consider whether increased congestion or fiscal constraints will allow travel to grow at the rates projected. In part to deal with this limitation, FHWA uses another model to forecast a range of future vehicle miles traveled based on differing levels of investment. These projections recognize that if additional road capacity is provided, more travel is expected to occur than if the capacity additions are not provided. If congestion on a facility increases, some travelers will respond by shifting to alternate modes or routes, or will forgo some trips entirely. These projections are not available at this time but will be included in the U.S. Department of Transportation’s (DOT) 2002 report to Congress entitled Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance. While it is clear that travelers choose between modes of travel for reasons of convenience and cost, among other things, none of the FHWA travel forecasts consider the effects of changes in levels of travel on other modes, such as transit or rail. FHWA officials said that they would like to have a data system that projects intermodal travel, but for now such a system does not exist. The models also cannot reflect the impact of major shocks on the system, such as natural disasters or the terrorist attacks of September 2001. The Federal Transit Administration (FTA) makes national-level forecasts for growth in transit passenger miles traveled by collecting 15- to 25-year forecasts developed by metropolitan planning organizations (MPO) in the 33 largest metropolitan areas in the country. FTA calculates a national weighted average using the MPO forecasts and regional averages. MPOs create their forecasts as part of their long-range planning process. 
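The constant-elasticity relationship in the FHWA forecast described above can be sketched in a few lines. The base vehicle miles traveled and the growth rates below are assumptions for illustration, not the study's actual inputs; like the study's forecast, the sketch is unconstrained by capacity.

    # Back-of-the-envelope constant-elasticity VMT projection (inputs assumed).
    def project_vmt(base_vmt, pop_growth, income_growth,
                    pop_elasticity=1.0, income_elasticity=1.0):
        # Roughly 1 percent more VMT per 1 percent growth in population or
        # per capita income, as in the FHWA study's estimated relationship.
        return (base_vmt
                * (1 + pop_growth) ** pop_elasticity
                * (1 + income_growth) ** income_elasticity)

    base = 2.75e12  # annual vehicle miles traveled, assumed base value
    projected = project_vmt(base, pop_growth=0.09, income_growth=0.13)
    print(f"cumulative VMT growth over the decade: {(projected / base - 1) * 100:.1f}%")

With these assumed decade-long growth rates of 9 percent for population and 13 percent for per capita income, the sketch yields roughly 23 percent cumulative VMT growth, on the same order as the study's 24.7 percent projection.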
According to the 1999 Conditions and Performance report, unlike the first road travel forecast discussed above, the MPO forecasts of vehicle miles traveled and passenger miles traveled incorporate the effects of actions that the MPOs propose to shape demand in their areas and to attain air quality and other development goals. The MPO plans may include transit expansion, congestion pricing, parking constraints, capacity limits, and other local policy options. MPO forecasts also have to consider funding availability. Amtrak provided us with systemwide forecasts of ridership, which are based on assumed annual economic growth of between 1 and 1.5 percent, fare increases equal to the national inflation rate, and projected ridership increases on particular routes, including new or changing service on certain routes scheduled to come on line over the forecast period. For short-distance routes, Amtrak uses a model that estimates total travel over a route by any mode, based on economic and demographic growth. The model then estimates travel on each mode competing in the corridor based on cost and service factors for each mode. For long-distance routes, Amtrak uses a different model that projects future rail ridership using variables that have been determined to influence past rail ridership, such as population, employment, travel time for rail, and level of service for rail. This model does not consider conditions on competing modes. In forecasting growth in national freight travel, models developed by FHWA and the U.S. Army Corps of Engineers (Corps of Engineers) use growth in trade and the economy as the key factors driving freight travel. Projected growth in each particular mode is determined by growth in the production of the specific mix of commodities that historically are shipped on that mode. Therefore, any projected shift in freight movement from one mode to another is due to projected changes in the mix of commodities or projected changes in where goods are produced and consumed. Because current or future conditions and the capacity of the freight transportation system cannot be factored into the national forecasts, a number of factors—including growing congestion, as well as the benefits of specific projects that might relieve congestion—are not considered in the projections. In addition, future trends in other factors that affect shippers’ choices of freight modes—such as relative cost, time, or reliability—are not easily quantifiable and are also linked to each system’s capacity and the congestion on each system. As such, these factors are not included in FHWA’s or the Corps of Engineers’ national forecasting models. Underlying the commodity forecasts used by FHWA and the Corps of Engineers are a number of standard macroeconomic assumptions concerning primarily supply-side factors, such as changes in the size of the labor force and real growth in exports due to trade liberalization. Changes in border, airport, and seaport security since September 11 may affect assumptions that are embedded in these commodity forecasts. For example, increased delays and inspections at the border or at a port may make it difficult for shippers to meet just-in-time requirements, possibly resulting in a short-term shift to an alternative mode or a limiting of trade. Although current national freight forecasts are not capacity-constrained, FHWA is developing a “Freight Analysis Framework” to provide alternative analyses assessing certain capacity limitations.
The main impediment to developing this capability is determining capacity on each mode. There are commonly accepted measures of road capacity that are being incorporated, but rail and waterway capacity is not as easily measured. FHWA provided us with state-level forecasts of total vehicle miles traveled on public roads from 2000 to 2010, derived from data in the Highway Performance Monitoring System (HPMS) sample data set. This data set contains state-reported data on average annual daily traffic for approximately 113,000 road segments nationwide. For each sample section, HPMS includes measures of average annual daily traffic for the reporting year and estimates of future traffic for a specified forecast year, which is generally 18 to 25 years after the reporting year. It should be noted that the HPMS sample data do not include sections on any roads classified as local roads or rural minor collectors. Because the individual HPMS segment forecasts come from the states, we do not know exactly what models were used to develop them. According to officials at FHWA, the only national guidance comes from the HPMS Field Manual, which says that future average annual daily traffic should come from a technically supportable state procedure or data from MPOs or other local sources. The manual also says that HPMS forecasts for urbanized areas should be consistent with those developed by the MPO at the functional system and urbanized area level. For both local and intercity passenger travel, population growth is expected to be one of the key factors driving overall travel levels. Where that growth will occur will likely have a large effect on travel patterns and mode choices. According to the U.S. Census Bureau, the U.S. population will grow to almost 300 million by 2010. Although this represents a slower growth rate than in the past, it would still add approximately 18.4 million people to the 2000 population, and will likely also substantially increase the number of vehicles on public roads as well as the number of passengers on transit and intercity rail. The Census Bureau reported that since 1990, the greatest population growth has been in the South and West. According to one panelist, these regions’ metropolitan areas traditionally have lower central city densities and higher suburban densities than the Midwest and East. These areas are therefore harder to serve through transit than metropolitan areas with higher population densities, where transit can be more feasible. However, according to some transportation experts, it may not be possible to build new transit infrastructure in these areas due to environmental or other concerns. The population growth that is expected in suburban areas could lead to a larger increase in travel by private vehicles than by transit because suburban areas generally have lower population densities than inner cities, and also have more dispersed travel patterns, making them less easy to serve through conventional public transit. Although overall population growth will likely be greatest in suburban parts of metropolitan areas, high rates of growth are also predicted for rural areas. As is the case in suburbs, these rural areas are difficult to serve with anything but private automobiles because of low population densities and geographical dispersion of travel patterns, so travel by private vehicle may increase. Immigration patterns are also expected to contribute to changes in travel levels, but the extent will depend on immigration policies. 
For example, according to a senior researcher with the American Public Transportation Association, higher rates of immigration tend to increase transit use. In addition to overall population growth, another demographic trend that will likely affect mode choices is the aging of the population. According to data from the U.S. Census Bureau, the number of people aged 55 and over is projected to increase 26 percent between 2001 and 2010. The most rapidly growing broad age group is expected to be the population aged 85 and older, which is projected to increase 30 percent by 2010. According to the Federal Highway Administration and Federal Transit Administration’s 1999 Conditions and Performance report, the elderly have different mobility issues than the nonelderly because they are less likely to have drivers’ licenses, have more serious health problems, and may require special services and facilities. According to a report prepared for the World Business Council for Sustainable Development (Mobility 2001), cars driven by the elderly will constitute an increasing proportion of traffic, especially in the suburbs and rural areas, where many elderly people tend to reside. Increases in the number of older drivers can pose safety problems, in that the elderly have a higher rate of crashes per mile driven than younger drivers, and that rate rises significantly after age 85. The Mobility 2001 report also says that the fatality rate of drivers over 75 years of age is higher than that of any other age group except teenagers. Growth of the elderly population may therefore increase the importance of providing demand-responsive transit services and improving signs on public roads to make them clearer and more visible. Along with population growth, the increasing affluence of the U.S. population is expected to play a key role in local and intercity passenger travel levels and in the modes travelers choose. The 1999 Conditions and Performance report states that rates of vehicle ownership are lower in low-income households, leading those households to rely more on transit systems. According to Federal Transit Administration (FTA) officials and Mobility 2001, transit use—particularly use of buses—generally decreases as income increases. Increasing affluence also influences intercity travel levels. The 1999 Conditions and Performance report says that people with high incomes take approximately 30 percent more trips than people with low incomes, and their trips tend to be longer. Long-distance travel for business and recreation increases with income. Also, as income increases, travel by faster modes, such as car and air, increases, and travel by intercity bus tends to decrease. Several participants in our surface and maritime transportation panels (see app. VI) also indicated that improvements in communication technology will likely affect the amount and mode of intercity travel, but the direction and extent of the effect is uncertain. One panelist said that there is no additional cost to communicating over greater distances, so communications will replace travel to some extent, particularly as technologies improve. However, two other panelists said that communication technology might increase travel by making the benefit of travel more certain. For example, the Internet can provide people with current and extensive information about vacation destinations, potentially increasing the desire to travel.
According to Mobility 2001, it is unclear whether telecommunications technology will substitute for the physical transportation of people and goods. Telecommuting and teleconferencing are becoming more common, but technological improvements would have to be significant before they could substitute for actual presence at work or in face-to-face meetings. In addition, while home-based workers do not have to commute, they tend to travel approximately the same amount as traditional workers, but differ in how their travel is distributed among trip purposes. The terrorist attacks on the United States on September 11, 2001, are expected to have some effect on passenger travel levels and choices about which mode to use, but U.S. Department of Transportation (DOT) officials and participants in the panels did not believe the long-term changes would be significant, provided that no more attacks occur. Federal Highway Administration and Federal Railroad Administration officials speculated that increased delays in air travel due to stricter security procedures might shift some travel from air to other modes, such as car or rail, although they expected this effect to be negligible in the long term unless additional incidents occur. Finally, changes in the price (or perceived price), condition, and reliability of one modal choice as compared with another are also likely to affect levels of travel and mode choices. For example, changes in the petroleum market that affect fuel prices, or changes in government policy that affect the cost of driving or transit prices, could result in shifts between personal vehicles and transit; however, it is difficult to predict the extent to which these changes will occur. According to Mobility 2001, automobiles offer greater flexibility in schedule and choice of destinations than other modes of transportation, and often also provide shorter travel times with lower out-of-pocket costs. However, if heavy and unpredictable road congestion causes large variations in automobile travel time, there could be a shift to transit or a decrease in overall travel. According to several reports by DOT and transportation research organizations, increasing international trade, economic growth, the increasing value of cargo shipped, and changes in policies affecting certain commodities are expected to influence future volumes of freight travel and the choice of mode by which freight is shipped. Increasing international trade and national trade policies are expected to affect commodity flows, volumes, and mode choice. According to the Transportation Statistics Annual Report 2000, the globalization of businesses can shift production of goods sold in the United States to locations outside the country, increasing total ton-miles and changing the average length of haul of shipments. This shift in production could also affect freight mode choice, with more commodities being shipped by multiple modes as distances increase. According to Mobility 2001, truck transportation tends to be cheaper, faster, and more energy efficient than rail and barges for shipping high-value cargo. However, as distances increase, rail and intermodal transportation (linking rail and truck travel) become more cost-efficient options. Various trade policies also affect freight flows and volumes. For example, the North American Free Trade Agreement has contributed to the increased volume of trade moving on rail and highways.
According to data from the Bureau of Transportation Statistics’ Transborder Surface Freight Database, between 1996 and 2000, tonnage of imports by rail from Mexico and Canada increased by about 25 percent, and imports by truck increased 20 percent. In the maritime sector, expanding trade with the Pacific Rim increased traffic at west coast container ports. According to the Transportation Statistics Annual Report 2000, economic growth results in a greater volume of goods produced and consumed, leading to more freight moved. As the economy grows, disposable income per capita increases and individual purchasing power rises, which can cause businesses to ship more freight per capita. According to the report, freight ton-miles per capita increased more than 30 percent, from 10,600 in 1975 to 14,000 in 1999. The increasing value of cargo and the continuing shift toward a more service-oriented economy and more time-sensitive shipments have affected the volume of freight shipments and the choice of modes on which freight is shipped. According to the Transportation Statistics Annual Report 2000, there is a continuing shift toward production of high-value, low-weight products, which leads to changes in freight travel levels and mode choice. For example, it takes more ton-miles to ship $1,000 worth of steel than it does to ship $1,000 worth of cell phones. High-value cargo, such as electronics and office equipment, tends to be shipped by air or truck, while rail and barges generally carry lower-value bulk items, such as coal and grain. According to Mobility 2001, the growth of e-commerce and just-in-time inventory practices depend upon the ability to deliver goods quickly and efficiently. A report prepared for the National Cooperative Highway Research Program states that the effects of just-in-time inventory practices are to increase the number of individual shipments, decrease their length of haul, and increase the importance of on-time delivery. Both reports indicate that such practices may shift some freight from slower modes, such as rail, to faster modes, such as truck or air. In addition, the Mobility 2001 report states that as the demand for specialized goods and services grows, the demand for smaller, more specialized trucks increases. Items ordered from catalogs or on-line retailers are often delivered by specialized trucks. Policies affecting particular commodities can have a large impact on the freight industry. For example, policies concerning greenhouse gas emissions can affect the amount of coal mined and shipped. Because coal is a primary good shipped by rail and water, reduction in coal mining would have a significant effect on tonnage for those modes. Changes in the type of coal mined as a result of environmental policies—such as an increase in mining of low-sulfur coal—can also affect the regional patterns of shipments, resulting in greater ton-miles of coal shipped. Also, increasing emissions controls and clean fuel requirements may raise the cost of operating trucks and result in a shift of freight from truck to rail or barge. For example, according to Mobility 2001, recently released rules from the Environmental Protection Agency implementing more stringent controls for emissions from heavy-duty vehicles are predicted to increase the purchase price of a truck by $803. Other environmental regulations also affect the cost of shipping freight, as when controls on the disposal of material dredged from navigation channels increase the costs of expanding those channels. 
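The per capita figures cited above imply roughly a 32 percent increase, consistent with the report’s “more than 30 percent” characterization. A minimal check of that arithmetic, using only the figures from the report:

```python
# Growth in freight ton-miles per capita, per the Transportation
# Statistics Annual Report 2000 figures cited above.
per_capita_1975 = 10_600  # ton-miles per capita, 1975
per_capita_1999 = 14_000  # ton-miles per capita, 1999

growth = (per_capita_1999 - per_capita_1975) / per_capita_1975
print(f"Growth, 1975-1999: {growth:.1%}")  # ~32.1%, i.e., "more than 30 percent"
```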
Policies regarding cargo security may also affect the flow of goods into and out of the United States. For example, several of our panelists indicated that implementing stricter security measures will increase the cost of shipping freight as companies invest in the personnel and technology required. Tighter security measures could also increase the time necessary to clear cargo through Customs or other inspection stations. The U.S. Department of Transportation’s (DOT) program of Intelligent Transportation Systems (ITS) offers technology-based systems intended to improve the safety, efficiency, and effectiveness of the surface transportation system. The ITS program applies proven and emerging technologies—drawn from computer hardware and software systems, telecommunications, navigation, and other systems—to surface transportation. DOT’s ITS program has two areas of emphasis: (1) deploying and integrating intelligent infrastructure and (2) testing and evaluating intelligent vehicles. Under the first area of emphasis, the intelligent infrastructure program is composed of the family of technologies that can enhance operations in three types of infrastructure: (1) infrastructure in metropolitan areas, (2) infrastructure in rural areas, and (3) commercial vehicles. Under the ITS program, DOT provides grants to states to support ITS activities. In practice, the Congress has designated the locations and amounts of funding for ITS. DOT solicits the specific projects to be funded and ensures that those projects meet criteria established in the Transportation Equity Act for the 21st Century. Metropolitan intelligent transportation systems focus on deployment and integration of technologies in urban and suburban geographic areas to improve mobility. These systems include the following:

Arterial management systems that automate the process of adjusting signals to optimize traffic flow along arterial roadways;

Freeway management systems that provide information to motorists and detect problems whose resolution will increase capacity and minimize congestion resulting from accidents;

Transit management systems that enable new ways of monitoring and maintaining transit fleets to increase operational efficiencies through advanced vehicle locating devices, equipment monitoring systems, and fleet management;

Incident management systems that enable authorities to identify and respond to vehicle crashes or breakdowns with the most appropriate and timely emergency services, thereby minimizing recovery times;

Electronic toll collection systems that provide drivers and transportation agencies with convenient and reliable automated transactions to improve traffic flow at toll plazas and increase the operational efficiency of toll collection;

Electronic fare payment systems that use electronic communication, data processing, and data storage techniques in the process of fare collection and in subsequent recordkeeping and funds transfer;

Highway-rail intersection systems that coordinate traffic signal operations and train movement and notify drivers of approaching trains using in-vehicle warning systems;

Emergency management systems that enhance coordination to ensure the nearest and most appropriate emergency service units respond to a crash;

Regional multimodal traveler information systems that provide road and transit information to travelers to enhance the effectiveness of trip planning and en-route alternatives;

Information management systems that provide for the archiving of data generated by ITS devices to support planning and operations; and

Integrated systems that are designed to deliver the optimal mix of services in response to transportation system demands.

Rural Intelligent Transportation Systems are designed to deploy high-potential technologies in rural environments to satisfy the needs of a diverse population of users and operators. DOT has established seven categories of rural intelligent transportation projects. They are as follows:

Surface Transportation Weather and Winter Mobility – technologies that alert drivers to hazardous conditions and dangers, including wide-area information dissemination of site-specific safety advisories and warnings;

Emergency Services – systems that improve emergency response to serious crashes in rural areas, including technologies that automatically mobilize the closest police, ambulances, or fire fighters in cases of collisions or other emergencies;

Statewide/Regional Traveler Information Infrastructure – system components that provide information to travelers who are unfamiliar with the local rural area and the operators of transportation services;

Rural Crash Prevention – technologies and systems that are directed at preventing crashes before they occur, as well as reducing crash severity;

Rural Transit Mobility – services designed to improve the efficiency of rural transit services and their accessibility to rural residents;

Rural Traffic Management – services designed to identify and implement multi-jurisdictional coordination, mobile facilities, and simple solutions for small communities and operations in areas where utilities may not be available; and

Highway Operations and Maintenance – systems designed to leverage technologies that improve the ability of highway workers to maintain and operate rural roads.

The Commercial Vehicle ITS program focuses on applying technologies to improve the safety and productivity of commercial vehicles and drivers, reduce commercial vehicles’ operating costs, and facilitate regulatory processes for the trucking industry and government agencies. This is primarily accomplished through the Commercial Vehicle Information Systems and Networks—a program that links existing federal, state, and motor carrier information systems so that all entities can share information and communicate with each other in a more timely and accurate manner. The second area of emphasis in DOT’s ITS program—testing and evaluating intelligent vehicles—is designed to foster improvements in the safety and mobility of vehicles. This component of the ITS program is meant to promote traffic safety by expediting the commercial availability of advanced vehicle control and safety systems in four classes of vehicles: (1) light vehicles, including passenger cars, light trucks, vans, and sport utility vehicles; (2) commercial vehicles, including heavy trucks and interstate buses; (3) transit vehicles, including all nonrail vehicles operated by transit agencies; and (4) specialty vehicles, including those used for emergency response, law enforcement, and highway maintenance. Transportation officials at all levels of government recognize that funding from traditional sources (i.e., state revenues and federal aid) does not always keep pace with demands for new, expanded, or improved surface and maritime transportation infrastructure. Accordingly, the U.S. 
Department of Transportation (DOT) has supported a broad spectrum of emerging or established alternative financing mechanisms that can be used to augment traditional funding sources, access new sources of capital and operating funds, and enable transportation providers to proceed with major projects sooner than they might otherwise. These mechanisms fall into several broad categories: (1) allowing states to pay debt-financing costs with future anticipated federal highway funds, (2) providing federal credit assistance, and (3) establishing financing institutions at the state level. In addition, state, local, and regional governments engage in public/private partnerships to tap private sector resources for investment in transportation capital projects. The federal government helps subsidize public/private partnerships by providing them with tax exemptions. The federal government allows states to tap into Federal-aid highway funds to repay debt-financing costs associated with highway projects through the use of Grant Anticipation Revenue Vehicles (GARVEE). Under this program, states can pledge a share of future obligations of federal highway funds toward repayment of bond-related expenses, including a portion of the principal and interest payments, insurance costs, and other costs. A project must be approved by DOT’s Federal Highway Administration to be eligible for this type of assistance. The federal government also provides credit assistance in the form of loans, loan guarantees, and lines of credit for a variety of surface and maritime transportation programs, as follows:

Under the Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA), the federal government provides direct loans, loan guarantees, and lines of credit aimed at leveraging federal funds to attract nonfederal coinvestment in infrastructure improvements. This program is designed to provide financing for highway, mass transit, rail, airport, and intermodal projects, including expansions of multi-state highway trade corridors; major rehabilitation and replacement of transit vehicles, facilities, and equipment; border crossing infrastructure; and other investments with regional and national benefits.

Under the Rail Rehabilitation and Improvement Financing Program (RRIF), established by the Transportation Equity Act for the 21st Century (TEA-21) in 1998, the federal government is authorized to provide direct loans and loan guarantees for railroad capital improvements. This type of credit assistance is made available to state and local governments, government-sponsored authorities, railroads, corporations, or joint ventures that include at least one railroad. However, as of June 2002, no loans or loan guarantees had been granted under this program.

Under Title XI of the Merchant Marine Act of 1936, known as the Federal Ship Financing Guarantees Program, the federal government provides for a full faith and credit guarantee of debt obligations issued by (1) U.S. or foreign shipowners for the purpose of financing or refinancing U.S. or eligible export vessels that are constructed, reconstructed, or reconditioned in U.S. shipyards; and (2) U.S. shipyards for the purpose of financing advanced shipbuilding technology.
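To make the debt-financing mechanisms above concrete, the sketch below works out the level annual debt-service payment on a GARVEE-style bond issue and the share of a state’s annual federal-aid funds that payment would consume. All figures (principal, rate, term, apportionment) are hypothetical illustrations, not values from this report.

```python
# Illustrative GARVEE-style debt-service calculation (hypothetical figures).
principal = 200_000_000  # bond proceeds for the project (assumed)
rate = 0.05              # annual interest rate on the bonds (assumed)
years = 15               # bond term in years (assumed)

# Level annual payment on an amortizing bond: P * r / (1 - (1 + r)**-n)
payment = principal * rate / (1 - (1 + rate) ** -years)

apportionment = 500_000_000  # annual federal-aid highway apportionment (assumed)
share_pledged = payment / apportionment

print(f"Annual debt service: ${payment:,.0f}")               # ~$19.3 million
print(f"Share of federal aid pledged: {share_pledged:.1%}")  # ~3.9%
```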
A third way that the federal government helps transportation providers finance capital projects is by supporting State Infrastructure Banks (SIB). SIBs are investment funds established at the state or regional level that can make loans and provide other types of credit assistance to public and private transportation project sponsors. Under this program, the federal government allows states to use federal grants as “seed” funds to finance capital investments in highway and transit construction projects. The federal government currently supports SIBs in 39 states. In addition to these alternative financing mechanisms directly supported by the federal government, state, local, and regional governments sometimes engage in public/private partnerships to tap private sector resources for investment in transportation capital projects. The federal government also helps subsidize public/private partnerships by providing them with tax subsidies. One such subsidy is specifically targeted towards investment in ground transportation facilities—the tax exemption for interest earned on state and local bonds that are used to finance high-speed rail facilities and government-owned docks, wharves, and other facilities. In addition, a Department of the Treasury study indicates that the rates of tax depreciation allowed for railroads, railroad equipment, ships, and boats are likely to provide some subsidy to investors in those assets. Partnerships between state and local governments and the private sector are formed for the purpose of sharing the risks, financing costs, and benefits of transportation projects. Such partnerships can be used to minimize costs by improving project quality, managing risk, improving efficiency, spurring innovation, and accessing expertise that may not be available within the agency. These partnerships can take many forms; some examples include the following:

Partnerships formed to develop, finance, build, and operate new toll roads and other roadways;

Joint development of transit assets whereby land and facilities that are owned by transit agencies are sold or leased to private firms and the proceeds are used for capital investment in, and operations of, transit systems;

“Turnkey” contracts for transit construction projects whereby the contractor (1) accepts a lower price for the delivered product if the project is delayed or (2) receives a higher profit if the project is delivered earlier or under budget; and

Cross-border leases that permit foreign investors to own assets used in the United States, lease them to an American entity, and receive tax benefits under the laws of their home country. This financing mechanism offers an “up front” cost savings to transit agencies that are acquiring vehicles or other assets from a foreign firm.

Our work covered major modes of surface and maritime transportation for passengers and freight, including public roads, public transit, railways, and ports and inland waterways. To determine trends in public expenditures for surface and maritime transportation over the past 10 years, we relied on U.S. Department of Transportation (DOT) reports and databases that document annual spending levels in each mode of transportation. We analyzed trends in total public sector and federal expenditures across modes during the 10-year period covering fiscal years 1991 through 2000, and we compared the proportion of public expenditures devoted to capital activities versus operating and maintaining the existing infrastructure during that same time period. 
We adjusted the expenditure data to account for inflation using separate indexes for expenditures made by the federal government and state and local governments. We used price indexes from the Department of Commerce’s Bureau of Economic Analysis’ National Income and Products Accounts. To determine projected levels of freight and passenger travel over the next 10 years, we identified projections made by DOT’s modal administrations, the U.S. Army Corps of Engineers, and Amtrak for the period covering calendar years 2001 through 2010. We interviewed officials responsible for the projections and reviewed available documentation to identify the methodology used in preparing the projections and the key factors driving them. We also obtained data on past levels of freight and passenger travel, covering fiscal years 1991 through 2000, from DOT’s modal administrations, the U.S. Army Corps of Engineers, and Amtrak. We analyzed the factors driving the trends for three types of travel—local, intercity, and freight— that have important distinctions in the types of vehicles and modes used for the travel. To identify mobility challenges and strategies for addressing those challenges, we primarily relied upon expert opinion, as well as a review of pertinent literature. In particular, we convened two panels of surface and maritime transportation experts to identify mobility issues and gather views about alternative strategies for addressing the issues and challenges to implementing those strategies. We contracted with the National Academy of Sciences (NAS) and its Transportation Research Board (TRB) to provide technical assistance in identifying and scheduling the two panels that were held on April 1 and 3, 2002. TRB officials selected a total of 22 panelists with input from us, including a cross-section of representatives from all surface and maritime modes and from various occupations involved in transportation planning. In keeping with NAS policy, the panelists were invited to provide their individual views and the panels were not designed to build consensus on any of the issues discussed. We analyzed the content of all of the comments made by the panelists to identify common themes about key mobility challenges and strategies for addressing those challenges. Where applicable, we also identified the opposing points of view about the challenges and strategies. The names and backgrounds of the panelists are as follows. We also note that two of the panelists served as moderators for the sessions, Dr. Joseph M. Sussman of the Massachusetts Institute of Technology and Dr. Damian J. Kulash of the Eno Foundation, Inc. Benjamin J. Allen is Interim Vice President for External Affairs and Distinguished Professor of Business at Iowa State University. Dr. Allen serves on the editorial boards of the Transportation Journal and Transport Logistics, and he is currently Chair of the Committee for the Study of Freight Capacity for the Next Century at TRB. His expertise includes transportation regulation, resource allocation, income distribution, and managerial decisionmaking and his research has been published in numerous transportation journals. Daniel Brand is Vice President of Charles River Associates, Inc., in Boston, Mass. Mr. Brand has served as Undersecretary of the Massachusetts Department of Transportation, Associate Professor of City Planning at Harvard University, and Senior Lecturer in the Massachusetts Institute of Technology’s Civil Engineering Department. Mr. 
Brand edited Urban Transportation Innovation, coedited Urban Travel Demand Forecasting, and is the author of numerous monographs and articles on transportation. Jon E. Burkhardt is the Senior Study Director at Westat, Inc., in Rockville, Md. His expertise is in the transit needs of rural and small urban areas, in particular, the needs of the elderly population in such areas. He has directed studies on the ways in which advanced technology can aid rural public transit systems, the mobility challenges for older persons, and the economic impacts of rural public transportation. Sarah C. Campbell is the President of TransManagement, Inc., in Washington, D.C., where she advises transportation agencies at all levels of government, nonprofit organizations, and private foundations on transportation issues. Ms. Campbell is currently a member of the Executive Committee of the TRB. She was a founding director of the Surface Transportation Policy Project and currently serves as chairman of its board of directors. Christina S. Casgar is the Executive Director of the Foundation for Intermodal Research and Education in Greenbelt, Md. Ms. Casgar’s expertise is in transportation and logistics policies of federal, state, and local levels of government, particularly in issues involving port authorities. She has also worked with the TRB as an industry investigator to identify key issues and areas of research regarding the motor carrier industry. Anthony Downs is a Senior Fellow at the Brookings Institution. Mr. Downs’s research interests are in the areas of democracy, demographics, housing, metropolitan policy, real estate, real estate finance, “smart growth,” suburban sprawl, and urban policy. He is the author of New Visions for Metropolitan America (1994), Stuck in Traffic: Coping with Peak-Hour Traffic Congestion (1992), and several policy briefs published by the Brookings Institution. Thomas R. Hickey served until recently as the General Manager of the Port Authority Transit Corporation in Lindenwold, N.J. Mr. Hickey has 23 years of public transit experience, and he is a nationally recognized authority in the field of passenger rail operations and the design of intermodal facilities. Ronald F. Kirby is the Director of Transportation Planning at the Metropolitan Washington Council of Governments. Dr. Kirby is responsible for conducting long-range planning of the highway and public transportation system in the Washington, D.C., region, assessing the air quality implications of transportation plans and programs, implementing a regional ridesharing program, and participating in airport systems planning in the region. Prior to joining the Council of Governments, he conducted transportation studies for the Urban Institute and the World Bank. Damian J. Kulash is the President and Chief Executive Officer of the Eno Transportation Foundation, Inc., in Washington, D.C. Dr. Kulash established a series of forums at the Foundation addressing major issues affecting all transportation modes including economic returns on transportation investment, coordination of intermodal freight operations in Europe and the United States, and development of a U.S. transportation strategy that is compatible with national global climate change objectives. He has published numerous articles in transportation journals and directed studies at the Congressional Budget Office and the TRB. Charles A. Lave is a Professor of Economics (Emeritus) at the University of California, Irvine where he served as Chair of the Economics Department. Dr. 
Lave has been a visiting scholar at the Massachusetts Institute of Technology and Harvard University, and he served on the Board of Directors of the National Bureau of Economic Research from 1991 through 1997. He has published numerous articles on transportation pricing and other topics. Stephen Lockwood is Vice President of Parsons Corporation, an international firm that provides transportation planning, design, construction, engineering, and project management services. Mr. Lockwood is also a consultant to the American Association of State Highway and Transportation Officials (AASHTO), the Federal Highway Administration (FHWA), and other transportation organizations. Prior to joining Parsons, he served as Associate Administrator for Policy at FHWA. Timothy J. Lomax is a Research Engineer at the Texas Transportation Institute at Texas A&M University. Dr. Lomax has published extensively on urban mobility issues and he developed a methodology used to assess congestion levels and costs in major cities throughout the United States. He is currently conducting research, funded by nine state transportation departments, to improve mobility measuring capabilities. James R. McCarville is the Executive Director of the Port of Pittsburgh Commission. He also serves as the President of the trade association, Inland Rivers’ Ports and Terminals, Inc., and is a member of the Marine Transportation System National Advisory Council, a group sponsored by the U.S. Secretary of Transportation. Mr. McCarville previously served as a consultant to the governments of Brazil, Uruguay, and Mexico on matters of port organization, operational efficiency, and privatization. James W. McClellan is Senior Vice President for Strategic Planning at the Norfolk Southern Corporation in Norfolk, Va., where he previously held positions in corporate planning and development. Prior to joining Norfolk Southern, he served in various marketing and planning positions with the New York Central Railroad, DOT’s Federal Railroad Administration, and the Association of American Railroads. Michael D. Meyer is a Professor in the School of Civil and Environmental Engineering at the Georgia Institute of Technology and was the Chair of the school from 1995 to 2000. He previously served as Director of Transportation Planning for the state of Massachusetts. Dr. Meyer’s expertise includes transportation planning, public works economics and finance, public policy analysis, and environmental impact assessments. He has written over 120 technical articles and has authored or co-authored numerous texts on transportation planning and policy. William W. Millar is President of the American Public Transportation Association (APTA). Prior to joining APTA, he was executive director of the Port Authority of Allegheny County in Pittsburgh, Pa. Mr. Millar is a nationally recognized leader in public transit and has served on or as Chair of the executive committees of TRB, the Transit Development Corporation, APTA, and the Pennsylvania Association of Municipal Transportation Authorities. Alan E. Pisarski is an independent transportation consultant in Falls Church, Va., providing services to public and private sector clients in the United States and abroad in the areas of transport policy, travel behavior, and data analysis and development. He has served as an advisor to numerous transportation and statistics agencies and transportation trade associations. He has also conducted surface transportation reviews for AASHTO and FHWA. Craig E. 
Philip is President and Chief Executive Officer of the Ingram Barge Company in Nashville, Tenn. He has served in various professional and senior management capacities in the maritime, rail, and intermodal industries and has held adjunct faculty positions at Princeton University and Vanderbilt University. Dr. Philip serves on the Executive Committee of the American Waterways Operators Association, the Marine Transportation System National Advisory Council, and the National Academy of Sciences’ Marine Board, and he is immediate past Chairman of the National Waterways Conference. Arlee T. Reno is a consultant with Cambridge Systematics in Washington, D.C. Mr. Reno has expertise in performance-based planning and measurement, multimodal investment analysis, urban transportation costs, alternative tax sources, and revenue forecasting for highway agencies. He has conducted reviews for the FHWA, AASHTO, and numerous state transportation agencies. Joseph M. Sussman is the JR East Professor in the Department of Civil and Environmental Engineering and the Engineering Systems Division at the Massachusetts Institute of Technology. Dr. Sussman is the author of Introduction to Transportation Systems (2000) and specializes in transportation systems and institutions, regional strategic transportation planning, intercity freight and passenger rail, intelligent transportation systems, simulation and risk assessment methods, and complex systems, and he has authored numerous publications in those areas. He has served as Chair of TRB committees and as the Chairman of its Executive Committee in 1994, and he serves on the Board of Directors of ITS America and ITS Massachusetts. Louis S. Thompson is a Railways Advisor for the World Bank where he consults on all of the Bank’s railway lending activities. Prior to joining the Bank, Mr. Thompson held a number of senior positions in DOT’s Federal Railroad Administration, including Acting Associate Administrator for Policy, Associate Administrator for Passenger and Freight Services, Associate Administrator for Intercity Services, and Director of the Northeast Corridor Improvement Project. He has also served as an economics and engineering consultant. Martin Wachs is the Director of the Institute of Transportation Studies at the University of California, Berkeley and he holds faculty appointments in the departments of City and Regional Planning and Civil and Environmental Engineering at the university. Dr. Wachs has published extensively in the areas of transportation planning and policy, especially as related to elderly populations, fare and subsidy policies, crime in public transit, ethics, and forecasting. He currently serves as Chairman of the TRB and has served on various transportation committees for the state of California. In addition to the above, Christine Bonham, Jay Cherlow, Helen DeSaulniers, Colin Fallon, Rita Grieco, Brandon Haller, David Hooper, Jessica Lucas, Sara Ann Moessbauer, Jobenia Odum, and Andrew Von Ah of GAO, as well as the experts identified in appendix VI, made key contributions to this report.
The U.S. surface and maritime transportation systems include roads, mass transit systems, railroads, and ports and waterways. One of the major goals of these systems is to provide and enhance mobility, that is, the free flow of passengers and goods. Mobility provides people with access to goods, services, recreation, and jobs; provides businesses with access to materials, markets, and people; and promotes the movement of personnel and material to meet national defense needs. During the past decade, total public sector spending increased for public roads and transit, remained constant for waterways, and decreased for rail. Passenger and freight travel are expected to increase over the next 10 years, according to Department of Transportation projections. Passenger vehicle travel on public roads is expected to grow by 24.7 percent from 2000 to 2010. Passenger travel on transit systems is expected to increase by 17.2 percent over the same period. Amtrak has estimated that intercity passenger rail ridership will increase by 25.9 percent from 2001 to 2010. The key factors behind increases in passenger travel, and the modes travelers choose, are expected to be population growth, the aging of the population, and rising affluence. According to GAO's expert panelists and other sources, with increasing passenger and freight travel, the surface and maritime transportation systems face a number of challenges that involve ensuring continued mobility while maintaining a balance with other social goals, such as environmental preservation. These challenges include (1) preventing congestion from overwhelming the transportation system, (2) ensuring access to transportation for certain underserved populations, and (3) addressing the transportation system's negative effects on the environment and communities. There is no one solution for the mobility challenges facing the nation, and GAO's expert panelists indicated that numerous approaches are needed to address these challenges. Strategies include (1) focusing on the entire surface and maritime transportation system rather than on specific modes and types of travel, (2) using a full range of tools to achieve desired mobility outcomes, and (3) providing more options for financing mobility improvements and considering additional sources of revenue.
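The growth projections above are cumulative over roughly a decade. Restating them as implied compound annual growth rates makes the modes easier to compare; the sketch below performs that conversion (the growth figures are from the projections above, with the Amtrak projection treated as a 9-year span):

```python
# Convert cumulative growth projections into implied compound annual
# growth rates (CAGR). Growth figures are from the projections above.
def cagr(total_growth: float, years: int) -> float:
    return (1 + total_growth) ** (1 / years) - 1

projections = [
    ("Highway passenger vehicle travel, 2000-2010", 0.247, 10),
    ("Transit passenger travel, 2000-2010", 0.172, 10),
    ("Intercity passenger rail ridership, 2001-2010", 0.259, 9),
]

for mode, growth, years in projections:
    print(f"{mode}: {cagr(growth, years):.2%} per year")
```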
Under the Clean Air Act, EPA regulates two primary types of air pollutants. The first category—the so-called “criteria pollutants” for which EPA has established air quality criteria that limit the allowable concentrations in the ambient air—includes carbon monoxide, ground-level ozone (smog), lead, nitrogen oxides, particulate matter, and sulfur dioxide. EPA sets these standards at a level it believes protects public health and the needs of sensitive populations such as asthmatics, children, and the elderly. EPA and the states use air quality monitoring to measure compliance with the standards and develop pollution control strategies to help bring areas with poor air quality into compliance. The second category consists of hazardous air pollutants (or “air toxics”) for which no ambient air quality standards exist, and includes 187 chemicals that cause a variety of adverse health effects, including cancer. A variety of sources emit one or more of these air toxics (see fig. 1). In 2002, mobile sources emitted 41 percent of all air toxics, small stationary sources emitted 30 percent, major stationary sources emitted 20 percent, and other sources, such as fires, emitted 9 percent, according to EPA’s most recent data. Table 1 identifies the most widely emitted air toxics, the primary sources of these pollutants, and some of the adverse health effects associated with exposure to these substances. It is important to note that the health risks posed by air toxics vary considerably. Thus, small quantities of more harmful pollutants can pose greater health threats than large quantities of less harmful pollutants. Prior to 1990, the Clean Air Act required EPA to list air toxics it deemed hazardous and then promulgate regulations for them. However, by 1990, EPA had regulated only seven such pollutants. In 1990, Congress dramatically changed the program. Instead of requiring EPA to develop ambient standards for air toxics as it does for the six criteria pollutants, the Clean Air Act Amendments of 1990 listed the air toxics to be controlled and directed EPA to control them by, among other things, (1) developing technology-based emissions limits (MACT standards) for major stationary sources, such as incinerators and chemical plants; (2) regulating emissions from smaller sources, such as dry cleaners and gas stations; and (3) evaluating the need for and feasibility of regulations from mobile sources, such as cars, and regulating these sources based on this evaluation. The standards for major stationary sources generally require the use of available control technologies to achieve emissions reductions without the explicit consideration of a chemical’s toxicity or potential risk. To develop MACT standards, the 1990 amendments directed EPA to group emissions points at industrial facilities into categories of similar sources and then develop regulations for each “source category.” Examples of source categories include cement manufacturing, hazardous waste combustion, and semiconductor manufacturing. The next step consisted of evaluating the level of emissions control achieved by the best-performing facilities in each source category and using this as the minimum level of control required throughout the entire source category. Additionally, the amendments required EPA to review the MACT standards every 8 years to evaluate any remaining, or residual, health risks from these sources and identify developments in control technologies. 
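As the paragraphs below describe, these residual risk reviews turn in part on whether the most exposed individuals face an excess lifetime cancer risk of more than 1 in 1 million. The sketch below illustrates that screening comparison under a simple linear dose-response assumption; the concentration and unit-risk values are hypothetical placeholders, not EPA figures, and EPA’s actual assessments are far more elaborate.

```python
# Illustrative residual-risk screening comparison (hypothetical values).
THRESHOLD = 1e-6  # excess lifetime cancer risk of 1 in 1 million

def excess_cancer_risk(concentration_ug_m3: float, unit_risk_per_ug_m3: float) -> float:
    """Lifetime excess cancer risk for continuous exposure at a fixed ambient
    concentration, assuming a linear dose-response relationship."""
    return concentration_ug_m3 * unit_risk_per_ug_m3

# Hypothetical pollutant: 0.5 ug/m3 long-term concentration near the source,
# unit risk of 5e-6 per ug/m3 (both placeholders).
risk = excess_cancer_risk(0.5, 5e-6)
print(f"Estimated excess lifetime cancer risk: {risk:.1e}")
print("Exceeds 1-in-1-million threshold" if risk > THRESHOLD else "Below threshold")
```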
EPA has combined the residual risk assessments and technology reviews into a concurrent process. Thus, the agency simultaneously evaluates the remaining risks from each source category and the availability of new pollution control technologies. The risk assessment process seeks to estimate the cancer and other health risks faced by individuals exposed to toxic emissions. As shown in table 2, the four steps of risk assessment include hazard identification, dose-response assessment, exposure assessment, and risk characterization. The risk assessment process is limited by scientific uncertainty about the health effects associated with exposure to air toxics. Nonetheless, the Clean Air Act’s residual risk program seeks to determine whether the most exposed individuals face excess cancer risk of more than 1 in 1 million. In cases where estimated risks exceed this threshold, EPA develops a residual risk standard that seeks to provide an ample margin of safety for affected individuals. Figure 2 provides an overview of the regulatory process for major stationary sources of air toxics, including MACT standards and 8-year technology and residual risk reviews. In addition to requirements for major sources, the act required EPA to develop a comprehensive strategy to control emissions of air toxics in urban areas, including identifying at least 30 small stationary source categories that account for 90 percent of the risk from these sources, and issue regulations by November 2000. EPA has listed 70 small stationary source categories for regulation. The act also required EPA to assess the need for and feasibility of air toxics standards for motor vehicles and fuels, and, based on that assessment, issue regulations to control air toxics from motor vehicles and fuels. Table 3 summarizes the 453 actions required of EPA under the air toxics provisions of the 1990 amendments. Because these actions range in scope from developing MACT standards to issuing reports, they vary in their potential to reduce emissions. EPA’s Office of Air and Radiation has primary responsibility for completing air toxics actions required under the Clean Air Act. Within the Office of Air and Radiation, responsibility for implementing the air toxics requirements of the Act rests primarily with the Office of Air Quality Planning and Standards and, to a lesser extent, with the Office of Transportation and Air Quality. The responsibility for analyzing the health, economic, and other effects of individual air toxics programs also rests with these offices. The Office of Policy Analysis and Review supplements these program-specific analyses by conducting periodic assessments of the health, ecological, and economic effects of the overall Clean Air Act, including the air toxics provisions, and coordinating these studies as appropriate with other EPA offices. In conducting these broader studies, the Office of Policy Analysis and Review also works with the Advisory Council for Clean Air Act Compliance Analysis, an independent, multi-disciplinary panel of outside experts organized under the auspices of EPA’s Science Advisory Board. The agency’s Office of Research and Development performs scientific research on air toxics to support regulatory efforts. The Office of Enforcement and Compliance Assurance directs efforts to ensure compliance with air toxics requirements. In most cases, state and local air pollution control agencies implement the standards developed by EPA. 
Additionally, the act generally allows these agencies to impose more stringent requirements than the federal standards, although some states have enacted laws or rules prohibiting air pollution control agencies from adopting more stringent requirements. Nonetheless, some state and local programs have developed innovative air toxics programs. EPA has completed issuing emissions standards for major stationary sources but issued most of these standards late and has made limited progress toward completing the remaining air toxics requirements. In particular, EPA has made little progress and is behind schedule in completing residual risk and technology reviews and in issuing emissions standards for small stationary sources and mobile sources. EPA’s limited progress and program implementation challenges have resulted primarily from the program’s lower priority relative to other clean air programs. Furthermore, the agency lacks a program implementation strategy. Stakeholders we interviewed—including EPA, state and local agency officials, environmental groups, and industry representatives—provided additional perspective on EPA’s implementation of the air toxics program and highlighted data limitations and inadequate funding as major challenges. EPA has completed issuing the MACT standards for major stationary sources but has made limited progress in addressing requirements related to residual risk and technology reviews, and in issuing standards for small stationary sources and mobile sources. As a result of the limited progress in implementing these requirements, EPA has not reduced human health risks from air toxics to the extent and in the time frames envisioned in the act. Table 4 summarizes EPA’s overall progress in implementing air toxics requirements under the Clean Air Act. To meet the act’s requirements for major stationary sources, EPA had to identify a list of major source categories and then issue standards beginning in 1992, with all standards due by November 2000. In response, EPA identified 158 major source categories and issued 96 standards covering these categories between 1993 and 2004. Table 5 summarizes the timeliness of EPA’s MACT standards relative to the act’s deadlines. While the agency missed most of the deadlines, a senior EPA official said that issuing the 96 standards represented a major achievement and that the agency had never previously issued so many standards for one program in such a limited period of time. Because EPA issued most of the MACT standards well behind schedule, the residual risk and control technology reviews, which EPA is to complete 8 years after issuing each standard, have been pushed back commensurately, thereby delaying any additional public health protection that these reviews may provide. Specifically, instead of completing the initial residual risk assessments and technology reviews for all of the MACT standards by 2008 as specified by the act, EPA is not required to complete all of the initial reviews until 2012 because it issued many MACT standards behind schedule. For example, because EPA issued the MACT standard for industrial boilers in 2004 rather than 2000, as required, the residual risk assessment and technology review for this source category become due in 2012, almost 4 years later than the act’s intended timeline. Furthermore, EPA is behind schedule on the residual risk assessments and technology reviews. As of April 2006, EPA had finalized only five of these reviews, and all of these were late. 
Three additional reviews have court-ordered deadlines and will be completed by the end of 2006, according to EPA. The act required EPA to develop regulations for small stationary sources by November 2000. However, the agency has not met this schedule. In July 2000, EPA outlined its plans for issuing standards for small stationary sources in a report to Congress describing its strategy for reducing threats from air toxics in urban areas. This report identified 16 categories of small stationary sources that it described as “already subject to regulation” or “will be subject to regulation.” The report also identified 13 additional categories for which it planned to issue standards by 2004. In 2002, EPA expanded the list to include a total of 70 source categories. However, as of April 2006, EPA has issued standards for only 16 categories of sources, leaving standards for 54 source categories past due. Furthermore, the agency faces court-ordered deadlines to complete standards for all of the remaining categories of small stationary sources by June 15, 2009. The act also required EPA to study the need for and feasibility of air toxics standards for motor vehicles and fuels and, based on the study, develop a regulation to control air toxics from motor vehicles and fuels by 1995. EPA completed the study in 1993 (about 11 months after the deadline) and, after missing the 1995 deadline for the regulation, faced a court-ordered consent decree to complete the regulation by 2001. To comply, EPA issued an initial rule in 2001 that stated that a second and final rulemaking would follow in 2004. The agency missed this deadline and eventually proposed a second rule in February 2006, with a final rule planned for February 2007. The proposed rule would limit the benzene content of gasoline and reduce toxic emissions from passenger vehicles and gas cans, according to EPA. Finally, the act contained 31 requirements that do not fit into the categories discussed above, including reports to Congress and guidance for state and local programs. As of April 2006, EPA has met 29 of these requirements. One of the key areas where EPA has not taken action relates to the act’s requirement for the agency to periodically review and update, as appropriate, the list of air toxics. Officials responsible for the program said the agency does not proactively conduct such reviews and instead has adopted a reactive approach, whereby the agency responds to petitions filed by external stakeholders seeking to add or delete chemicals. EPA officials, citing insufficient resources to develop a more proactive approach, said that their efforts have focused on reviewing petitions for additions and deletions filed by external stakeholders. Since 1990, EPA has received one petition to list a new air toxic (diesel exhaust) and seven petitions to delist. The petition to list diesel exhaust is under review, and of the seven petitions to delist, three have been granted, two have been denied, and two are under review, according to EPA. Overall, EPA has not added any new chemicals to the list of regulated pollutants, but three chemicals and several substances from a listed group of chemicals have been removed. The agency’s consideration of diesel exhaust in response to an environmental group’s petition has taken more than 2 years, resulting in a lawsuit when the agency did not complete its review within 18 months, as required by the act. 
EPA and the environmental group reached an agreement in February 2006 that requires the agency to decide by June 2006 whether to list diesel exhaust as an air toxic. A 2004 report by the National Academies highlighted EPA’s lack of a process for reviewing new pollutants despite the estimated 300 chemicals that enter commerce each year. The report recommended that EPA “establish a more dynamic process for considering new pollutants.” To date, EPA has not addressed this recommendation, according to senior agency officials. Furthermore, a 2004 study published in the Journal of the Air & Waste Management Association screened 1,086 chemicals for potential addition to the list of regulated air toxics and found that 44 merited further consideration for addition to the list based on available toxicity and emissions data. Senior EPA air program officials said the agency’s progress in meeting the act’s air toxics requirements should be viewed within the context of limited funding for clean air programs and the agency’s need to focus its resources on the areas where it expects the greatest health-risk reductions. Scientific information on the health effects of air toxics is less comprehensive than that available for higher-priority clean air programs, such as those targeting smog and particulate matter. Additionally, several officials said that other regulatory and voluntary programs limit emissions of air toxics as a side benefit. EPA considers the air toxics program a lower priority than its three other major clean air programs—including those to address criteria pollutants, international environmental issues such as climate change, and indoor air quality issues such as exposure to radon gas—because senior officials in EPA’s Office of Air and Radiation believe these programs have more potential to reduce health risks. As shown in table 6, the percentage of funding for air toxics relative to all clean air programs ranged from 18 percent to 19 percent between 2000 and 2003, but declined to 15 percent in 2004 and 12 percent in 2005. However, the total dollar amounts (in inflation-adjusted 2005 dollars) devoted to air toxics increased each year between 2000 and 2004, with a decline in 2005. Within the air toxics program, EPA’s initial priority was to complete the MACT standards because the agency believed that this aspect of the program had the greatest potential to address risks from air toxics. Despite EPA placing a priority on issuing the MACT standards, the agency still fell behind schedule when it missed deadlines for the first round of standards (due in 1992) and has never caught up to the act’s implementation schedule. EPA officials said they missed some of the MACT deadlines because of technical challenges, including a lack of emissions data from affected source categories and the complexity of many of the regulated facilities. The missed deadlines led to lawsuits filed by external parties seeking compliance with the act’s implementation schedule, resulting in court-ordered deadlines for the agency to complete standards. Furthermore, senior EPA officials said these court-ordered deadlines largely drive the program’s agenda. In this way, EPA ceded control of the priority-setting process, and this problem is still evident. For example, a senior official responsible for the development of regulations said that the agency’s highest priority for the remaining requirements is addressing residual risk reviews and small stationary source standards with court-ordered deadlines. 
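The funding comparison above expresses amounts in inflation-adjusted 2005 dollars. A minimal sketch of that kind of conversion, assuming a hypothetical price index rather than whichever deflator was actually used:

```python
# Convert nominal dollars to constant 2005 dollars with a price index.
# The index values below are assumed for illustration only.
price_index = {2000: 100.0, 2005: 113.0}

def to_2005_dollars(nominal: float, year: int) -> float:
    """Restate a nominal amount from `year` in constant 2005 dollars."""
    return nominal * price_index[2005] / price_index[year]

# $100 million appropriated in 2000, restated in 2005 dollars:
print(f"${to_2005_dollars(100.0, 2000):.1f} million (2005 dollars)")
```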
The lower priority of the air toxics program in general and the priority given to MACT standards within the program, as well as technical challenges, have caused delays in completing the residual risk and technology reviews, as well as standards for small stationary and mobile sources. Further, as shown in table 7, available EPA data indicate that small stationary and mobile sources in total have accounted for more emissions than major stationary sources in every emissions inventory completed since the 1990 amendments. Furthermore, the agency has estimated that benzene—a known carcinogen emitted primarily by mobile sources—accounts for about 25 percent of the cancer risk posed by air toxics across the nation. Benzene is also the only air toxic that, to date, EPA has determined poses sufficient risks to qualify as a “national cancer risk driver.” EPA developed air toxics emission inventories for 1993, 1996, 1999, and 2002. A large part of the 1993 baseline inventory is based on data obtained from 1990. For simplicity, and because EPA has traditionally referred to it as such, we refer to this data as the 1993 baseline inventory. EPA said it did not provide data for 1996 because the agency has not updated the information from that year for consistency with the methodology used for the 1993, 1999, and 2002 data. EPA expects that the proposed mobile source air toxics rule will reduce benzene emissions. In addition, a senior EPA air program official said that other regulations for mobile sources, including standards that affect gasoline formulations as well as programs addressing emissions from diesel engines, will also reduce emissions of air toxics as a side benefit. Nonetheless, mobile sources will continue to represent an area of significant opportunity to reduce emissions and related human health risks. Addressing the remaining requirements for residual risk standards and small stationary sources will require overcoming significant technical challenges. Regarding residual risk standards, the Clean Air Act’s requirement that EPA introduce a risk element into the regulatory decision-making process marks a departure from the approach the act used with MACT standards, which generally did not require EPA to take the inherent toxicity or health risks from pollutants into account. EPA officials said that conducting the residual risk assessments requires a large amount of data, much of which is difficult to obtain. For example, to adequately assess the human health risk posed by a particular source, EPA needs data on the health effects associated with each pollutant, the location of sources, distances between sources and affected populations, and the concentrations of emissions at different distances from facilities. Challenges in regulating small stationary sources center on difficulty in characterizing the large number of widely dispersed facilities such as industrial boilers, paint-stripping operations, and auto-body shops. In some cases, data do not exist on the number or location of facilities potentially subject to a regulation. Furthermore, unlike the large stationary sources affected by MACT standards, owners and operators of these sources have limited resources to implement regulations and will require extensive outreach and compliance assistance. EPA’s challenges in meeting the act’s remaining requirements are exacerbated by the lack of a management plan that identifies priorities and necessary resources. 
The agency’s overall strategic plan outlines the goals and targets for emissions and risk reduction across all clean air programs but does not specify priorities or necessary levels of funding for the air toxics program. Similarly, the agency’s budget requests provide limited information on the agency’s air toxics program activities or priorities. Furthermore, a senior EPA official said that the agency has not estimated how much funding the air toxics program needs to meet the act’s remaining requirements. Such information could assist Congress in making its appropriations decisions, enhance the program’s transparency to the public, and guide the agency in implementing the program. To better understand the challenges facing EPA’s air toxics program, we interviewed various stakeholders, including officials from EPA, industry and environmental groups, and state and local air pollution control agencies. Each respondent rated the extent to which nine specific issues posed a challenge to EPA in implementing the air toxics program, and we then averaged the responses within each stakeholder group. As shown in table 8, the average response within each group identified at least one of seven different issues as a challenge to a large or very great extent. Although perceptions varied among the stakeholder groups, three issues emerged as primary challenges—the availability of reliable data to assess the benefits of regulating air toxics, the adequacy of program funding, and the program’s low priority relative to other clean air programs. As shown in the table, respondents from at least three of the four stakeholder groups we interviewed identified each of these challenges as significant. Several stakeholders identified linkages among the three primary challenges. For example, some stakeholders said that the problems with limited resources stemmed from the program’s low priority. In addition, some stakeholders said that the lack of information on the benefits of regulating air toxics reinforced the program’s low priority because the agency cannot demonstrate the results it achieves through investments in the program. Further, industry and EPA stakeholders cited the number of air toxics requirements as a challenge to a large or very great extent. Respondents from both groups stated that the agency has insufficient resources to meet such a large number of requirements in the specified time frames. Industry officials noted that the number of requirements was unrealistic, and some EPA stakeholders said that Congress did not understand the number of emissions sources involved or the level of effort required to write standards. EPA and state and local stakeholders also cited the adequacy of resources at the state, local, and tribal levels to implement regulations as a significant challenge. The information available on the costs and benefits of EPA’s air toxics program is not sufficiently comprehensive to measure the overall effectiveness of the program. For example, because of limited data, EPA’s major economic assessments of the Clean Air Act have not included monetized estimates of the program’s benefits, such as reduced incidence of cancer, and have provided only limited information on costs. The absence of information on benefits stems from a lack of data on the extent to which incremental reductions in exposure to air toxics affect an average person’s chance of developing adverse health effects. 
The agency also lacks reliable data on the quantities of each pollutant emitted prior to the adoption of air toxics regulations or in the years thereafter. Furthermore, other potential indicators of the program’s effectiveness, such as data on compliance with air toxics regulations, are inconclusive. As a result, it is difficult to compare the results of investments in the air toxics program with those generated by clean air programs on which EPA has placed a higher priority. Although EPA has conducted two major assessments of the costs and benefits of its programs under the Clean Air Act, the agency has not fully analyzed the air toxics program primarily because of difficulty in characterizing the program’s effects on public health. Without a comprehensive assessment of costs and benefits, it is difficult to gauge the program’s cost effectiveness or net benefits (total benefits minus total costs) or compare these effects with those of higher-priority air pollution control programs. The two assessments of the act’s costs and benefits focused on separate time periods. EPA refers to the first assessment, completed in 1997, as the “retrospective” analysis because it covered the period 1970 to 1990. It is of limited use in understanding the economic effects of the current air toxics program because this time period predates the significant expansion of the program after the 1990 amendments. The second analysis, completed in 1999, is referred to as the “prospective” analysis because it covered the period 1990 to 2010. This study attempted to forecast the future economic impacts of the 1990 amendments and estimated that the overall net benefits of clean air regulations from 1990 to 2010 would total $510 billion (1990 dollars), with a benefit-to-cost ratio of four to one. Most (over 90 percent) of the monetized benefits included in the analysis stemmed from reduced incidence of health effects associated with exposure to five of the six criteria pollutants—carbon monoxide, ground-level ozone, particulate matter, nitrogen oxides, and sulfur dioxide. EPA places the highest priority within its clean air programs on the criteria pollutants. The prospective analysis is of limited use in understanding the effects of the air toxics program because it provided incomplete information on the costs of air toxics standards and did not include estimates of the human health or other benefits of these standards. Specifically, the cost estimates reflect only the 21 standards EPA had issued at the time of the study—a number that has since grown to 96. EPA estimated that the cost to industry of complying with these 21 MACT standards would total $780 million in 2000 and rise to $840 million by 2010. According to EPA, these estimates primarily reflect the cost of purchasing, operating, and maintaining pollution control equipment. As shown in table 9, these costs represent a relatively small fraction of the total estimated costs of the 1990 amendments over that time period. An EPA official responsible for the prospective study said that the agency did not include estimates for the aspects of the program that it had not yet implemented—such as the 75 remaining MACT standards—because, at the time, the agency did not have information on the number of facilities that would have to comply with future standards or the level of emissions control the standards would require. Without this information, the official said it was appropriate to exclude these future standards from the analysis. 
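For context, the prospective analysis's headline figures imply the underlying benefit and cost totals. The derivation below follows from the reported net benefit and benefit-to-cost ratio alone; the implied totals are our arithmetic, not separately reported EPA estimates.

```latex
% Implied totals from the prospective analysis's reported figures.
% Let B = total benefits and C = total costs, in billions of 1990 dollars.
% Reported: net benefits B - C = 510 and benefit-to-cost ratio B / C = 4.
\begin{align*}
B &= 4C \\
4C - C = 3C &= 510 \quad\Rightarrow\quad C = 170 \\
B &= 4 \times 170 = 680
\end{align*}
```

That is, the reported figures imply roughly $680 billion in total benefits against $170 billion in total costs (1990 dollars) over the 1990 to 2010 period, of which the 21 MACT standards' estimated compliance costs are a small fraction.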
Nonetheless, EPA acknowledged the lack of information on the costs of future air toxics standards as a key uncertainty of the analysis. EPA plans to update its cost estimates as part of a new prospective analysis covering 1990 to 2020. The revised cost estimates will include all of the completed MACT standards as well as any other air toxics rules issued by September 2005 (except the residual risk rule for coke ovens, which entails emissions reductions and compliance costs that would have a negligible effect on the overall analysis). An EPA official responsible for the analysis said the agency expects to have preliminary results of the revised cost estimates in late 2006, with a final report expected in 2007. The prospective analysis of the 1990 Clean Air Act amendments did not include monetized estimates of the benefits of air toxics regulations, such as decreased cancer risks to affected individuals, because the agency did not have sufficient data to estimate these effects. As shown in table 10, estimating the benefits of EPA's air toxics program requires a substantial amount of scientific data. Specifically, this process involves determining the extent to which reductions in exposure to air toxics have decreased the incidence of adverse health effects, including cancer and noncancer illnesses. This, in turn, requires estimating the extent of adverse health effects stemming from exposure to air toxics both before (see steps 1 to 3 below) and after (see steps 4 to 6 below) adopting air toxics regulations. For example, exposure to air toxics prior to the adoption of a regulation may have caused 1,000 cases of cancer per year, but the presence of a regulation may have decreased this number to 500 cases per year. The 500 avoided cases would represent a key health benefit of the regulation. The final step of the process (step 7) involves assigning dollar values to these health benefits. Two primary factors limit EPA's ability to estimate the benefits of air toxics regulations. First, EPA lacks adequate information on the extent to which incremental reductions in exposure affect an average person's chance of developing adverse health effects. The limited information on these “dose-response” relationships represents the greatest challenge for the agency in conducting a benefits assessment for the air toxics program, according to a senior EPA official responsible for the retrospective and prospective analyses. A senior EPA official responsible for risk analysis drew a distinction between the type of data needed for a risk assessment, which often involves extrapolation from studies involving laboratory animals, and the type of data that economists need for a benefits assessment, which generally requires studies of human exposures. The official said that EPA currently has sufficient toxicological data, primarily from animal studies, to assess risks from 133 of the 187 air toxics. However, the official said the agency has the type of dose-response data needed to estimate the economic benefits for only a handful of pollutants. Second, EPA lacks reliable information on the quantities of each pollutant emitted prior to the adoption of air toxics regulations or in the years after adopting the regulations. EPA has tracked emissions of air toxics since 1993 and prepares a National Emissions Inventory every 3 years. In 2006, EPA completed its most recent inventory, which has information on emissions in 2002.
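The final calculation in this process reduces to simple arithmetic once the case counts exist; the difficulty lies in producing them. The sketch below restates the hypothetical example above (1,000 baseline cases reduced to 500) and completes step 7 with an illustrative, assumed dollar value per avoided case; none of these numbers are EPA figures.

```python
# Sketch of the end of the benefits-estimation process described above.
# All values are hypothetical; producing the case counts (steps 1 to 6)
# requires the emissions, exposure, and dose-response data that EPA
# lacks for most air toxics.

cases_before_regulation = 1000  # estimated annual cancer cases without the rule
cases_after_regulation = 500    # estimated annual cancer cases with the rule

# Avoided cases: the difference between the two estimates (steps 1 to 6).
avoided_cases = cases_before_regulation - cases_after_regulation

# Step 7: assign a dollar value to each avoided case (assumed placeholder).
value_per_avoided_case = 6_000_000  # dollars per case, illustrative only

annual_monetized_benefit = avoided_cases * value_per_avoided_case
print(f"Avoided cases per year: {avoided_cases}")                  # 500
print(f"Monetized annual benefit: ${annual_monetized_benefit:,}")  # $3,000,000,000
```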
While the inventory represents the best available data on emissions of air toxics and is useful for identifying the relative contribution of emissions from different sources, a 2004 EPA Inspector General report identified shortcomings of the inventory that raise questions about its reliability and usefulness in measuring the effects of the air toxics program. For example, the report said that EPA cannot tell whether apparent reductions or increases in the inventory have resulted from changes in the way the agency estimated the inventory or from real reductions or increases in emissions. The report also cited problems with the limited involvement of state agencies in the development and validation of the inventory. Although the data in the emissions inventory are limited, EPA has used the emissions inventory and other available data to estimate human exposures to these pollutants. In 1999, EPA released its first National-Scale Air Toxics Assessment (NATA), which relied on data from the 1996 emissions inventory to estimate the potential health risks posed by air toxics in different geographic areas. EPA updated this analysis in 2006 using data from the 1999 emissions inventory. While NATA is a useful indicator of potential health risks from air toxics at a given point in time, it is not useful as a measure of the agency’s effectiveness in implementing the air toxics program because, according to EPA, the agency revised the number of stationary sources and pollutants included in its analysis. For example, the analysis based on the 1996 emissions inventory assessed risks from 33 pollutants, while the most recent analysis included 177 pollutants. As a result, EPA believes it is not meaningful to compare the results of the two assessments. Overall, the limited information on health outcomes associated with changes in exposure to air toxics hinders EPA’s ability to quantify or monetize the economic benefits resulting from the air toxics program. In turn, this limits EPA’s ability to develop monetized estimates of the program’s net benefits or cost effectiveness. Such information would be useful not only for better understanding the economic effects of the air toxics program, but also for comparing the cost effectiveness of different air quality programs, which would help prioritize funding in addressing human health problems caused by air pollution. This information would also help EPA prioritize its remaining obligations under the 1990 amendments. In May 2002, EPA’s Office of Research and Development (ORD) released a draft air toxics research strategy that discussed the agency’s plans for improving information on dose-response relationships. In addition, ORD issued an air toxics plan in April 2003 that identified the shortcomings of existing dose-response data and plans for improving this information. In reviewing these documents, the agency’s Science Advisory Board identified several concerns, including poor linkages across the two documents, inadequate research funding, and the need for a better research prioritization scheme. Without sufficient information to conduct a comprehensive cost-benefit analysis, EPA measures the effectiveness of its air toxics program based on estimated data from its emissions inventory. Specifically, EPA measures the changes in aggregate emissions (measured in tons per year) of all air toxics by comparing estimates from the most recent emissions inventory with the 1993 baseline inventory. 
While EPA estimates that emissions decreased by about 35 percent between 1993 and 2002, the data quality problems discussed above limit the usefulness of these estimates in measuring the program's effectiveness. Two other problems also limit the usefulness of the emissions data as a performance measure. First, because pollutants differ substantially in their toxicity—small quantities of some chemicals pose greater risks than large quantities of less harmful chemicals—measuring changes in the total tons of all air toxics emitted does not necessarily provide a strong indication of the program's effectiveness in addressing health risks. The EPA Inspector General report discussed above recommended developing performance measures that address progress toward reductions in human exposure and health risk. Such measures would provide a better indication of changes in risks from air toxics. In the justification for its proposed fiscal year 2007 budget, EPA said that it was developing a “toxicity-weighted” emissions measure for the program. Second, EPA's practice of measuring the air toxics program's performance using estimated aggregate emissions data may not accurately measure the effect that the program has had on changes in emissions. The current performance measure attributes all changes in emissions to the federal air toxics program, but emissions may change for reasons unrelated to the agency's regulations. Some decreases in emissions may reflect cases where state and local air pollution control agencies have issued rules to control emissions that go beyond the federal regulations. As discussed in the next section of this report, some states set more stringent standards than EPA. On the other hand, a senior EPA official responsible for the economic analysis of air pollution regulations said that the agency may actually underestimate the program's effect. The official said that because of economic growth and related increases in industrial production over time, emissions would far exceed the current levels without the existing EPA air toxics regulations. We also evaluated two other potential indicators of the program's effectiveness—data on levels of air toxics in the ambient air and information on the degree of compliance with clean air regulations—to determine their usefulness as performance indicators. While both could eventually serve as useful performance indicators, the available data are currently limited and inconclusive. Regarding data on ambient levels of air toxics, EPA has a monitoring network that includes 22 locations nationwide. The monitors generally track ambient levels of six priority air toxics that EPA believes pose a concern in all geographic areas of the United States. A 2005 EPA Inspector General report found shortcomings of the monitoring network, including limited monitoring in areas with the highest estimated cancer risks from air toxics as well as inconsistencies in the operation of the monitors. In responding to the report, EPA said that the Inspector General's concerns generally aligned with the agency's monitoring improvement efforts. It is currently unclear whether the existing monitoring data are representative or reliable indicators of the program's effectiveness. Nonetheless, ambient monitoring is a valuable component of the air toxics program and could eventually serve as a useful performance measure.
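The toxicity-weighted measure EPA said it was developing would address the first problem by scaling each pollutant's tonnage by its relative potency before aggregating. The sketch below illustrates the idea with invented pollutants and weights; EPA had not published its weighting scheme at the time of our review, so the mechanics shown are an assumption.

```python
# Illustration of a toxicity-weighted emissions measure. Pollutants and
# weights are invented; a real measure would draw potency values from
# peer-reviewed toxicity data.

emissions_tons = {"pollutant_a": 10_000, "pollutant_b": 50}
toxicity_weight = {"pollutant_a": 1.0, "pollutant_b": 500.0}  # relative potency

raw_total = sum(emissions_tons.values())
weighted_total = sum(
    tons * toxicity_weight[name] for name, tons in emissions_tons.items()
)

print(f"Raw total: {raw_total:,} tons")                   # 10,050 tons
print(f"Toxicity-weighted total: {weighted_total:,.0f}")  # 35,000
# The low-tonnage but highly toxic pollutant_b contributes 25,000 of the
# 35,000 weighted units while accounting for under 1 percent of raw tons,
# which is why raw tonnage alone is a weak indicator of health risk.
```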
It is also important to note that, while not part of the national monitoring network, a number of state and local agencies conduct their own air toxics monitoring. Finally, we reviewed available information on the degree of compliance with air toxics standards identified through evaluations of regulated facilities conducted by federal and state enforcement officials. As shown in table 11, inspectors have found most facilities in compliance with air toxics standards, with some degree of noncompliance at about one-quarter of all facilities. Compliance rates for these facilities may not represent the degree of compliance at all facilities because enforcement officials do not visit each facility every year and often target facilities where they suspect noncompliance. EPA enforcement officials said they do not currently have comprehensive data explaining the magnitude of the noncompliance in cases where inspectors found violations. For example, noncompliance could range from record-keeping problems to more serious violations, such as exceeding an emissions standard. Furthermore, it is important to note that, while EPA has completed issuing all of the MACT standards, 16 standards have compliance dates after June 2006. Thus, information on compliance with these standards will not become available until after that time. While the available enforcement data are limited, EPA has identified cases of significant noncompliance with air toxics regulations. Specifically, the agency has initiated a nationwide air toxics enforcement strategy to identify and correct noncompliance and achieve emissions reductions in targeted industry sectors. According to EPA, in 2005, the agency took enforcement actions against facilities that failed to comply with targeted MACT standards, which resulted in air toxics reductions of more than 160 tons and fines exceeding $600,000 (2005 dollars). Furthermore, an official in EPA's Office of Enforcement and Compliance Assurance said that the agency achieved about 190 additional tons of air toxics reductions in 2005 through enforcement actions that were not associated with the national air toxics enforcement strategy. The state programs we reviewed in California, New Jersey, Oregon, and Wisconsin, and the local program we reviewed in Louisville, Kentucky, go beyond the federal program and employ practices that might help EPA enhance the effectiveness of its program. First, these programs address some public health risks that have not been addressed by the federal program. EPA could potentially strengthen its program by assessing and considering what states perceive as the primary gaps in the federal program. Second, the programs generally prioritize air toxics activities based on their risk reduction potential, which could serve as an example for EPA in prioritizing its remaining obligations under the act. Third, some of the programs conduct comprehensive risk assessments to identify the risk posed by all emissions from a facility, while EPA's residual risk program assesses risk in a more piecemeal and limited fashion. Fourth, several of the programs employ systematic approaches to identify and prioritize chemicals for addition to their lists of regulated air toxics, whereas EPA does not have such a process. Finally, the agencies stressed the importance of reliable data on emissions and chemical toxicity, and several programs have processes to better ensure the accuracy of emissions data submitted by regulated facilities. (See app.
III for information on the key features of the state and local programs we reviewed.) The five programs we reviewed address some public health risks that EPA's program does not. For example, the programs regulate smaller sources than EPA, set more stringent technology standards to control emissions, and include some large stationary sources that EPA does not address. In Wisconsin, any facility that emits one of 535 air toxics in amounts that exceed certain thresholds may be subject to the state's air toxics program. In some cases, annual emissions of less than 1 pound per year from a facility could trigger the state rule, depending upon the toxicity of the chemical. Wisconsin officials said that they use lower thresholds than the Clean Air Act's 10- or 25-ton thresholds because even small emissions of very toxic chemicals can present risks to the public. Similarly, New Jersey officials said that their state program addresses smaller facilities than EPA because most of the numerous chemical facilities in the state are not subject to MACT standards since they do not emit air toxics at levels that exceed federal thresholds. In contrast, in accordance with the Clean Air Act, MACT standards for major sources and the corresponding residual risk reviews apply to facilities in 158 industries with emissions of 10 tons or more of a single air toxic or 25 tons or more of a mixture of the 187 federal air toxics. In terms of the stringency of the technology standards used to limit emissions of air toxics, California and New Jersey officials said that the technology standards in their states were often more stringent than EPA's MACT standards. For example, California officials said that petroleum refineries in their state use more stringent control technologies, and they noted that EPA chose not to include these technologies as part of its survey of controls already in use when it developed the MACT standard for this industry. Regarding the types of facilities that are regulated by EPA, some state officials expressed concern that EPA did not develop MACT standards for some major stationary sources of air toxics in their states. For example, Oregon officials said that they asked EPA to issue MACT standards for several categories of sources, including ceiling tile manufacturing and titanium smelting, if EPA found that they were major sources of air toxics. Oregon officials expressed concern with EPA's apparent lack of response to their request because these significant emitters of air toxics in Oregon do not fall into one of the 158 major source categories that EPA identified and regulates. Further, the State and Territorial Air Pollution Program Administrators and the Association of Local Air Pollution Control Officials (STAPPA/ALAPCO) have compiled a list of over 40 major emission source categories of air toxics that were not regulated by EPA MACT standards. While the five programs we reviewed would generally address such sources, similar sources would be unregulated in the states whose programs are based entirely on the federal program. Importantly, in a number of cases, state law limits the ability of state and local programs to go beyond federal requirements. For example, in 2002, STAPPA/ALAPCO found that 26 states from every region in the country have precluded their state air pollution control agencies from imposing clean air requirements beyond those established by EPA.
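To make the contrast above concrete: the federal major-source test is a fixed tonnage rule, while Wisconsin-style thresholds vary with each chemical's toxicity. The sketch below applies both tests to a hypothetical facility; the state threshold values are invented for illustration.

```python
# The Clean Air Act major-source test described above: 10+ tons/year of
# any single air toxic, or 25+ tons/year of any combination of air toxics.
def is_federal_major_source(annual_tons: dict[str, float]) -> bool:
    return max(annual_tons.values()) >= 10 or sum(annual_tons.values()) >= 25

# A facility emitting modest amounts of several very toxic chemicals:
facility = {"chemical_x": 4.0, "chemical_y": 3.0, "chemical_z": 2.0}
print(is_federal_major_source(facility))  # False -> not subject to MACT standards

# Wisconsin-style rule: each chemical has its own toxicity-dependent
# threshold (values invented), so the same facility can still be covered.
state_threshold_tons = {"chemical_x": 0.0005, "chemical_y": 1.0, "chemical_z": 5.0}
triggered = [c for c, tons in facility.items() if tons >= state_threshold_tons[c]]
print(triggered)  # ['chemical_x', 'chemical_y'] -> subject to the state program
```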
The approaches some state and local agencies use to develop their air toxics programs differ from EPA's approach in that they direct resources to the areas of highest risk, whereas, given the Clean Air Act's prescribed schedule, EPA has primarily focused on regulating emissions from certain large stationary sources. In contrast, several state and local programs generally rely on monitoring (the measurement of air toxics in the ambient air) and modeling (estimating toxics in the air using computer models) to identify chemicals, geographic areas, or facilities of concern and develop measures to address these risks. The Oregon and Louisville, Kentucky, programs illustrate the use of risk-based prioritization. Oregon's air toxics program seeks to identify geographic areas of high risk through modeling and monitoring and to then concentrate resources on those areas. While not yet fully implemented, the program plans to conduct statewide modeling using its emissions inventory to identify areas of potential concern and then conduct monitoring to delineate geographic areas of high risk. According to program staff, the geographic approach is an efficient way to address risk because it is targeted and focuses on the greatest risks. Because public health risks from air toxics may vary depending on proximity to emissions sources and other factors, the practice of identifying areas of high risk and taking steps to address these risks shows promise as part of an overall risk reduction strategy. Similarly, Louisville created a program to address high health risks near an industrial complex and in the surrounding community that were identified through monitoring of pollutants in the ambient air. According to program officials, toxic emissions from a section of Louisville called “Rubbertown”—home to a complex of chemical facilities and other manufacturers—have been the subject of public concern since the 1940s. From 2000 through 2001, program officials worked with the University of Louisville, EPA, and other stakeholders to monitor the ambient air near Rubbertown and the surrounding community to assess the extent of the problem. A risk assessment based on the monitoring data determined that 18 chemicals posed an unacceptable risk to the public. Consequently, Louisville officials designed the program to target large emitters of these 18 chemicals before targeting smaller emitters of air toxics. In addition to some states' focus on identifying geographic areas or chemicals of concern, the state and local programs we reviewed use monitoring and modeling data to focus their efforts on specific facilities that pose risks to the public. For example, California requires certain large and small sources of air toxics to conduct facilitywide risk assessments using a standardized risk-screening model. If the modeling results show that risks exceed certain thresholds, the facility must conduct a more comprehensive risk assessment. This process allows California's state and local agencies to identify and focus on the sources that pose high risks to the public. In addition, Louisville and Wisconsin require certain sources to conduct facilitywide risk assessments as part of the permitting process. In contrast, several state and local officials said that EPA's program has not focused on the greatest risks.
While EPA may have been driven by certain deadlines in the act, some state and local officials said that the agency has chosen to focus on certain large stationary sources even though EPA's data suggest that emissions from small stationary sources and mobile sources may pose greater risks. Further, EPA is currently developing a rule that would exempt MACT-regulated facilities from regulation under its residual risk program if, on the basis of risk assessments, the facilities demonstrate that the cumulative risks from all of their toxic emissions do not exceed certain thresholds. According to EPA, this strategy could achieve voluntary risk reductions from facilities that would not be required to reduce risks under the current residual risk program and will provide high-quality, site-specific emissions data for use in future assessments and emission reduction strategies. While this approach has the potential to ease the regulatory burden on low-risk facilities, EPA may have opportunities to apply its limited resources to approaches that have greater potential to reduce risks. Several state and local programs we reviewed generally evaluate the emissions from all of the emissions points within a facility in a single risk assessment in order to assess the health risks associated with the entire facility. In contrast, EPA's residual risk assessments—aimed at identifying and mitigating any remaining health risk from emissions sources subject to MACT standards—have evaluated risk from only a portion of each facility's emission points. Specifically, to date, EPA has limited the scope of its residual risk determinations to emissions points within facilities that must comply with the MACT standards at issue, although other emissions points may also emit air toxics. As a result, according to several state and local officials, some facilities with a high impact on public health may avoid additional control requirements because EPA's focus on limited portions of facilities may underestimate the risk posed by whole facilities. Figure 3 illustrates a facility emitting air toxics from four emission points. Of the four emission points within the facility, points 1 and 2 are each covered by different MACT standards and, therefore, are subject to separate residual risk assessments. Emission points 3 and 4 emit air toxics but are not subject to MACT standards because emissions from these two points do not exceed the MACT thresholds. The programs we reviewed in California, Wisconsin, and Louisville would generally evaluate the emissions from all of the emissions points in this facility in a single risk assessment. In contrast, EPA's approach to date would be to conduct a residual risk assessment for emission point 1 that would consider the exposure and human health risk attributable to emissions from that emission point, and generally would not consider the emissions from point 2, which falls under a different MACT standard, or the emissions from points 3 and 4. According to EPA, it is not entirely precluded from considering emissions from additional emissions points not covered by the MACT standards at issue, but the agency, to date, has not exercised this discretion in a final rule. Several state and local stakeholders said that they were concerned that EPA's risk assessments may show a lower level of risk to the public than if the agency considered emissions from all of the emission points at the facility.
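The difference between the two scopes of review can be sketched directly from figure 3. The per-point risk values below are hypothetical, and simple summation across points is a simplification used here only to show how the scope of the assessment changes the answer.

```python
# Schematic of the two risk-aggregation scopes illustrated in figure 3
# (per-point excess cancer risk values are hypothetical).

point_risk = {1: 4e-6, 2: 3e-6, 3: 2e-6, 4: 2e-6}

# EPA's approach to date: assess only the emission point covered by the
# MACT standard under residual risk review (point 1 in figure 3).
points_under_review = {1}
per_standard_risk = sum(point_risk[p] for p in points_under_review)

# State/local facilitywide approach: consider every emission point.
facilitywide_risk = sum(point_risk.values())

print(f"Per-standard scope: {per_standard_risk:.0e}")  # 4e-06
print(f"Facilitywide scope: {facilitywide_risk:.1e}")  # 1.1e-05
# The facilitywide view is nearly three times higher here, illustrating
# why officials worry that the narrower scope can understate public
# health risk at the facility as a whole.
```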
They said that EPA’s residual risk approach may exclude some facilities with a high impact on public health from more stringent control requirements. Several officials said it would make more sense, from a public health perspective, to consider the impact from all sources at the facility at once, as some states do, rather than review each emission point individually. Along these lines, several EPA officials said that evaluating all of the emissions from a facility simultaneously would enhance the efficiency of the program and better protect public health. Several of the state programs we reviewed use systematic approaches to identify and prioritize chemicals for addition to their air toxics lists. In contrast, EPA has not acted on the requirement to periodically review and revise the list of regulated federal air toxics. For example, California officials work with the state’s public health agency to determine if a substance qualifies as a state air toxic. This process includes assessing (1) the potential for human exposure to a substance, (2) the chemical’s cancer-causing potential, (3) any noncancer effects such as irritation of the lungs or nausea, and (4) the impact on children’s health, among other factors. A panel of scientific experts reviews the work for accuracy, followed by the formal development of a regulation, including a public hearing. Similarly, Oregon works with a committee composed of toxicology, public health, and technical experts to periodically identify air toxics for review and to develop health-based emission benchmarks. The committee prioritizes air toxics for review based on Oregon’s emission inventory, the pollutant’s toxicity or potency, the number of people at risk, and the impact on sensitive populations such as children, among other factors spelled out in state regulations. The systematic approaches of these programs could inform EPA’s efforts to meet the act’s requirement to review and update the federal list of regulated air toxics. Several of the state and local programs we reviewed require major and small stationary sources to submit standardized annual emissions reports and certify their accuracy. These programs, like EPA, rely on emissions inventory data to develop regulations and conduct risk assessments. For example, Wisconsin requires over 2,000 facilities to report emissions of 623 air toxics each year if the facility emits more than certain quantities of each pollutant. Facilities must certify the accuracy of their final submissions. The air toxics program in California similarly requires certain major and small stationary sources to report air toxics emissions of over 450 chemicals and to certify that the data are correct. New Jersey and Louisville have similar requirements for a smaller subset of air toxics and sources. In contrast to the programs that require sources to report and certify their emissions, EPA, to date, generally has not required emissions sources or state or local agencies to systematically report these data. Such data collection could enhance EPA’s analysis and decision making in future air toxics rule makings. However, it is not clear how states without air toxics emissions inventories would comply with a federal requirement or the extent to which the data collected from the states would be comparable. 
For example, in 2002, EPA solicited comments on a rule to require state and local agencies to submit standardized air toxics emissions inventory data, but the agency postponed consideration of the requirements, partly because of concerns raised by state and local agencies about the lack of detail in EPA's proposal. EPA officials also told us that they had concerns over whether there is adequate statutory authority to collect these data. Officials representing the state and local programs we spoke with expressed mixed opinions about a potential EPA requirement to submit standardized air toxics emissions inventories. For example, officials in all of the states we reviewed except California supported a federal requirement to report air toxics emissions because it would improve the consistency of the federal inventory and its usefulness to states in activities such as risk assessment modeling. In addition, some state officials said that a federal requirement would enable states that are prohibited from having their own programs to collect information on emissions of air toxics. However, several officials cautioned that some programs would have difficulty meeting such a requirement without additional funding. California officials said that EPA should focus on states that do not currently have an inventory. In addition, state and local officials said that EPA does not regularly update chemical toxicity values that describe the potency of different air toxics—key information for conducting risk assessments. These officials told us that their agencies generally do not have the resources to develop quantitative risk estimates for air toxics and must rely on other sources of data, such as EPA's Integrated Risk Information System (IRIS). According to several officials, the basic science necessary to develop air toxics regulations is lacking in many cases. For example, Oregon officials cited limited and out-of-date toxicity values for a number of common chemicals in the IRIS database. Officials from other programs expressed similar concerns and said that EPA needed to enhance its efforts to provide quantitative toxicity information and conduct studies of sufficient quality to make determinations about chemical toxicity. A 2004 report by the National Academies also identified the need for more timely updates to EPA's IRIS database. In addition, California officials pointed out that EPA does not have a cancer toxicity value for diesel particulate matter, so some states have developed a patchwork of different toxicity values. Further, state and local officials questioned EPA's use of a formaldehyde risk factor developed by an industry group instead of its peer-reviewed IRIS value when developing a recent MACT standard for plywood and composite wood products. Several officials were concerned that the deviation from IRIS would cause confusion about what toxicology data were most accurate for state and local requirements. EPA has made some progress in controlling emissions of air toxics, but its overall implementation of the air toxics program falls short of the agency's statutory obligations because of the limited progress in (1) addressing requirements to limit emissions from small stationary sources and mobile sources, (2) evaluating the residual health risks associated with existing emissions standards and setting additional standards as appropriate, and (3) reviewing and updating the list of regulated pollutants, as appropriate.
While EPA places a lower priority on air toxics than other programs that it believes have a greater potential to reduce adverse health effects from air pollution, more comprehensive information on the air toxics program’s costs and benefits would help the agency compare the cost effectiveness of its investments in various clean air programs. Key data issues affecting the agency’s ability to develop more comprehensive cost and benefit estimates include unreliable data on emissions and limited information on the extent to which changes in exposure to air toxics affect the incidence of adverse health effects. Until EPA supports efforts to address these data gaps that hinder its ability to evaluate the health risks of air toxics, the agency will not have assurance that its current priorities and programs necessarily target the areas of greatest opportunity for reducing health risks associated with air pollution. EPA still has a significant number of remaining requirements under the act, including (1) setting 54 emissions standards for small stationary sources; (2) conducting more than 90 reviews of the remaining health risks associated with emissions sources covered by its existing standards, and issuing additional standards as necessary; and (3) reviewing and updating, as appropriate, the list of regulated air toxics. Over the past 15 years, the air toxics program has not met its statutory deadlines, in part, because of its low priority relative to other programs and related funding constraints. Obtaining sufficient funding will continue to pose a challenge for EPA, especially in light of the nation’s current fiscal situation. We believe that developing an implementation plan that identifies the remaining tasks, data needed to estimate the benefits of reductions in exposure to air toxics, timelines, and required funding would improve the management of the program as well as its transparency and accountability to Congress and the public. In addition, EPA could examine state and local approaches to air toxics that may have the potential to more effectively address risks by focusing resources on sources, communities, and geographic areas that face the greatest risks. This would require EPA to evaluate opportunities to enhance its efforts to focus on the greatest risks to human health within the current legislative framework. 
To improve the management of EPA’s air toxics program and enhance its ability to reduce risks of cancer and other adverse health effects, we recommend that the EPA Administrator require the Assistant Administrator for Air and Radiation to develop an air toxics program improvement plan that incorporates the following five issues: provides a detailed schedule for completing its mandated air toxics activities and identifies the staffing and funding resources needed to meet the schedule and address the health risk assessment needs; prioritizes activities within the air toxics program, placing the highest priority on those actions that have the greatest potential to address health risks, to the extent permitted by the Clean Air Act; establishes a process and timelines for meeting the act’s requirements to periodically review and update the list of air toxics; outlines an approach and timelines for improving the agency’s ability to measure the program’s costs and benefits; and describes how the agency plans to improve its air toxics emissions inventory, including a discussion of the statutory authority for, and the merits of, requiring states and emissions sources to submit standardized emissions data. We provided EPA’s Office of Air and Radiation with a copy of this report for review and comment. In commenting on the report, the Acting Assistant Administrator for Air and Radiation said that EPA agrees in part with the conclusions and recommendations in the report. The agency did not identify specific aspects of our conclusions or recommendations with which it disagreed, but rather provided only clarifications to statements in the report regarding the availability of information on the costs and benefits of the agency’s efforts to control air toxics, the agency’s progress in completing certain air toxics requirements of the Clean Air Act, and on EPA’s management of the remaining requirements. EPA’s letter and our response to their clarifications are included as appendix IV. EPA also provided technical comments, which we have incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the EPA Administrator and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This appendix discusses the Environmental Protection Agency’s (EPA) response to the findings and recommendations of the National Academies’ (Academies) report on air quality management. The Academies prepared this report in response to a congressional request for an independent evaluation of the effectiveness and implementation of the Clean Air Act. The report examined the roles of science and technology in the implementation of the act and recommended ways to improve air quality management. One of the report’s key recommendations was for EPA to form a work group to evaluate the report and provide a detailed list of actions EPA could take to improve its implementation of clean air programs. 
The work group completed this review in December 2004 and provided the agency's Clean Air Act Advisory Committee with a list of 38 recommendations. EPA's Office of Policy Analysis and Review has taken the lead in responding to the recommendations and provided an initial response in April 2005, which was updated in November 2005. The response included information about ongoing and proposed activities to address the recommendations and estimated time frames for responding to each recommendation. The agency has prioritized the recommendations and developed a proposed schedule for completing its activities, with some actions already under way or completed and others not scheduled for completion until fiscal year 2008. Based on our review of available documents and discussions with EPA program managers, the agency has taken affirmative steps to respond to a number of the recommendations, and its proposed actions generally appear responsive to the Academies' findings. A comprehensive evaluation of EPA's response to the Academies' recommendations will not be possible until the agency has made further progress in implementing its proposed response actions. We were asked to assess (1) EPA's progress toward implementing the air toxics program and any implementation challenges the agency faces, (2) what available information indicates about the costs and benefits of EPA's efforts to control air toxics, and (3) the program design and management practices of state and local air toxics programs that could potentially help EPA enhance the effectiveness of the federal program. In addition, we were asked to assess EPA's progress in responding to recommendations pertaining to the air toxics program made by the National Academies in 2004. To respond to the first objective, we updated our previous analysis of the agency's progress in implementing program requirements. We reviewed the requirements of the Clean Air Act Amendments of 1990 and EPA's actions to respond to these requirements, including the number of regulations the agency promulgated and other requirements to issue reports and guidance. Specifically, we considered EPA's Maximum Achievable Control Technology (MACT), small stationary source, mobile source, residual risk, and technology review activities, and other activities in the act that were specifically related to air toxics. We also evaluated the timeliness of EPA's actions versus the schedule mandated by the act by comparing the dates specified in the act with the dates on which EPA published the rules in the Federal Register. We independently developed a list of actions required of EPA and worked with agency officials to refine and confirm the list we used. We made minor modifications to the list approved by EPA to account for the promulgation of residual risk and area source standards, to separately count area source standards issued in conjunction with MACT standards, and to delete source categories that were delisted. To determine the priority of the air toxics program relative to other air programs, and the priorities within the air toxics program, we met with senior air program officials and analyzed budget data submitted by EPA. Specifically, we compared the funding for EPA's air program as a whole with the funding for the air toxics program. To identify the implementation challenges EPA faces, we reviewed available studies by the National Academies, the Office of Management and Budget (OMB), and the EPA Inspector General.
We identified nine implementation challenges, such as the adequacy of program funding and the priority of the program relative to other air programs, and developed a structured interview in order to evaluate the magnitude of the challenges identified by these studies in the opinions of various stakeholders. We pretested the interview questions and revised them based on the pretest results. We designed the structured interview so that respondents could rate each implementation challenge on a scale from 0 (not a challenge at all) to 4 (a challenge to a very great extent). When conducting the interviews, we asked follow-up questions if the respondents rated the challenge as a 3 (a challenge to a large extent) or 4, such as what they thought could be done to address the challenge. We also provided a list of key definitions, an explanation of the rating system, and a description of each challenge to respondents prior to conducting each interview. We conducted structured interviews with a nonprobability sample of 22 officials, including 8 EPA, 5 industry, 4 environmental, and 5 state and local officials. Specifically, for EPA, we interviewed senior officials within the Office of Air Quality Planning and Standards and the Office of Transportation and Air Quality. We identified national-level environmental and industry stakeholders through consultation with EPA (and referrals from contacts identified through this consultation) and membership on the Clean Air Act Advisory Committee. The five industry groups we interviewed were the American Forest & Paper Association, the American Petroleum Institute, the Council of Industrial Boiler Owners, the American Chemistry Council, and the Alliance of Automobile Manufacturers. The four environmental groups we interviewed were the Natural Resources Defense Council, Environmental Defense, Earth Justice, and an air toxics consultant recommended by environmental stakeholders and EPA. We interviewed officials from state and local programs in California, New Jersey, Oregon, Wisconsin, and Louisville, Kentucky. Following the structured interviews, we determined the most significant challenges for all of the stakeholders by averaging the ratings from all 22 respondents for each challenge. However, because ratings of the most significant challenges differed for each stakeholder group, we also averaged the scores for each challenge for each stakeholder group. We then identified the greatest challenges cited by each stakeholder group (an average rating of 3 or higher, that is, challenges rated to a large or very great extent) to assess how perceptions of the challenges differed among the stakeholder groups.
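The rating analysis is straightforward to express as a computation: average each challenge's ratings within a stakeholder group, then flag averages of 3 or higher. The sketch below uses hypothetical ratings, not our actual interview data.

```python
# Sketch of the structured-interview analysis described above. Ratings
# run from 0 (not a challenge at all) to 4 (a challenge to a very great
# extent); the values below are hypothetical.
from statistics import mean

ratings = {  # stakeholder group -> challenge -> respondent ratings
    "EPA":      {"program funding": [4, 3, 4], "data availability": [3, 4, 3]},
    "industry": {"program funding": [2, 3],    "data availability": [4, 3]},
}

SIGNIFICANT = 3  # average of 3+ = challenge "to a large or very great extent"

for group, challenges in ratings.items():
    for challenge, scores in challenges.items():
        avg = mean(scores)
        if avg >= SIGNIFICANT:
            print(f"{group}: '{challenge}' rated significant (average {avg:.1f})")
```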
Regarding data on health and risks, we met with EPA staff responsible for risk assessment and the development of the National-Scale Air Toxics Assessment. We also reviewed EPA's methodology for developing the assessment and available information on the risk assessment process. We obtained compliance data from the Office of Enforcement and Compliance Assurance's Air Facility Subsystem. We reviewed these data for obvious completeness and consistency problems, reviewed available documentation, and interviewed the system administrator. Unless otherwise noted, we determined the data were sufficiently reliable for the purposes of this report. To respond to the third objective, we reviewed a nonprobability sample of air toxics programs from California, New Jersey, Oregon, and Wisconsin and from Louisville, Kentucky, to identify innovative program designs or management practices. We focused on programs that (1) went beyond federal standards, (2) were identified by EPA and other stakeholders as innovative programs, (3) used strategies to address air toxics that differed from those used by EPA, and (4) represented a range of geographic locations and experience addressing air toxics. Specifically, we asked EPA and the State and Territorial Air Pollution Program Administrators and the Association of Local Air Pollution Control Officials (STAPPA/ALAPCO), the stakeholders most knowledgeable about state and local air toxics programs, whether there were specific programs we should review, and used their recommendations as selection criteria. We conducted independent research to confirm that the selections cited by these stakeholders were reasonable, including analyses of the stringency of state and local air toxics programs based on current law, policy, and guidance documents and summary documents developed by EPA and state and local agencies. We visited each program selected for review and conducted semistructured interviews with state and local officials. We developed an interview protocol and revised it after limited testing with respondents. The semistructured interview included questions about how the programs interact with EPA; how they view EPA's current and future requirements; and how they regulate different chemicals and sources, account for risk, collect emissions inventory data, and measure progress, among other factors. We focused primarily on practices that EPA might find useful in addressing its program implementation challenges and did not evaluate the effectiveness of the state and local programs we reviewed. Our discussion of the practices employed by these programs should not be construed as an endorsement of any particular approach but rather as an acknowledgement that alternative strategies exist. In addition, we obtained information about EPA's response to the recommendations of the National Academies' 2004 report entitled Air Quality Management in the United States. We reviewed the recommendations in the report, the associated recommendations of the Clean Air Act Advisory Committee, and EPA's actions to respond to these recommendations. We worked with EPA officials to determine whether EPA's actions addressed the recommendations. Our work included an assessment of data reliability and internal controls. We conducted our work from June 2005 to June 2006 in accordance with generally accepted government auditing standards.
This appendix provides general information on the nonprobability sample of four state and one local air toxics programs we reviewed to identify innovative program designs or management practices. We focused on programs that (1) went beyond federal standards, (2) were identified as innovative by EPA and other stakeholders, (3) used strategies to address air toxics that differ from EPA's, and (4) represented a range of geographic locations and experience addressing air toxics. Table 12 presents basic information about the programs we reviewed, followed by profiles of each program. California's air toxics program regulates certain new and existing major stationary sources, small stationary sources, and mobile sources more stringently than EPA. In 1983, the state legislature adopted Assembly Bill 1807, the Toxic Air Contaminant Identification and Control Act, which defined a process for identifying chemicals that qualify as state air toxics and developing control standards to reduce emissions from certain sources based on the application of pollution control technology. California has listed 245 toxic air contaminants as of May 2006. The state regulates diesel particulate matter emissions from motor vehicles, such as school buses, under its program. In 1987, the state legislature passed an additional law, Assembly Bill 2588, the Air Toxics “Hot Spots” Information and Assessment Act, which required the submission of air toxics emissions inventory data from certain facilities and notification of local residents of significant risk from nearby sources of air toxics. Under this act, certain sources of air toxics must conduct risk assessments to determine their health impact on the community. In conducting these risk assessments, regulated facilities must consider the risks posed by their emissions of 451 different chemicals. In 1992, the legislature passed an amendment to the “hot spots” law that required facilities that pose a significant health risk to the community to develop risk management plans. Policy documents and other information are available at the program's Web site, http://www.arb.ca.gov/toxics/toxics.htm. New Jersey's air toxics program regulates certain large and small stationary sources more stringently than EPA through the state's permitting program. The New Jersey Air Pollution Control Act of 1954 requires new or modified sources that emit air pollutants, including air toxics, to incorporate state-of-the-art air pollution controls to reduce their emissions. In 1979, the New Jersey Department of Environmental Protection (DEP) adopted a regulation that specifically addressed air toxics emissions. This rule listed 11 air toxics and required sources emitting these chemicals to register with DEP and demonstrate that they utilize state-of-the-art controls to limit their emissions. The department incorporates control requirements for other air toxics on a case-by-case basis as part of the permitting process. In the early 1980s, DEP instituted a risk assessment policy to better ensure that sources with state-of-the-art controls protect public health. The risk assessment policy requires certain facilities seeking permits to estimate the risk to the community that remains after the application of technology standards and to take additional measures as necessary to meet health-based targets established for 237 air toxics.
General information about New Jersey’s air toxics program is available at http://www.state.nj.us/dep/airmon/airtoxics/, and policy documents, such as risk assessment policies are available at http://www.state.nj.us/dep/aqpp/risk.html. Oregon’s air toxics program is authorized to go beyond federal requirements for some large and small stationary sources. In November 1998, the Oregon Department of Environmental Quality (DEQ) convened a broad-based stakeholder group to outline a program to complement the existing federal program and reduce the impact of air toxics in Oregon. DEQ worked with stakeholders until the adoption of Oregon’s air toxics rule on October 9, 2003. The rule requires sources in specific geographic areas of high risk to develop, with other stakeholders, a risk reduction plan to meet certain health based goals. In addition, some stationary sources may be required to estimate and mitigate the risk they pose to the public and apply control technologies. The program is still being developed and has not been fully implemented. Policy and guidance documents and other information are available at the program’s Web site http://www.deq.state.or.us/aq/hap/index.htm. Wisconsin’s air toxics program regulates certain new and existing stationary sources more stringently than EPA. In 1983, the Wisconsin Department of Natural Resources (DNR) formed a group of scientists, industry, environmental, and government stakeholders in response to public concern about the health effects of air toxics and the lack of policy and regulations at the federal level. The group recommended an approach for a state air toxics rule in 1985, and DNR developed a rule that became effective in 1988. This original rule was rewritten and redeveloped from 2000 through 2004 using an advisory committee process that included government, industry, and environmental stakeholders. The final rule became effective in July 2004. The rule lists 535 air toxics and requires certain facilities that emit specific amounts of cancer-causing air toxics to apply control technology to reduce emissions. In addition, certain facilities that emit other air toxics beyond specific thresholds must estimate the risks posed by these chemicals and meet health-based standards. Guidance documents and other information are available on the program’s Web site, http://www.dnr.state.wi.us/org/aw/air/health/airtoxics/. In September 2004, the Louisville Metro Air Pollution Control District prepared a draft Strategic Toxic Air Reduction (STAR) program in response to air monitoring that documented and modeled data that suggested that air toxics posed significant risks to the community. Adopted by the Louisville Metro Air Pollution Control Board in June 2005, the STAR program requires certain facilities to estimate the risk posed by their air toxics emissions and to reduce the risk, potentially through the application of control technologies, to meet certain health-based goals. Louisville’s program first focuses on emissions of 18 air toxics that posed unacceptable risk to the public based on monitoring studies. In total, the STAR program applies to new or modified processes and process equipment that will emit any of 191 air toxics, and existing sources that emitted any of 37 air toxics in quantities that exceed certain thresholds. Policy documents and other information are available on the program’s Web site, http://www.apcd.org/star/. 1. 
Regarding our discussion of the economic effects of air toxics regulations, EPA stated that the agency finds it appropriate to focus risk assessments and benefits analysis on the air toxics that pose the most significant risks within the context of the residual risk program. EPA’s letter also stated that such an approach would assist the rulemaking process to a greater extent than comprehensive assessments of the total benefits and costs of all air toxics controls. While EPA may hold this view, the Clean Air Act requires the agency not only to assess residual risks after completing the MACT standards, but also to periodically assess the costs and benefits of clean air programs. Regarding the first set of requirements, EPA was late in issuing almost all of the MACT standards and is already well behind schedule in completing the residual risk assessments. With respect to the second set of requirements, EPA’s economic assessments of clean air programs have included limited information on the costs of regulating air toxics and have not included monetized estimates of the human health or other benefits—either for individual pollutants or for all of the pollutants in total. More complete information on costs and benefits would help the agency, Congress, and the public understand the effects of the air toxics program and enable the agency to compare the net benefits of the air toxics program with those achieved under other clean air programs on which the agency has placed a higher priority. 2. In its letter, EPA stated that GAO uses an inappropriately narrow measure of progress in regulating air toxics and that the agency has issued a number of regulations that control air toxics as a side benefit. However, as we discuss in the report, data limitations compromise the usefulness of other performance measures. EPA has indeed taken regulatory actions outside of the air toxics program that control toxic emissions as a side benefit. However, the progress—in terms of emissions reductions—that EPA cites should be considered in the context of the limitations of the emissions data discussed in this report. For example, the EPA Inspector General has reported that EPA cannot tell whether apparent reductions or increases in emissions have resulted from changes in the way the agency estimates emissions or from actual reductions. It is also important to note that EPA does not expect some of the emissions reductions cited in its letter to occur until 2020. Furthermore, EPA’s most recent data on risks from air toxics identify benzene—a known carcinogen emitted primarily by mobile sources—as a national risk driver that accounts for 25 percent of the cancer risks posed by air toxics across the nation. This suggests that EPA has substantial opportunities to further address air toxics risks from mobile sources. Finally, the Clean Air Act mandated specific actions and timelines for evaluating and regulating toxic emissions from mobile sources. As discussed in this report, the agency has missed its deadlines for completing these actions but has proposed a mobile source air toxics rule that it intends to finalize in 2007. 3. In response to our finding that EPA lacks a strategy for managing its implementation of the remaining air toxics requirements, the agency’s letter stated that the Clean Air Act provides a road map for air toxics and that EPA developed an integrated air toxics strategy in 1999. 
EPA also stated that the agency is developing a strategy to respond to its court-ordered deadlines for completing certain air toxics requirements. As discussed in the report, EPA has missed most of the act’s deadlines related to air toxics and has not fully implemented the actions outlined in its integrated strategy. Additionally, EPA’s discussion of its efforts to meet court-ordered deadlines underscores the need for more proactive management. In addition to the contact named above, Christine Fishkin (Assistant Director), Jennifer Dougherty, Cindy Gilbert, Tim Guinane, Michael Hix, Andrew Huddleston, Karen Keegan, Alison O’Neill, Judy Pagano, Melissa Saddler, and Joseph Thompson made significant contributions to this report.
The Environmental Protection Agency's (EPA) most recent data indicate that 95 percent of all Americans face an increased likelihood of developing cancer as a result of breathing air toxics--pollutants such as benzene and asbestos that may cause cancer or other serious health problems. Sources of air toxics include large industrial facilities, smaller facilities such as dry cleaners, and cars and trucks. The 1990 Clean Air Act Amendments required EPA to regulate 190 pollutants from these sources through a multifaceted regulatory program. While EPA issues federal standards, state and local agencies generally administer these standards, and some develop their own rules to complement the federal standards. In this context, GAO was asked to assess (1) EPA's progress and challenges in implementing the air toxics program, (2) available information on the program's costs and benefits, and (3) practices of state and local air toxics programs. While EPA has made some progress in implementing its air toxics program mandated by the 1990 Clean Air Act Amendments, most of its regulatory actions were completed late and major aspects of the program have still not been addressed. Most of EPA's progress relates to issuing emissions standards for large stationary sources, although EPA completed these standards about 4 years behind schedule. However, many of the unmet requirements pertain to limiting emissions from small stationary and mobile sources, which collectively account for most emissions of air toxics. The agency faces continuing implementation challenges stemming from the program's low priority relative to other programs and related funding constraints. Moreover, the agency lacks a comprehensive strategy for completing the unmet requirements or estimates of resources necessary to do so. Senior EPA officials said the program's agenda is largely set by external stakeholders who file litigation when the agency misses deadlines. As a result of EPA's limited progress, the agency has not addressed health risks from air toxics to the extent or in the time frames envisioned in the Clean Air Act. Senior EPA officials said that issuing standards for large stationary sources had addressed the greatest risks from air toxics and that other clean air programs also control air toxics as a side benefit. However, EPA does not have reliable data on the degree of risk reduction achieved through its regulations. Furthermore, the data that are available suggest that the agency has substantial opportunities to reduce emissions from mobile and small stationary sources. Available information on EPA's efforts to control air toxics is not sufficiently comprehensive to measure the program's total costs and benefits. Specifically, EPA has not comprehensively estimated the national economic costs of all air toxics standards and lacks the data necessary to assess the benefits of these standards, such as decreased incidence of cancer. Information on these impacts would help the agency assess the overall net benefits (total benefits minus total costs) of the air toxics program and compare these effects with those generated by higher-priority clean air programs, such as those intended to address smog. Data on other indicators of the program's effectiveness—such as changes in emissions, concentrations of air toxics in the ambient (outdoor) air, and compliance with air toxics standards—are also limited and inconclusive. 
The state and local programs we reviewed use practices that could help EPA enhance the effectiveness of its air toxics program. For example, several state programs have systematic approaches for identifying and prioritizing new pollutants that could inform EPA's efforts to meet the act's requirement to review and update the list of regulated pollutants.
As depicted in figure 1 below, the services maintain highly trained EOD personnel responsible for eliminating explosive hazards in support of a range of events, from major combat operations and contingency operations overseas to range clearance to protecting designated persons, such as the President of the United States. The services’ EOD forces are dispersed worldwide to meet combatant commanders’ requirements. Units may be deployed together or organized into smaller teams, as missions require. EOD technicians generally work in two- or three-person teams to identify and disarm ordnance. To meet increased demands for EOD personnel, the services increased the size of their EOD forces. Based on available data, we determined that the services’ EOD forces grew from about 3,600 personnel in 2002 to about 6,200 in 2012—an increase of about 72 percent. The services anticipate that, even as forces withdraw from recent operations, the need for EOD forces will continue, so they intend to maintain their larger size. However, the respective services’ abilities to identify and track spending on EOD activities vary, so DOD does not have complete information on EOD spending. The House Armed Services Committee directed DOD to establish a consolidated budget justification display fully identifying the services’ baseline EOD budgets, but DOD has not done so. Without complete EOD spending information, the services and DOD may have difficulty in justifying the future EOD force structure and in informing future funding plans and trade-offs among competing priorities. Over the past decade of military operations, the services all took actions to increase their EOD capabilities. As figure 3 shows, the Army more than doubled its EOD forces, with the largest increases occurring after fiscal year 2006. The Marine Corps and the Navy increased their EOD forces by approximately 77 percent and 20 percent, respectively. The Air Force increased its EOD forces by approximately 36 percent. The services anticipate maintaining EOD personnel numbers at the 2012 level at least through the next 5 years, as is also shown in figure 3, although final decisions on EOD force size and structure will depend on future DOD budgets. According to DOD EOD officials, the time required to train qualified EOD personnel is lengthy; therefore, EOD is not a capability that can be built up quickly. Meeting the demands for EOD forces in combat operations has negatively affected EOD units’ personnel and their ability to train for other missions. Each of the services met the combatant commanders’ high demand for EOD personnel by deploying EOD units and assuming risks as other EOD missions and training activities were left unfulfilled. For example, according to service officials, the services assumed risk in mission areas such as countering sea mines, clearing unexploded ordnance on training ranges, and providing defense support to civil authorities. EOD personnel who participated in our group discussions said they experienced multiple deployments and limited time at home, maintaining a pace that was exacerbated by time spent away from home in training and in support of the U.S. Secret Service and Department of State in the protection of important officials. (Appendix II summarizes, in greater detail, the operational and personnel issues raised by EOD personnel who participated in group discussions we held.) 
As the services begin turning their focus away from training for deployments to Afghanistan to counter IEDs, officials from across the services and DOD noted that the services will need to retain the current EOD force size so that they can expand training for their core missions and prepare for future requirements. For example, according to Navy officials, Navy EOD personnel will re-emphasize skills—such as diving—that are needed for their core missions. EOD forces will also be assigned to traditional missions to fill gaps in capabilities that came about when EOD forces were deployed to Afghanistan and Iraq. For example, according to DOD officials, EOD personnel will be available to combatant commanders for humanitarian demining, irregular warfare, and building international partner capacity activities. Also, according to Army officials, Army EOD forces will be available to respond more quickly to incidents involving unexploded military ordnance found in local communities. In addition, EOD forces will continue to provide support to the Very Important Persons Protection Support Activity’s mission of ensuring the safety of federal officials, such as the President, as they travel. In fiscal year 2012, the services provided more than 473,000 hours of support to this activity; as a presidential election year, 2012 required more hours than the annual average of about 300,000 provided from fiscal years 2007 through 2011. DOD officials expect EOD capabilities to continue to be in demand by combatant commanders for the foreseeable future. For example, officials believe that the IED threat is likely to persist given the low cost of IEDs and their accessibility to non-state adversaries. In addition, based on the primary missions highlighted in DOD’s current strategic guidance, the services anticipate continued requirements for EOD capabilities. For example, EOD capabilities are expected to be needed for several missions, including (1) countering terrorism and irregular warfare; (2) countering anti-access/area denial measures, including mining; (3) countering weapons of mass destruction; and (4) providing support to civil authorities. The mission of countering anti-access/area denial measures, in particular, will require Navy EOD forces to train to counter anti-access measures that use sea mines and to clear explosive obstacles in sea lines of communication. Also, Air Force EOD forces will be expected to train to support recovery operations to keep air bases and runways clear of unexploded ordnance. All the services’ EOD forces will need to continue to train for homeland missions of providing support to civil authorities. In addition, EOD forces’ ability to conduct humanitarian demining activities can support combatant commanders’ efforts to help build relationships with other countries, according to DOD officials. The mission of the Joint IED Defeat Organization is to lead, advocate, and coordinate all DOD actions in support of the combatant commanders’ and their respective joint task forces’ efforts to defeat IEDs as weapons of strategic influence. A primary role for the organization is to provide funding and assistance to rapidly develop, acquire, and field counter-IED solutions. The Army regulation on EOD assigns responsibility for monitoring funding for the Army EOD program to the Deputy Chief of Staff for Operations, Plans, and Training (G-3/5/7). 
Officials in that office, however, told us they do not have access to complete funding information because funding for EOD activities is spread across multiple programs, functions, or organizations. Officials who oversee the EOD program told us they would like to have funding data to assist in managing and prioritizing the Army’s EOD operations, but they currently have no plans to collect it. Marine Corps EOD program officials could readily provide us with procurement funding information, but not comprehensive information on funding from other accounts, such as military personnel and operation and maintenance. Officials in the office of the Navy EOD program resource sponsor could provide funding information because the Navy has a dedicated EOD program element code and tracks funding for the EOD capability separately on a continuous basis to manage its own capability. Officials in the Air Force EOD program oversight organization could provide funding information on operating, maintaining, and procuring equipment and other items for the Air Force EOD force, which the organization compiles and uses to manage the Air Force EOD program. However, it could not readily provide information on military personnel because, according to officials, personnel are accounted for across more than 30 program elements. Starting in fiscal year 2013, the Air Force began to use a dedicated EOD program element code to enable better identification of current EOD spending and to provide justification for the EOD capability in future budget requests. The services anticipate maintaining their currently authorized EOD personnel levels. However, planning for the future EOD capability may be hampered by DOD’s lack of visibility into the current costs of EOD capabilities across the services. According to officials from each service, overseas contingency operations funding has been used to provide equipment and training to EOD forces for the past several years, such as the equipment shown in figure 4 below. In the future, these costs will have to be funded from regular appropriations and will have to compete with other service priorities. For example, the services received EOD robots and mine-resistant, ambush-protected vehicles through overseas contingency operations funding. Should EOD units need to continue to use this equipment in the future or to acquire similar equipment, maintaining and procuring it may have to be funded through regular appropriations. Overseas contingency operations funding also provided advanced homemade explosive, forensic, and medical training opportunities that EOD technicians in our discussion groups thought were valuable to their missions and their safety. Service officials expressed concerns to us about the adequacy of future funding for their EOD forces after overseas contingency operations funding is phased out, but the extent to which the services have identifiable funding plans for future EOD activities varied. The Navy and Air Force now have program element codes that enable service officials to identify and evaluate the appropriate level of spending on their EOD capabilities. However, the Army’s and Marine Corps’ lack of complete data on the costs of their current EOD forces negatively affects their efforts to develop viable funding plans for supporting their EOD capability into the future. 
Until the services have information on current spending as well as justification for their future funding needs, service and DOD leadership will be unable to effectively identify resource needs, weigh priorities, and assess budget trade-offs within anticipated declining resources. Moreover, the lack of visibility into current spending and future funding plans may impede DOD’s ability to provide Congress with information needed to facilitate its oversight. EOD forces have operated jointly in Iraq and Afghanistan to fulfill battlefield requirements, and the services have jointly developed guidance on tactics, techniques, and procedures for EOD forces, but DOD has not fully institutionalized the guidance through joint EOD doctrine in the form of a Joint Publication. According to DOD, the purpose of joint doctrine is to enhance the operational effectiveness of U.S. joint forces. It is written for those, such as the services, who prepare and train forces for carrying out joint operations. Joint doctrine facilitates planning for and execution of operations, and it establishes a link between what must be accomplished and the capabilities for doing so by providing information on how joint forces can achieve military strategic and operational objectives in support of national strategic objectives. According to service EOD officials, joint doctrine also provides standardized terminology. EOD personnel and officials told us, however, that they had encountered repeated challenges—such as a lack of planning for EOD capabilities as well as variations among the services’ procedures—during joint combat operations. A key reason for the services’ challenges is the absence of a consistent understanding of EOD operations, including expectations for how forces should plan operations and work together. The services are disadvantaged with respect to knowledge and use of EOD capabilities because DOD has not developed joint doctrine in the form of a Joint Publication. In 2001, DOD’s Air Land Sea Application Center published multi-service guidance outlining a set of EOD tactics, techniques, and procedures for employing EOD forces jointly in a range of military operations. The guidance, which was updated in 2005 and 2011, applies to leaders, planners, and EOD personnel and provides information that can help them understand each military service’s capabilities. The multi-service guidance, as updated, has been in place for more than a decade, but challenges in joint EOD combat operations have continued, as prior DOD studies on EOD capabilities have reported. One study described EOD forces’ efforts to work jointly as being “somewhat ad hoc” and noted that differences in culture, technique, and language among the military services caused challenges in working together. Another study reported on some combat unit commanders’ limited awareness of EOD capabilities during overseas combat operations. For example, in some instances Army combat unit commanders did not understand the differences between the capabilities of EOD personnel, who are trained to safely disarm ordnance, and the capabilities of Army combat engineers, some of whom have limited training on disposing of select unexploded ordnance. As a result, some combat engineers, who are not trained to safely disarm ordnance, destroyed ordnance caches, improperly handling some of the caches and creating more dangerous sites containing debris as well as potentially unexploded ordnance. 
EOD personnel we spoke with reported experiencing similar challenges during recent deployments in support of joint combat operations, and as the observations recorded below indicate, they attributed these challenges to a lack of understanding of EOD operations on the part of non-EOD forces. EOD personnel indicated the following: Commanders of combat units did not always take into account differences among the military services’ EOD forces. For example, Marine Corps EOD personnel commented that their EOD standard operating procedures include conducting dismounted patrols, while at one time the Army’s EOD personnel were not allowed to dismount from vehicles to conduct EOD operations. These Marine Corps personnel stated that Marine Corps commanders were unsure how to work with Army EOD forces supporting Marine units. Similarly, Air Force personnel said they were not trained to dismount to search for explosives, as Marine Corps commanders expected them to be. Non-EOD personnel in combat units did not always understand EOD protocols or capabilities. For example, Army EOD personnel cited an instance in which a non-EOD officer at a forward operating base picked up post-blast fragments of an improvised explosive device at a blast site, thus disturbing the site and contaminating potential forensic evidence. Other examples include commanders not securing sites where unexploded improvised explosive devices were found, and non-EOD personnel tampering with unexploded devices in an attempt to deactivate them before EOD personnel arrived. Requests for EOD support did not always take into account differences in how the various services’ EOD forces are organized. For example, an Army EOD Company contains approximately 40 people, while a Navy EOD Mobile Platoon has 8 people. According to Navy EOD personnel, battlefield commanders they supported sometimes received a smaller EOD force than was needed and expected, or conversely, sometimes received a larger force, which would require more logistical support than planned. We found that pertinent Navy and Air Force regulations only briefly mention joint EOD operations, and they refer to the multi-service guidance as a source for more detailed information. However, the Army’s and Marine Corps’ regulations do not discuss joint EOD operations in detail or refer to the multi-service guidance. A DOD study assessing the joint EOD capability reported that this multi-service guidance was not widely used by combatant commands. In addition, according to DOD officials, this multi-service guidance is not as authoritative to the services and combatant commands as a Joint Publication, and it is generally not used by the services to develop force requirements. Prior DOD studies have highlighted the need for joint doctrine on EOD operations and noted that current guidance is insufficient, but the Joint Staff has not published joint doctrine for EOD operations. We found that several DOD doctrinal joint publications refer to EOD activities, but most references are limited, recognizing the need for EOD but not providing additional guidance as to how such capabilities should apply to operations. EOD is mentioned briefly or discussed in greater detail in 30 unclassified or for-official-use-only doctrinal joint publications. For example, the role of EOD is briefly mentioned in joint doctrine about operations such as antiterrorism, foreign humanitarian assistance, and evacuating noncombatants. 
The EOD role is more fully discussed in joint doctrine about countering the improvised explosive device threat, joint engineer operations, and addressing obstacles—such as unexploded ordnance and sea mines—that could be encountered by joint forces in a range of military operations. According to a Chairman of the Joint Chiefs of Staff manual on joint doctrine development, part of the development philosophy for joint doctrine is that it continues to evolve as the United States Armed Forces adapt to meet national security challenges. However, none of these joint publications addresses the full range of EOD capabilities and potential activities. As previous assessments of DOD’s joint EOD capabilities have reported, the lack of joint EOD-specific doctrine limits EOD forces’ and planners’ ability to identify and mitigate capability gaps. According to the Chairman of the Joint Chiefs of Staff manual, joint doctrine projects must be formally sponsored by a service chief, a combatant commander, or a director of a Joint Staff directorate. A key reason joint EOD doctrine has not been developed is that no entity has been made accountable for following through on prior recommendations and sponsoring the development of joint EOD doctrine, including coordinating with stakeholders. Although several organizations have responsibilities for some EOD functions, no one entity has been designated as the focal point for joint doctrine or operational issues. For example, the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict, as the Office of the Secretary of Defense proponent for EOD, is charged with developing, coordinating, and overseeing the implementation of DOD policy for EOD technology and training, but that official is not involved with oversight of joint doctrine or operational issues. Similarly, the joint EOD Program Board, chaired by a flag officer designated by the Secretary of the Navy and comprising general officer representation from the other services, is generally focused on joint common EOD technology and training issues. One DOD EOD study, which according to a Joint Staff official was initiated by a former Vice Chairman of the Joint Chiefs of Staff, recommended among other things that the Joint Staff sponsor the development of joint EOD doctrine. However, during DOD’s review process that recommendation was sent back to the Joint Staff for reconsideration, where, according to a Joint Staff official, the matter was dropped and never presented to senior leaders. Having joint guidance, such as joint doctrine, could put combatant commanders in a better position to make decisions about using EOD forces in future operations. In addition, joint EOD doctrine could provide a basis for planning and identifying capability requirements for future operations. Having joint doctrine that is developed and approved by the Chairman of the Joint Chiefs of Staff as authoritative guidance would enhance DOD’s EOD forces’ ability to operate in an effective manner and better position the military services to identify capability gaps in meeting service, joint, and interagency requirements; to invest in priority needs; and to mitigate risks. As IEDs became a significant threat to U.S. forces in Iraq and Afghanistan, EOD emerged as a critical capability, and over the past decade, the services increased the size of their EOD forces. 
Growing the EOD capability takes time because of the highly technical training required and the additional experience needed to become proficient in handling dangerous unexploded ordnance. Looking toward the future, DOD and the services believe that broad demand for this capability will continue. Growth of the EOD forces until now has been funded in part by overseas contingency operations dollars, but these funds are likely to decrease as operations in Afghanistan diminish. A major challenge facing the EOD community, especially the Army and Marine Corps, is the lack of complete information clearly showing the resources it will take to sustain the larger force levels. Further, DOD does not have good visibility into service spending on EOD forces. Without comprehensive information on the costs of the services’ EOD forces, senior service and DOD leaders are not well positioned to justify the current EOD force structure or to ensure that funding goes to priorities in accordance with strategic guidance. In addition, the absence of comprehensive information limits DOD’s ability to respond to congressional requests for budget information and may continue to hamper Congress’ oversight of the health and viability of the EOD force. Although EOD forces from each of the services have deployed together to support recent ground operations, attention to EOD as a joint capability has been limited. Differences among the services’ procedures complicate joint force planning and operations, and there is little common understanding of the EOD capability outside of the EOD force. The lack of understanding of EOD capabilities among battlefield commanders has caused them challenges in providing the right capabilities to maximize the effectiveness of operations to protect U.S. forces and to collect information to defeat the networks of insurgents using IEDs against U.S. forces. A number of publications refer to EOD and its capabilities for specific functions, but none provides clear and complete guidance for integrating the activities of EOD forces with other combat activities and maximizing the capabilities that EOD forces provide. In addition, no entity has followed through on previous recommendations to sponsor and advocate for developing joint EOD doctrine. With joint doctrine that specifies the role of EOD in joint operations and provides a consistent lexicon for joint planning, EOD participation in joint operations could be more efficient and effective. In addition, with joint doctrine informing future requirements for the joint EOD capability, the services will have more complete information to inform their force structure planning and provide adequately trained and experienced forces to meet future requirements. We recommend that the Secretary of Defense take the following two actions: To improve the Army’s and Marine Corps’ ability to ensure adequate support of their EOD forces within expected budgets, direct the Secretaries of the Army and the Navy to collect data on costs associated with supporting their current EOD forces. To enhance the future employment of EOD forces in joint combat operations, direct the Chairman of the Joint Chiefs of Staff to develop joint EOD doctrine that would guide combatant commanders’ planning and clarify joint operational roles and responsibilities. We provided a draft of this report to the Secretary of Defense for comment. 
An official from the Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict provided oral comments on the draft indicating that DOD concurred with our report and both of our recommendations. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Chairman of the Joint Chiefs of Staff; the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict; the Commandant of the Marine Corps; and the Director of the Joint Improvised Explosive Device Defeat Organization. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The scope of our review of Explosive Ordnance Disposal (EOD) forces included the Office of the Secretary of Defense, the Joint Staff, and the military services, including select EOD units from each service, and other DOD organizations that used or affected EOD forces. We obtained relevant documentation and interviewed key officials from the following offices: the Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict; the Joint Staff, including the J34 – Deputy Directorate for Antiterrorism/Homeland Defense, the J5 – Strategic Plans and Policy Directorate, and the J8 – Force Structure, Resources, and Assessment Directorate; the Department of the Army, including the Office of the Deputy Chief of Staff (Operations, Plans, and Training) EOD & Render Safe Procedures Branch; the U.S. Army Ordnance Corps Explosive Ordnance Disposal Directorate, Fort Lee, Virginia; the 20th Support Command, Aberdeen Proving Ground, Maryland; the 52nd Ordnance Group (EOD), Fort Campbell, Kentucky; and the 49th, 723rd, and 788th EOD Companies, Fort Campbell, Kentucky; the Department of the Navy, including the Office of the Chief of Naval Operations (Navy Expeditionary Combat); Navy Expeditionary Combat Command, Joint Expeditionary Base Little Creek-Fort Story, Virginia; EOD Group One, Naval Amphibious Base Coronado, California; EOD Mobile Unit One, Naval Base Point Loma, California; EOD Mobile Unit Three, Naval Amphibious Base Coronado, California; EOD Mobile Unit Eleven, Imperial Beach, California; EOD Training and Evaluation Unit One, Naval Base Point Loma, California; EOD Expeditionary Support Unit One, Naval Amphibious Base Coronado, California; Mobile Diving and Salvage Unit One, Naval Amphibious Base Coronado, California; the Executive Manager of EOD Technology and Training, Washington, D.C.; the Naval EOD Technology Division, Naval Support Facility Indian Head, Maryland; the Naval School EOD, Eglin Air Force Base, Florida; and the Center for EOD and Diving, Naval Support Activity Panama City, Florida; the EOD Occupational Field Sponsor, Headquarters, U.S. Marine Corps; the 1st EOD Company, Camp Pendleton, California; the 3rd EOD Company, Okinawa, Japan; and EOD personnel supporting the 3rd Marine Aircraft Wing, Marine Corps Air Station Miramar, California; the Department of the Air Force, including the EOD Program Directorate; the Air Force Civil Engineer Support Agency, Tyndall Air Force Base, Florida; and the 96th Civil Engineering Squadron EOD Flight, Eglin Air Force Base, Florida; the Joint Improvised Explosive Device (IED) Defeat Organization, Arlington, Virginia; and U.S. 
Northern Command Joint Force Headquarters, National Capital Region, Joint EOD – Very Important Persons Protection Support Activity, Fort McNair, Washington, D.C. To determine the extent to which DOD and the services addressed increased demands for the EOD capability, we collected and analyzed data and descriptions from each of the four services’ EOD forces on their traditional military service EOD missions, including the total number of hours dedicated to support the Very Important Persons Protection Support Activity. After receiving the data showing the total number of hours dedicated to support this activity, we reviewed the data and interviewed the U.S. Northern Command official who provided them to assess the data’s reliability. Based on these actions, we determined that these data were sufficiently reliable to report the number of hours of support EOD personnel provided to this activity. We also requested data on the organizational structure of EOD forces from each of the services; the operational tempo of EOD units and the authorized numbers of EOD officers and enlisted personnel within each service from fiscal year 2002 through fiscal year 2012; as well as projected manpower needs through fiscal year 2017. After receiving the authorized EOD personnel data, as identified by the services, we interviewed the service officials who had provided the data and other subject matter experts to assess the reliability of the data. Based on our review of the personnel data provided and our interviews, we determined that the personnel data were sufficiently reliable to describe growth in numbers of EOD personnel. Additionally, we interviewed key cognizant DOD officials with responsibility for EOD activities across the department—including from the Office of the Secretary of Defense, each of the military services, and selected support commands and units—to gain their perspectives on the operational tempo of EOD forces, the use of joint EOD forces for recent EOD combat missions and related challenges, and expected future EOD mission requirements. To determine the extent to which DOD and the services have identified funding to resource EOD forces to meet anticipated future requirements, we collected and analyzed available EOD funding data from each of the services for fiscal years 2010 through 2012. We requested that the services provide EOD funding data from the base and overseas contingency operations budgets for specific funding accounts, including military personnel; operation and maintenance; procurement; and research, development, test, and evaluation. Also, we requested that the services identify funding, if any, received from the Joint IED Defeat Organization or other sources. The comprehensiveness of the data provided by each of the military services varied. After receiving the funding data, we interviewed the service officials who had provided them to assess the reliability of the data. Based on our review of the funding data provided and our interviews, we determined that the funding data were incomplete, potentially inaccurate, and not sufficiently reliable to establish a baseline level of DOD’s EOD spending. To determine the extent to which DOD has developed guidance for employing the EOD capability effectively in joint operations, we systematically analyzed 75 unclassified and for-official-use-only documents of existing joint DOD doctrine (Joint Publications) to identify the inclusion of EOD functions. 
One GAO analyst conducted this analysis, coding the information and entering it into a separate record, and another GAO analyst verified the information for accuracy. All disagreements were resolved by a third GAO analyst. The analysts then tallied the total number of joint DOD doctrinal documents in which EOD functions were included. In addition, we reviewed and analyzed prior DOD reports’ findings about doctrine and examined whether associated recommendations had been implemented. Specifically, we reviewed an EOD report from the Joint Staff and an EOD report analyzing transforming the joint EOD force. We also reviewed guidance on EOD activities from the services, including multi-service tactics, techniques, and procedures guidance. Additionally, we interviewed key EOD officials across DOD to ascertain the extent to which DOD has comprehensive joint EOD guidance and, if so, any potential benefits joint guidance has provided. Moreover, we discussed with DOD and service officials how EOD has been integrated jointly across DOD in areas such as joint operations and the joint training and equipping of EOD forces. Finally, we met with EOD leadership and personnel from selected EOD units in all four military services and conducted 28 group discussions with EOD-qualified team members, team leaders, senior enlisted personnel, and officers to obtain their perspectives on issues related to military service-specific EOD missions, joint combat operations, training, equipment, and operational tempo. We used these data to provide illustrative examples throughout this report. A detailed discussion of how we conducted those group discussions follows, and more details of the themes from the group discussions can be found in appendix II. The group discussions included EOD-qualified personnel from the following military units: 52nd Ordnance Group (EOD), Fort Campbell, Kentucky; 49th EOD Company, Fort Campbell, Kentucky; 723rd EOD Company, Fort Campbell, Kentucky; and 788th EOD Company, Fort Campbell, Kentucky; EOD Group One, Naval Amphibious Base Coronado, California; EOD Mobile Unit One, Naval Base Point Loma, California; EOD Mobile Unit Three, Naval Amphibious Base Coronado, California; and EOD Mobile Unit Eleven, Imperial Beach, California; 1st EOD Company, Camp Pendleton, California; 3rd EOD Company, Okinawa, Japan; and EOD Personnel Supporting the 3rd Marine Aircraft Wing, Marine Corps Air Station Miramar, California; and the 96th Civil Engineering Squadron EOD Flight, Eglin Air Force Base, Florida. We selected EOD units to visit based on information from the services identifying units that had recent deployment experience and could provide sufficient numbers of EOD-qualified personnel to participate in our group discussions. Our overall objective in using the group discussion approach was to obtain insight and perspectives from EOD personnel on training, equipment, operational tempo, joint military operations, and military service-specific responsibilities. Group discussions, which are similar in nature and intent to focus groups, are structured small-group sessions designed to obtain in-depth information about specific issues. The information obtained in this way cannot easily be gathered through a set of individual interviews. At each location, we requested that the military service provide up to 10 volunteers to participate in our group discussions. We also conducted group discussions separated by rank and position. 
Specifically, we conducted separate group discussions composed of officers, senior enlisted personnel, team leaders, and team members. At one location, two group discussions included both officers and senior enlisted personnel, and two other group discussions included all available EOD personnel assigned to those particular units. The number of participants per group discussion ranged from 2 to 12. Discussions were held in a semi-structured manner, led by a moderator who followed a standardized list of questions. The group discussions were documented by one or two analysts at each location. Group discussions were conducted between August 2012 and October 2012. We conducted 28 group discussions with EOD-qualified junior enlisted, noncommissioned officer, warrant officer, and commissioned officer personnel from the Army, Navy, Marine Corps, and Air Force. During each discussion, we asked participants to complete a voluntary questionnaire that provided us with supplemental information about each person’s EOD background, including: rank; EOD qualification level (Basic, Senior, or Master Badge level); number of deployments to Iraq or Afghanistan; support provided to or received from another military service; and support to the Very Important Persons Protection Support Activity. The information provided by participants helped ensure that we obtained a wide range of perspectives from qualified EOD personnel with a variety of EOD-related experiences. In total, we met with 188 EOD personnel. Table 1 below shows the composition of our various discussion groups. We performed a content analysis of our group discussion sessions in order to identify the themes that emerged during the sessions and to summarize participant statements regarding EOD experiences and perceptions. Specifically, at the conclusion of all our group discussion sessions, we reviewed responses from the discussion groups and created a list of themes. We then reviewed the comments from each of the 28 group discussions and assigned comments to the appropriate themes. One GAO analyst conducted this analysis and a different GAO analyst checked the information for accuracy. Any discrepancies in the assignment of the comments to themes were resolved through discussion by the analysts. The information gathered during our group discussions with EOD personnel represents the responses of only the EOD enlisted and officer personnel present in our 28 group discussions and is not projectable to other EOD personnel. We conducted this performance audit from May 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In group discussions, we asked EOD personnel for their perspectives on what is working well and what needs improvement with regard to military service-specific missions, joint combat operations, training, equipment, and operational tempo. We analyzed participants’ responses to identify the most common themes, which are summarized below. EOD Organizational Alignment and Career Path: Some participants from each military service raised concerns about where EOD is aligned within their respective military service. 
Some also noted issues with career paths for EOD-qualified officers. For example, Army, Marine Corps, and Air Force personnel expressed concerns about EOD alignment under the Ordnance Corps (Army) or Engineers (Marine Corps and Air Force). They perceived these alignments as hampering EOD forces’ influence over resourcing and operations. Additionally, Army officers reported that officer career paths are of concern because the EOD specialty is a technical area that differs from others in their organization. For example, the Army EOD officer career path is within the Logistics Branch of the Army, and all EOD officers are expected to have logistics skills; however, an Army officer noted that learning about managing fuel farms has nothing to do with EOD. Conversely, Navy participants viewed the officer career path as generally positive and noted that enabling an officer to stay within EOD for his or her entire career was beneficial to the EOD force. Training: Participants in the majority of the group discussions said they felt positive about the training they received, although almost all participants reported that they wanted more training or more time for training. Some group discussion participants reported concerns about not being able to train with the same types of equipment, such as robots, that they would use when deployed. Both Army and Marine Corps personnel expressed the desire for more training in homemade explosives and casualty care. Additionally, Army and Navy personnel reported that they needed additional access to training ranges. Army participants at Fort Campbell noted that access to training ranges is difficult because there are not enough training ranges at their installation for all EOD companies. Likewise, Navy personnel in the San Diego area noted that they have issues accessing training ranges, particularly for exercises involving demolition. Frequency of deployments: Participants in our discussions expressed mixed opinions regarding operational tempo. Some EOD personnel felt that the high pace of deployments and other missions put stress on EOD personnel and their families. For example, some personnel felt that EOD team leaders are getting burned out because they are always away from home. In addition, some personnel said that to cover missions clearing training ranges they had to turn down annual leave or cancel doctors’ appointments because of staffing shortages. Also, some personnel noted that during their 12 months home between overseas deployments they are often away from their families because of the need to attend training or sometimes to travel in support of the Very Important Persons Protection Support Activity. Some personnel liked a high operational tempo and voiced concerns about having too much down time as operational tempo slows. Support to Very Important Persons Protection Support Activity: Participants in many of our group discussions noted that the time they spent in support of the Very Important Persons Protection Support Activity exacerbated their high operational tempo, and some raised the concern that these missions are not an effective use of EOD skills. EOD personnel reported that these missions had a negative effect on EOD personnel and their families by taking them away from home too often. Further, they noted that to support these missions they missed training for overseas missions. 
Moreover, some participants said that the Very Important Persons Protection Support Activity missions were not a good use of the EOD skill set or did not use EOD to its full potential because EOD personnel are asked only to identify potentially explosive hazards but are not allowed to disarm anything that is found. Funding for Training EOD Units: Some participants in our group discussions reported concerns about the adequacy of funding for EOD training. For example, some Army EOD personnel said that they sometimes had to buy materials needed for training aids, such as electrical tape and electronic parts. Moreover, Army and Marine Corps personnel reported that specialized training, such as post-blast analysis or homemade explosive courses, is expensive, which limits the number of people who can take the training. In particular, Marine Corps personnel expressed concerns that money for training might be scaled back in the future. Incentive and Special Duty Assignment Pays: Participants in our discussion groups noted that incentive pay, special duty assignment pay, and retention bonuses are important to EOD personnel and that the availability of such pays is a factor in retaining EOD personnel, but that the availability of these incentives varied across the military services. Some participants from the Marine Corps raised issues regarding incentive pay, special duty assignment pay, and retention bonuses. Marine Corps personnel stated that, unlike the other military services, the Marine Corps does not provide any additional pays or bonuses for EOD, and they felt that not receiving the types of pay available in the other services was an issue of equity. In addition to the contact named above, Margaret G. Morgan, Assistant Director; Tarik Carter; Simon Hirschfeld; Shvetal Khanna; James Krustapentus; James E. Lloyd III; Michael Shaughnessy; Michael Silver; Amie Steele; Tristan T. To; Cheryl Weissman; and Sam Wilson made key contributions to this report.
DOD has relied heavily on the critical skills and capabilities of EOD forces to counter the threat from improvised explosive devices on battlefields in Iraq and Afghanistan. The House Armed Services Committee directed DOD to submit a report on EOD force structure planning and directed GAO to review DOD's force structure plan. DOD's report provided little detail. GAO examined to what extent (1) DOD and the services have addressed increased demands for the EOD capability and identified funding to meet future requirements; and (2) DOD has developed guidance for employing the EOD capability effectively in joint operations. GAO evaluated DOD's report and EOD guidance; analyzed data on EOD missions, personnel, and funding; and interviewed DOD and service officials to gain perspectives from EOD personnel and managers. Explosive Ordnance Disposal (EOD) forces grew over the past 10 years to meet wartime and other needs, but the Department of Defense (DOD) does not have the data needed to develop a funding strategy to support future EOD force plans. To meet increased demands for EOD personnel, the services increased their EOD forces from about 3,600 personnel in 2002 to about 6,200 in 2012. Anticipating that the need for EOD will continue as forces withdraw from ongoing operations, the services intend to maintain their larger size. The Navy and Air Force have data on the baseline costs for some or all of their EOD activities, but the Army and Marine Corps do not have complete data on spending for EOD activities. Therefore, DOD does not have complete data on service spending on EOD activities needed to determine the costs of its current EOD capability and to provide a basis for future joint planning. Until all the services have complete information on spending, service and DOD leadership will be unable to effectively identify resource needs, weigh priorities, and assess budget trade-offs. EOD forces from all four services have worked together in Iraq and Afghanistan and the services have developed guidance on tactics and procedures for EOD forces, but challenges persist because DOD has not institutionalized joint EOD doctrine through a joint publication. Joint doctrine facilitates planning for operations and establishes a link between what must be accomplished and the capabilities for doing so. DOD studies have noted commanders' limited awareness of EOD capabilities during combat operations, and EOD personnel reported challenges they attributed to non-EOD forces' lack of understanding of EOD operations. Several DOD organizations have responsibilities for some EOD functions, but no entity has been designated as the focal point for joint EOD doctrine. Joint doctrine could help leaders identify EOD capability requirements and better position combatant commanders in their use of EOD forces in future operations. Joint doctrine that is developed and approved as authoritative guidance would enhance the EOD forces' ability to operate in an effective manner, and would better position the services to identify capability gaps in meeting service, joint, and interagency requirements; to invest in priority needs; and to mitigate risks. 
To better enable DOD to plan for funding EOD mission requirements and enhance future use of EOD forces in joint combat operations, GAO recommends that DOD direct (1) the Secretaries of the Army and the Navy to collect data on current Army and Marine Corps EOD funding, and (2) the Chairman of the Joint Chiefs of Staff to develop joint EOD doctrine that would guide combatant commanders' planning and clarify joint operational roles and responsibilities. In oral comments on a draft of this report, DOD concurred with the recommendations.
Under the NCLBA, the Secretary of Education had the authority to waive many statutory and regulatory requirements for states, school districts, and other entities that received funds under a program authorized by the law, provided that certain conditions were met. In September 2011, Education introduced the Flexibility initiative and invited states to request a waiver for flexibility from certain NCLBA requirements in effect at the time. For example, Education offered to waive requirements related to the timeline for determining whether states, districts, and schools were making adequate yearly progress toward improved academic achievement for all students, including specified subgroups. (See app. II for a full list of NCLBA provisions that could be waived under the Flexibility initiative.) To be approved for a Flexibility waiver, Education required states to address certain principles for improving elementary and secondary education, as seen in table 1. Education’s Student Achievement and School Accountability Office was responsible for administering the Flexibility initiative until October 2014. At that time, the Student Achievement and School Accountability Office became part of the newly created Office of State Support, which assumed responsibility for administering the initiative. As part of Education’s process for reviewing and approving states’ requests for waivers under the Flexibility initiative, Education invited states to submit their requests in one of several “windows” between 2011 and 2014. Almost every state applied for a waiver during one of these windows. Generally, Education approved states to implement their waiver requests for a certain number of years. As of April 2016, Education had approved requests for Flexibility waivers in 43 states. In November 2014, Education invited states that had received approval for Flexibility waivers for the 2014-2015 school year to submit a request to renew their waivers for an additional 3 years, or through the end of the 2017-2018 school year. As shown in figure 1, Education established a process in which states requested Flexibility waivers, states’ requests were peer-reviewed, and Education made final decisions. According to Education officials, the review and decision process focused on whether states’ requests were consistent with Flexibility principles. Education convened peer review panels to evaluate states’ initial Flexibility waiver requests and suggest ways to strengthen a state’s plan for implementing the principles of the Flexibility initiative. For example, peer reviewers in some cases suggested strengthening plans to ensure that students from racial and ethnic subgroups were sufficiently included in school accountability systems. Ultimately, Education used the results of peer review and the department’s internal analysis to inform its final decision on whether to approve states’ Flexibility waiver requests. After completing the initial review and decision process, Education conducted a monitoring process to oversee Flexibility waiver implementation and identify any areas in which states needed additional support. The first part of the monitoring process (referred to as “Part A” monitoring) was designed to provide Education with a more in-depth understanding of a state’s goals and approach to implementing its Flexibility waiver and to ensure that the state had the critical elements in place to begin implementing its plan. 
The second part of the monitoring process (referred to as “Part B” monitoring) was designed to enable Education to review state implementation of the plan and follow up on the initial monitoring. By establishing a process to review, approve, and monitor states’ Flexibility waivers, Education identified challenges to states’ ability to fully implement their waivers, as shown in table 2. Recognizing that Flexibility waivers affected multiple significant aspects of state and local educational systems, Education took steps that enhanced its ability to identify implementation risks. For example, officials in Education’s Office of State Support (the office responsible for the initiative) told us they coordinated with other Education offices to identify findings or concerns regarding how states were implementing other education programs that might affect a state’s waiver implementation. Education’s efforts to identify implementation risks were consistent with standards for internal control in the federal government, which define risk assessment as identifying and analyzing relevant risks associated with achieving program objectives. Agency management is to comprehensively identify risks and consider any effects they might have on the agency’s ability to accomplish its mission for all projects, such as the Flexibility initiative. Education asked states to include information in their initial Flexibility waiver requests about how they consulted with teachers and their representatives and other stakeholders, such as parents and organizations representing students with disabilities and English learners, in developing their waiver proposals. However, officials we interviewed in two states, as well as an official from the National Conference of State Legislatures, discussed issues related to stakeholder consultation regarding their Flexibility waiver requests, especially consultation with state legislatures. For example, Washington state was unable to implement a teacher and principal evaluation and support system that included student learning growth as a significant factor. A state official told us they attempted to design a system that would meet the needs of various stakeholders, including teachers, but ultimately the system was not implemented because, according to Education, the state legislature did not approve the changes needed to put the system in place. In addition, Arizona officials said state laws and rules from the state board of education limited their ability to implement an accountability system that was consistent with their Flexibility waiver request. Under the ESSA, states will be required to develop their Title I state plans with timely and meaningful consultation with the governor and members of the state legislature and state board of education, among others. Of the 43 states with Flexibility waivers, we identified 12 states that faced multiple significant challenges throughout the initiative, affecting their ability to fully implement their waivers: Alabama, Arizona, Florida, Louisiana, Massachusetts, Nevada, New Hampshire, Ohio, Oklahoma, Pennsylvania, South Dakota, and Texas.
As shown in table 3, these 12 states had at least two of the following designations:

- Education included conditions when approving the state’s initial Flexibility waiver;
- Education found during monitoring that the state was not implementing an element of its Flexibility waiver consistent with its approved request;
- Education found during monitoring that the state was not meeting Education’s expectations for establishing systems and processes—particularly for monitoring schools and school districts—that supported waiver implementation; or
- Education included conditions when renewing the state’s Flexibility waiver.

Some of these states were unable to fully address the challenges Education identified when their waiver was initially approved. For example, Education identified risks related to Pennsylvania’s capacity to monitor interventions in “focus schools” prior to approving the state’s Flexibility waiver and subsequently found during Part B monitoring (nearly 2 years later) that the state lacked a plan to conduct such monitoring. Pennsylvania officials told us that, according to Education, these weaknesses resulted from the state not documenting how interventions in focus schools were consistent with its plan to improve student achievement in these schools. To help manage these challenges, Education included conditions when approving and renewing these states’ Flexibility waivers and provided technical assistance. During Part B monitoring, Education found that most of these states were not meeting Education’s expectations for establishing monitoring systems and processes that support implementation of their Flexibility waivers. Many of these states were particularly challenged to develop systems for overseeing local school districts and schools. Specifically, during Part B monitoring, Education found that 8 of the 12 states we identified as facing multiple challenges did not meet expectations regarding systems for monitoring local implementation of their Flexibility waivers (see table 4). According to Education officials, many states were not implementing monitoring activities consistent with their approved Flexibility waivers and the key Flexibility principles. For example, Education found that Alabama did not have a formal monitoring mechanism to ensure its interventions in priority schools, focus schools, and other Title I schools met the requirements of the Flexibility initiative; and New Hampshire did not monitor its districts’ adoption and implementation of college- and career-ready standards. During the initial Flexibility waiver review and decision process and Part B monitoring, Education asked states about their plans to monitor local implementation and asked for documentation, such as monitoring schedules or reports. Education officials told us that state monitoring of local implementation is a persistent challenge across many education programs, and said that the possible reasons states continue to experience challenges related to monitoring include staff capacity and staff turnover at state departments of education. Education officials told us they could help states strengthen their monitoring efforts by disseminating best practices but have not yet done so because of time and resource constraints. Education did not establish specific time frames for providing final Part B monitoring reports to states. According to our analysis of Education’s documentation, it took over 4 months, on average, to provide states with final Part B monitoring reports; for 10 states, it was over 6 months.
Education officials told us that many factors affected the time frames for finalizing its monitoring reports, such as the complexity of the approaches being used by a state to implement its Flexibility waiver, the need to balance this work with other high-priority work being done by department staff, and the U.S. government shutdown in October 2013. Recognizing that the length of time Education takes to notify a state about monitoring findings affects how long it will take a state to address any implementation risks the department identified, Education officials told us they provided draft reports to states earlier in the process and gave them an opportunity to provide technical edits to the draft reports. Although 12 states faced multiple challenges throughout the Flexibility waiver initiative, Education has not yet evaluated its process for reviewing, approving, and overseeing Flexibility waivers. Education officials told us they intend to identify lessons from the Flexibility initiative, particularly with regard to technical assistance for and oversight of state monitoring efforts, and that such lessons learned would help them better support states with developing and implementing state plans for ESSA implementation. For example, Education officials told us they plan to determine how they can improve their use of the peer review process for ESSA state plans. However, Education has not yet evaluated its oversight of Flexibility waivers and did not provide us with a time frame for doing so. According to standards for internal control in the federal government, agencies should consider lessons learned when planning agency activity, as doing so can help an agency communicate acquired knowledge more effectively and ensure that beneficial information is factored into planning, work processes, and activities. As Education begins its efforts to implement the ESSA, it has the opportunity to learn from its experiences with the Flexibility initiative. Without identifying lessons from oversight of the waiver process, Education may miss opportunities to better support ESSA implementation. The Flexibility waiver initiative affected multiple, complicated aspects of state and local systems for elementary and secondary education and, thus, was a significant undertaking by the department. In implementing its Flexibility initiative, Education identified many key challenges states faced in implementing their waiver requests, such as incomplete systems for school accountability or teacher and principal evaluation. We found that 12 of the 43 states with Flexibility waivers faced significant challenges in addressing risks identified throughout the initiative, affecting their ability to fully implement their waivers. These challenges included ensuring states were effectively monitoring their districts and schools, which is a key aspect of program effectiveness and an area where the department has identified oversight issues across programs. The waivers granted under Education’s Flexibility initiative will terminate on August 1, 2016, and states are preparing to develop and implement new Title I plans under the newly reauthorized law, the ESSA. Education continues to develop its oversight and technical assistance strategies for implementing the ESSA, which includes different requirements related to school accountability, among other things.
Absent an evaluation of its oversight process for the Flexibility initiative to identify lessons learned, Education may miss an opportunity to strengthen its monitoring and oversight of states’ implementation of plans under ESSA and better support them in the areas that have presented significant challenges. To better manage any challenges states may face implementing the ESSA, we recommend that the Secretary of Education direct the Office of State Support to evaluate its oversight process in light of the challenges states encountered in implementing the Flexibility initiative to identify lessons learned and, as appropriate, incorporate any lessons into plans for overseeing the ESSA, particularly around issues such as the design and implementation of states’ monitoring systems. We provided a draft of this report to Education for its review and comment. Education’s written comments are reproduced in appendix III. Education also provided technical comments, which we incorporated into the report, as appropriate. In its written comments, Education agreed that it is important to continuously evaluate its work and to consider ways to improve its efficiency and effectiveness and cited examples of the agency doing so during ESEA Flexibility implementation. For example, Education said it developed the Office of State Support, in part based on lessons learned while implementing the Flexibility initiative. In addition, Education said that since the ESSA was enacted in December 2015, it has continued to informally evaluate ESEA Flexibility implementation and oversight and cited several examples relevant to ESEA Flexibility and other Education programs and initiatives. For example, Education said that it has been considering changes to its planned performance review system designed to support state implementation of the Flexibility initiative and other programs. Further, the agency provided new information in its letter, telling us that it is piloting quarterly calls between Education program officers and states and piloting a fiscal review in eight states focused on components of the law it says did not change significantly between NCLBA and ESSA. As Education continues its efforts to evaluate lessons learned from the Flexibility initiative—including the peer review process—and apply them to its oversight of ESSA, we encourage Education to incorporate these lessons into how it oversees the design and implementation of states’ monitoring systems, which are key to the success of ESSA’s accountability provisions. We believe that by doing so, Education will be better positioned to support states as they implement the law’s new requirements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
In prior work, we interviewed selected states regarding the benefits and challenges of requesting and implementing waivers granted through the Department of Education’s (Education) Flexibility initiative under the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the No Child Left Behind Act of 2001 (NCLBA). To develop this information, we collected information from 20 states by conducting interviews with 15 states that had waivers and 5 states that did not have waivers at the time of our review. We presented this information orally to congressional requesters in August 2015. The following summarizes that briefing.

- Officials from 6 states told us the waivers allowed districts to better identify the lowest-performing schools and better target their resources.
- Officials from 10 states told us the waivers helped them develop a single school accountability system or align their existing federal and state school accountability systems to help streamline data collection and reporting.
- Officials from 14 states told us that implementing teacher and principal evaluation systems was a challenging aspect of their waivers, in some cases due to lack of stakeholder support for needed legislative or collective bargaining changes, or difficulty in meeting Education’s requirements for incorporating student growth into teacher and principal evaluation systems.
- Officials from 9 states expressed concerns that Education’s time frames to implement waiver requirements were too rigid and accelerated for such large-scale reforms. Officials from 5 of these states told us that timelines for implementing teacher and principal evaluation systems were especially challenging.
- Officials in 4 states without waivers told us they did not have waivers because they could not come to agreement with Education about key aspects of requirements for accountability or teacher and principal evaluation systems.
- Officials in 3 states told us Education staff were responsive to day-to-day emails and phone calls; officials in 3 other states told us Education was slow to provide more substantive oversight, such as formal monitoring.
- Officials in 8 states told us that, because of staff turnover at Education during the waiver initiative, there was often an incomplete transfer of information from one staff person to the next, which required state officials to explain previous discussions or decisions, frustrating states and wasting time.

Appendix II: No Child Left Behind Act of 2001 (NCLBA) Provisions Waived Through the Flexibility Initiative

The ESEA was reauthorized by the Every Student Succeeds Act (ESSA), enacted December 10, 2015. As a result, the provisions referenced in this table do not reflect current law. Under the ESSA, all approved Flexibility waivers will terminate on August 1, 2016; other changes made by ESSA will be phased in over time. After the initiative began, Education determined one provision was unnecessary and did not include it for states requesting to renew their waivers. States could request certain optional flexibilities when renewing their Flexibility waivers.

In addition to the contact named above, Scott Spicer (Assistant Director), Jason Palmer (Analyst-in-Charge), Sarah Cornetto, Brian Egger, Jean McSween, Linda Siegel, and Carmen Yeung made key contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Holly Dye, Nisha Hazra, John Lack, Avani Locke, and David Perkins.
Beginning in 2011, Education used its statutory authority to invite states to apply for waivers from certain provisions in the ESEA through its Flexibility initiative. To receive Flexibility waivers, states had to agree to meet other requirements related to college- and career-ready expectations, school accountability and support, and effective instruction. Education approved Flexibility waivers for 43 states. In December 2015, Congress reauthorized the ESEA, which modified Education's waiver authority. GAO was asked to review Education's Flexibility initiative. GAO examined the extent to which Education assessed states' ability to fully implement their Flexibility waivers and the process it used to oversee the waivers. GAO reviewed relevant federal laws, guidance, and key documents related to the Flexibility initiative, such as monitoring reports; and interviewed Education officials. GAO reviewed Education's documents and identified states facing multiple challenges in implementing their waivers. GAO also interviewed officials in five states, selected to reflect a range of challenges states faced in implementing the waivers. Since introducing its Flexibility initiative in 2011—inviting states to request a waiver from certain provisions of the Elementary and Secondary Education Act of 1965 (ESEA) in effect at the time—the Department of Education (Education) has monitored states' efforts and identified challenges to states' ability to fully implement their waivers. According to GAO's analysis of Education letters and monitoring reports, 12 of the 43 states with Flexibility waivers faced multiple challenges that affected their ability to fully implement their waivers. Education used a risk assessment process to document these challenges throughout the waiver approval, monitoring, and renewal phases (see table). For example, Education identified risks with one state's capacity to oversee and monitor schools needing improvement prior to approving the state's waiver in 2013 and noted similar issues, as a result of monitoring, in 2015. Overseeing local districts and schools was particularly challenging for states, according to GAO's analysis of Education documents. Meanwhile, Education has not yet evaluated its process to review, approve, and monitor the Flexibility waivers given to states or incorporated any relevant lessons learned into its plans for implementing the December 2015 reauthorization of the ESEA. According to federal internal control standards, agencies should consider lessons learned when planning agency activities. As Education begins to implement the new law, it has an opportunity to learn from its experiences with the Flexibility initiative and incorporate any applicable lessons learned. Absent such an evaluation, Education may miss opportunities to better oversee state implementation of the new law. GAO recommends that Education evaluate its Flexibility initiative oversight process to identify lessons learned and incorporate any applicable lessons into its plans for overseeing state implementation of the new law. Education generally agreed and outlined steps to address the recommendation.
Treasury’s Office of Homeownership Preservation, within the Office of Financial Stability (OFS), is responsible for overseeing the TARP-funded programs that are intended to help prevent avoidable foreclosures and preserve homeownership. MHA is the primary TARP initiative for addressing these issues. Treasury allocated $29.9 billion in TARP funds to MHA, which consists of several programs designed to help struggling homeowners prevent avoidable foreclosures.

HAMP first-lien modifications. These loan modifications are available to qualified borrowers who took out loans on or before January 1, 2009. Only single-family properties (one to four units) with mortgages no greater than $729,750 for a one-unit property are eligible. HAMP uses a standardized net present value (NPV) model to compare expected cash flows from a modified loan to the same loan with no modification, using certain assumptions. Treasury also shares some of the costs of modifying mortgages with mortgage holders/investors and provides incentives of up to $1,600 to servicers for completing modifications. The Home Price Decline Protection Incentive provides investors with additional incentives to modify loans on properties located in areas where home prices have recently declined and where investors are concerned that price declines may persist. The original HAMP first-lien modification structure, or HAMP Tier 1, is generally available to qualified borrowers who occupy their properties as their primary residence and whose first-lien mortgage payment is more than 31 percent of their monthly gross income, calculated using the front-end debt-to-income (DTI) ratio. In June 2012, Treasury made a second type of first-lien loan modification available under HAMP. HAMP Tier 2 is available for either owner-occupied properties or rental properties, and borrowers’ monthly mortgage payments prior to modification may be less than 31 percent DTI. Mortgages secured by owner-occupied properties must be in imminent default or be delinquent by two or more payments to be considered for either HAMP Tier 1 or HAMP Tier 2. For mortgages secured by rental properties, only those that are two or more payments delinquent are eligible for HAMP Tier 2. Under both HAMP Tier 1 and Tier 2, borrowers must demonstrate their ability to pay the modified amount by successfully completing a trial period of at least 3 months (or longer if necessary) before a loan is permanently modified and any government payments are made.

The Second Lien Modification Program (2MP). Under 2MP, when a borrower’s first lien is modified under HAMP and the servicer of the second lien is a 2MP participant, that servicer must offer a modification and/or full or partial extinguishment of the second lien. A servicer need not service the related first lien in order to participate in 2MP. Treasury provides incentive payments to second-lien mortgage holders in the form of a percentage of each dollar of principal reduction on the second lien. Treasury has doubled the incentive payments offered to second-lien mortgage holders for 2MP permanent modifications that include principal reduction and have an effective date on or after June 1, 2012.

Principal Reduction Alternative (PRA). In October 2010, PRA took effect as a component of HAMP to give servicers more flexibility in offering relief to borrowers whose homes were worth significantly less than their mortgage balance.
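To make the Tier 1 income test above concrete, the following is a minimal sketch of a front-end DTI calculation. The payment and income figures are hypothetical, and the sketch is not Treasury’s NPV model or full eligibility logic.

```python
def front_end_dti(monthly_payment: float, gross_monthly_income: float) -> float:
    """Front-end DTI: first-lien mortgage payment as a share of gross monthly income."""
    return monthly_payment / gross_monthly_income

# Hypothetical borrower: $1,450 monthly payment on $4,200 gross monthly income.
dti = front_end_dti(1_450.00, 4_200.00)
print(f"Front-end DTI: {dti:.1%}")             # about 34.5%
print("Exceeds 31 percent threshold:", dti > 0.31)
```

Under Tier 1, this hypothetical borrower’s 34.5 percent DTI would satisfy the greater-than-31-percent condition; as described below, Tier 2 relaxed that constraint.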
Under PRA, Treasury provides mortgage holders/investors with incentive payments in the form of a percentage of each dollar of principal reduction. Treasury has tripled the PRA incentive amounts offered to mortgage holders/investors for permanent modifications with trial periods effective on or after March 1, 2012. Servicers of nonenterprise loans must evaluate the benefit of principal reduction for mortgages with a loan-to-value (LTV) ratio greater than 115 percent when evaluating a homeowner for a HAMP first-lien modification. Servicers must adopt and follow PRA policies that treat all similarly situated loans in a consistent manner, but they are not required to offer principal reductions, even when NPV calculations show that the expected value of the loan’s cash flows would be higher with a principal reduction than without. When servicers include principal reductions in modifications under PRA, the principal reduction amount is initially treated as non-interest-bearing principal forbearance. If the borrower is in good standing on the first, second, and third anniversaries of the effective date of the modification’s trial period, one-third of the principal reduction amount is forgiven on each anniversary.

Home Affordable Foreclosure Alternatives (HAFA) Program. Under this program, servicers offer foreclosure alternatives (short sales and deeds-in-lieu) to borrowers who meet the eligibility requirements for HAMP and cannot be approved for a HAMP trial modification, do not successfully complete a HAMP trial modification, default on a modification (miss two or more consecutive payments), or request a short sale or deed-in-lieu. Treasury provides incentives to investors, servicers, and borrowers for completing these foreclosure alternatives. Under a deed-in-lieu of foreclosure, the homeowner voluntarily conveys all ownership interest in the home to the lender as an alternative to foreclosure proceedings. In a short sale, a homeowner sells a house rather than going into foreclosure. Proceeds from short sales are generally less than the mortgage amount, so the homeowner must have the lender’s permission for the sale. Under a HAFA short sale, a lender must forgive the shortfall between the loan balance and net sales proceeds and release the lien on the subject property. Under HAFA, a deed-in-lieu must satisfy the borrower’s entire mortgage obligation in addition to releasing the lien on the subject property.

Home Affordable Unemployment Program (UP). This program provides assistance to borrowers experiencing financial hardship due to unemployment. Borrowers can receive a 12-month forbearance period during which monthly mortgage payments are reduced or suspended. Servicers can extend the forbearance period at their discretion if the borrower is still unemployed. Following reemployment or expiration of the forbearance period, borrowers should be considered for a HAMP loan modification or a foreclosure alternative, such as the HAFA program. No TARP funds are provided to servicers under this program.

In 2009, Treasury entered into agreements with Fannie Mae and Freddie Mac to act as its financial agents for MHA. Fannie Mae serves as the MHA program administrator and is responsible for developing and administering program operations, including registering, executing participation agreements with, and collecting data from servicers and providing ongoing servicer training and support.
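Returning to PRA, the forbearance-to-forgiveness schedule described above lends itself to a short worked example. The loan balance, property value, and reduction amount below are hypothetical; the sketch assumes only the one-third-per-anniversary rule and the 115 percent LTV evaluation trigger stated in the text.

```python
def pra_forgiveness_schedule(principal_reduction: float) -> list[float]:
    """One-third of the PRA principal reduction amount is forgiven on each of the
    first three anniversaries of the trial period's effective date, provided the
    borrower remains in good standing; until then it is held as
    non-interest-bearing principal forbearance."""
    return [round(principal_reduction / 3, 2)] * 3

# Hypothetical loan: $230,000 unpaid balance on a property valued at $185,000.
balance, value = 230_000.0, 185_000.0
ltv = balance / value
print(f"LTV: {ltv:.0%}")                        # 124% -- above the 115% evaluation trigger
if ltv > 1.15:
    print(pra_forgiveness_schedule(30_000.0))   # [10000.0, 10000.0, 10000.0]
```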
Freddie Mac serves as Treasury’s compliance agent and has designated an independent division, Making Home Affordable Compliance (MHA-C), which is responsible for assessing servicers’ compliance with program guidelines, including conducting onsite and remote servicer loan file reviews and audits.

The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (collectively, fair lending laws) prohibit discrimination in making credit decisions. Specifically, ECOA prohibits creditors from discriminating against credit applicants on the basis of race, color, religion, national origin, sex, marital status, or age, or because an applicant receives income from a public assistance program or has in good faith exercised any right under the Consumer Credit Protection Act. The Fair Housing Act prohibits discrimination in connection with real estate-related transactions by direct providers of housing, as well as other entities whose discriminatory practices, among other things, make housing unavailable to persons because of race or color, religion, sex, national origin, familial status, or disability. Under one or both of the fair lending laws, a lender may not, because of a prohibited basis:

- fail to provide information or services, or provide different information or services, regarding any aspect of the lending process, including credit availability, application procedures, or lending standards;
- discourage or selectively encourage applicants with respect to inquiries about or applications for credit;
- refuse to extend credit or use different standards in determining whether to extend credit;
- vary the terms of credit offered, including the amount, interest rate, duration, or type of loan;
- use different standards to evaluate collateral;
- treat a borrower differently in servicing a loan or invoking default remedies;
- use different standards for pooling or packaging a loan in the secondary market or for purchasing loans;
- use different standards in collecting indebtedness; or
- use different standards in modifying existing loans.

Responsibility for federal oversight and enforcement of the fair lending laws is shared among eight agencies: the Department of Housing and Urban Development (HUD), the Department of Justice (DOJ), the Federal Trade Commission (FTC), the Bureau of Consumer Financial Protection (CFPB), and the four prudential regulators, which supervise depository institutions. The four prudential regulators are the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), and the National Credit Union Administration. Under the Fair Housing Act, HUD investigates all complaints alleging discrimination and may initiate investigations and file administrative complaints against any entity involved in residential real estate-related transactions—including independent mortgage lenders or any other lender, such as depository institutions—that HUD believes may have violated the act. DOJ, which has enforcement authority for both ECOA and the Fair Housing Act, may initiate investigations of any creditor—whether a depository or nondepository lender—under its own independent authority or based on referrals from other agencies. CFPB has supervisory and primary enforcement authority under ECOA over mortgage servicers, all insured depository institutions with assets greater than $10 billion and their affiliates, and certain nondepository lenders (including independent mortgage originators).
The four prudential regulators generally have ECOA oversight responsibilities for insured depository institutions with assets of $10 billion or less, as well as certain subsidiaries and affiliates of those institutions. Each of the prudential regulators has authority to examine the institutions it supervises for compliance with the Fair Housing Act. The prudential regulators conduct examinations of institutions they oversee to assess their compliance with fair lending laws and regulations. Under ECOA, prudential regulators and CFPB are required to refer lenders to DOJ if there is reason to believe that a lender has engaged in a pattern or practice of discouraging or denying applications for credit in violation of ECOA. A prudential regulator that identifies a possible ECOA violation, that may also be a Fair Housing Act violation, must notify HUD about the potential violation if the regulator does not refer the matter to DOJ. Under the Fair Housing Act, HUD is required to transmit information to DOJ regarding any complaint in which there is reason to believe that a pattern or practice of violations has occurred or that a group of persons has been denied rights under the Fair Housing Act and that the matter raises an issue of general public importance. Title VI of the Civil Rights Act of 1964 provides that no person shall be subjected to discrimination on the basis of race, color, or national origin under any program or activity that receives federal financial assistance. In certain circumstances, failure to ensure that LEP persons can effectively participate in and benefit from federally assisted programs and activities may violate the prohibition under Title VI against national origin discrimination. Executive Order 13166, issued in 2000, addresses the application of Title VI’s prohibition on national origin discrimination in connection with federally conducted and assisted programs and activities. The Executive Order requires that federal agencies examine the services they provide and develop and implement systems by which individuals with limited proficiency in English can access agency programs and services. While the Executive Order does not prescribe specific approaches to language access services, it does require federal agencies to prepare plans (referred to as LEP plans) outlining the steps they will take to ensure that eligible LEP persons can meaningfully access their programs and activities. With respect to recipients of federal financial assistance, DOJ issued guidance which states that recipients should provide LEP individuals with “meaningful access” to their programs, services, and activities. Rather than express uniform rules of compliance, the guidance suggests that agencies assess whether recipients have provided meaningful access through consideration of factors such as the number or proportion of LEP persons eligible to be served or likely to be encountered by the program or recipient; the frequency with which LEP persons come in contact with the program; the nature and importance to people’s lives of the program, activity, or service provided by the recipient; and the resources available to the recipient and the costs of language access. The intent of DOJ’s LEP guidance is to ensure meaningful access by LEP persons to critical programs, services, and activities. 
HAMP participation levels—the number of new permanent modifications added on a monthly basis—have made it uncertain whether Treasury will disburse the nearly $30 billion it has obligated to help borrowers avoid foreclosure. Treasury has taken several steps to increase participation, such as extending the program deadline, expanding program eligibility criteria through HAMP Tier 2, and providing funding to counseling agencies to assist homeowners with completion and submission of application packages (intake project). Since the implementation of HAMP Tier 2 in June 2012, the number of new HAMP modifications started each month has been relatively stable through November 2013. Treasury has recently begun to assess the performance of counseling agencies participating in the intake project, which has been extended to provide funding for packages submitted through September 2014. Treasury has reported that about 1.3 million borrowers have received permanent loan modifications under HAMP as of November 30, 2013. However, as shown in figure 1, participation in HAMP, as measured by trial and permanent modifications started each month, peaked in early 2010, generally declined in 2011, and has remained relatively steady through November 2013. Treasury made several changes to HAMP to address barriers to borrower participation, such as extending the application deadline for new HAMP modifications to December 2015 and expanding eligibility criteria for program participation. In particular, Treasury expanded the pool of homeowners potentially eligible to be assisted through the launch of HAMP Tier 2 in June 2012. HAMP Tier 2 expanded eligibility to various borrowers previously ineligible for HAMP, including borrowers with mortgages secured by “rental property” and borrowers with a wider range of debt-to-income ratios. HAMP Tier 2 appears to have helped stem the decline in the number of new HAMP modifications added on a monthly basis. More than one-fourth of the permanent modifications started in November 2013 were Tier 2 modifications (3,460 out of 12,996 modifications). Through November 2013, a cumulative total of 29,134 borrowers had entered into a HAMP Tier 2 permanent modification, representing about 11 percent of all permanent modifications started since the implementation of Tier 2 in June 2012. Tier 2 trial modifications represented about 18 percent of all trial modifications started since June 2012. When HAMP was first announced in February 2009, Treasury had developed an internal projection that 3 million to 4 million borrowers who were at risk of default and foreclosure could be offered a loan modification under HAMP. However, we subsequently reported that because of the unsettled dynamics of the mortgage market and overall economic conditions, actual outcomes may well be different from the projection. Further, Treasury stated to us that the number of potentially eligible borrowers has shrunk steadily since the beginning of the program, as has the number of delinquent borrowers across the mortgage industry generally. Extending the deadline for HAMP applications and expanding program eligibility may provide more borrowers the opportunity to participate in the programs.
However, because the number of borrowers who have received permanent modifications as of November 30, 2013 (1.3 million) is well below Treasury’s initial estimate of 3 million to 4 million and the pool of estimated HAMP-eligible borrowers is declining, it is unclear whether Treasury will disburse all the funds it has obligated to MHA. As of November 30, 2013, $7.0 billion (23 percent) of the $29.9 billion set aside for MHA had been disbursed. According to Treasury, if all active modifications made as of November 30, 2013, in association with MHA were to remain current and receive incentives for the full 5 years, $13.6 billion in incentives will ultimately be disbursed. However, this estimate does not take into account modifications that borrowers enter into after November 2013 through the program’s deadline of December 31, 2015, nor does it consider the impact of redefaults on projected outlays. The Congressional Budget Office (CBO) has estimated that Treasury will ultimately disburse much less than the $29.9 billion currently obligated for MHA. In its May 2013 TARP update report, CBO estimated that only $16 billion (about 53 percent) of the funds for all of the TARP-funded housing programs (MHA, HHF, and FHA Short Refinance Program) would likely be disbursed over those programs’ lifetimes. CBO staff told us that about $11 billion of their estimate was attributable to HAMP. CBO’s estimate assumed that participation rates would continue at the current pace and that redefault rates on modifications would remain consistent regardless of the year in which the modification was started. However, CBO’s May 2013 estimate did not consider the impact of the 2-year extension of MHA through 2015. Treasury officials told us that because of the uncertainty in uptake due to the constantly changing economic environment, potential program changes, and in order to be conservative in their forecasts, they continue to assume that the entire $29.87 billion currently allocated for MHA will be used. In May 2013, Treasury launched its MHA Outreach and Borrower Intake Project in “an effort to ensure that every potential borrower has a chance to be considered for foreclosure prevention assistance under MHA.” Treasury entered into an agreement with NeighborWorks to launch a nationwide effort with housing counselors to increase the number of homeowners who successfully request assistance under MHA. The project’s goal is to make more homeowners aware of the full range of options under MHA and to help eligible homeowners successfully complete an MHA assistance application for servicers to consider. Originally the project was scheduled to end in December 2013, but Treasury extended the project through September 2014. As a result, it is too early to determine the project’s impact on HAMP participation. The project pays housing counseling agencies to conduct borrower outreach, assess borrowers for eligibility, help eligible homeowners prepare complete application packages, and deliver those packages to MHA servicers. The applications are to be submitted through the Hope LoanPort, an Internet-based document delivery portal that allows servicers to be notified when an application arrives. The Hope LoanPort uses an intake code to indicate whether the counseling agency is eligible for funding provided by Treasury under the project.
Participating housing counseling agencies receive a document preparation and submission fee of $450 for each completed initial application package submitted to and accepted by an MHA servicer, even if the borrower does not receive a modification. Additionally, participating agencies receive funding to cover outreach and administrative costs. Initially, Treasury allocated $18.3 million in TARP funds for the MHA Outreach and Borrower Intake Project. Of this allocation, $12.6 million was to cover the costs of the document preparation and submission fee for 20,000 applications, outreach and certain administrative costs incurred by counseling agencies, and supplemental outreach funds to target specific populations that require specialized services. Treasury allocated the remaining $5.7 million to NeighborWorks for outreach and administrative costs associated with the project. However, according to NeighborWorks, only two-thirds of the housing counseling agencies eligible to participate in the project have decided to participate and received an application package allocation, resulting in a total of 92 participating agencies with a production goal of 15,318 application packages to be submitted on behalf of borrowers. As a result, nearly 5,000 packages and $2.9 million remain unallocated to counseling agencies. As shown in table 1, NeighborWorks ultimately allocated about $9.2 million in funding to the 92 participating agencies to cover the cost of document preparation fees, outreach, and administration. The MHA Outreach and Borrower Intake Project became effective in May 2013. As of December 31, 2013, counseling agencies had submitted 2,253 initial packages that had been accepted as complete by servicers under the program, with another 878 initial packages in the process of being reviewed by the servicers. Document preparation fees associated with these packages totaled about $1.0 million. As of December 2013, NeighborWorks reported to us that it had disbursed over $1.9 million to housing counseling agencies for outreach and had expended about $779,121 in administrative costs associated with the project. To assist agencies in meeting stated production goals, NeighborWorks generates a semimonthly Production Dashboard report for each housing counseling agency that is shared with the respective agency. The Production Dashboard summarizes historical information, such as how many initial packages have been accepted by servicers as complete and the percentage of the agency’s cumulative goal that has been reached. The Production Dashboard also includes intermediate goals and projections, such as how many initial packages the agency must submit each month to reach its cumulative goal and how many initial packages are projected to be delivered by the end of the performance period (based on the agency’s average submission rate). Treasury officials stated that they periodically review the Production Dashboard for individual agencies, as needed. An agency that does not meet its production goals would receive less compensation because document preparation fees are only paid for complete initial packages accepted by servicers. In addition, NeighborWorks may reallocate funds from an underperforming agency to another agency if it reaches its allocation goal.
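The Production Dashboard projections described above amount to straightforward pace arithmetic. The following sketch illustrates the two calculations; the dashboard’s actual internals are not public, and the agency figures are hypothetical.

```python
def monthly_pace_needed(goal: int, accepted: int, months_left: int) -> float:
    """Initial packages an agency must have accepted per remaining month to hit its goal."""
    return max(goal - accepted, 0) / months_left

def projected_delivery(accepted: int, avg_monthly_rate: float, months_left: int) -> float:
    """Packages projected by the end of the performance period at the current pace."""
    return accepted + avg_monthly_rate * months_left

# Hypothetical agency: goal of 200 packages, 45 accepted to date, 9 months remaining,
# averaging 6 accepted packages per month so far.
print(f"Needed pace: {monthly_pace_needed(200, 45, 9):.1f} per month")    # 17.2
print(f"Projected total: {projected_delivery(45, 6.0, 9):.0f} packages")  # 99 -> under goal
```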
Treasury officials noted, however, that the funding for 20,000 applications had not been fully allocated and, thus, NeighborWorks would first allocate unallocated funds to any agency needing a higher allocation before reducing the allocation of an underperforming agency. In addition to the Production Dashboard report, NeighborWorks provides Treasury with quarterly reports describing what housing counseling agencies have characterized as successes and challenges to project implementation. For example, in September 2013, NeighborWorks reported that 55 counseling agencies identified internal programmatic changes such as streamlined processes, specialized staff, and direct engagement with borrowers as factors associated with success. Counseling agencies also reported challenges with servicers that did not subscribe to the Hope LoanPort, unresponsive servicers, and borrowers who did not engage with counselors. Fully understanding and analyzing the nature of these successes and challenges could be useful to Treasury in working with NeighborWorks to improve the performance of the project. For example, Treasury stated to us that the majority of large MHA servicers subscribe to the Hope LoanPort, representing over 80 percent of HAMP activity. Treasury stated to us that servicers may have chosen not to subscribe to the Hope LoanPort due to the related subscription costs, and that the servicers that did not subscribe were generally either smaller servicers or those with their own document collection systems. Additionally, Treasury noted that its compliance agent has begun assessing servicers’ processes associated with the MHA Outreach and Intake Project and has noted instances where certain servicers could enhance their design and execution of controls, but the compliance agent’s loan-level testing indicated that in most instances the loans were processed accurately and in a timely manner. By extending the project from December 2013 through September 2014, Treasury has made it more likely that it will reach its goal of 20,000 HAMP application packages completed through the project. However, it is not clear whether the project is in fact increasing access to the program, given the challenge of determining whether a borrower would have applied successfully in the absence of the project.

Treasury requires MHA servicers to develop an internal control program to monitor compliance with fair lending laws. However, Treasury has not evaluated the extent to which servicers have effective internal control programs for assessing compliance with fair lending laws. Additionally, Treasury requires servicers to collect and report data on the race, ethnicity, and gender of MHA applicants, but has not analyzed the data for potential differences in outcomes for groups protected under the laws. Our analysis of HAMP loan-level data, which focused on four large MHA servicers, identified some statistically significant differences within these servicers’ portfolios for certain protected groups in denials and cancellations of trial modifications and in the potential for redefault of permanent modifications, which might indicate a need for further review. The MHA Servicer Participation Agreement and MHA Handbook require that servicers have an internal control program to monitor compliance with relevant consumer protection laws, including ECOA and the Fair Housing Act, and that the servicers review the effectiveness of their internal control program quarterly.
The internal control program must document the control objectives for MHA activities, the associated control techniques, and mechanisms for testing and validating the controls. Servicers are also required to provide Treasury’s compliance agent with access to all internal control reviews related to MHA programs performed by the servicer and its independent auditing firm and to provide a copy of the reviews to the MHA program administrator. Although Treasury requires MHA servicers to certify that they have developed and implemented an internal control program to monitor compliance with applicable consumer protection and fair lending laws, Treasury has not monitored servicers to determine whether they have developed such internal control programs. Specifically, Treasury officials told us that the agency has not required its compliance agent to obtain information from servicers on such programs. The five MHA servicers we spoke with told us that they had not shared with Treasury details on their internal control programs for monitoring compliance with fair lending laws. However, four of the servicers said that they regularly shared the details of these programs, as well as the results of fair lending analyses, with their federal financial regulators. Treasury officials explained that Treasury does not examine servicer compliance with fair lending laws because other federal agencies—CFPB, DOJ, FTC, HUD, and the banking regulators—have the sole responsibility for enforcement and supervision of federal fair lending laws. Therefore, only those agencies, and not Treasury, are charged with the responsibility for determining whether a servicer (subject to the jurisdiction of the appropriate agency) complies with the federal fair lending laws. According to representatives of the prudential regulators, their fair lending reviews have a broader overall focus, which includes examining the servicers’ overall servicing and loss mitigation activities. They added that, while the reviews may not specifically focus on MHA activities, HAMP modifications may be included in the loan portfolios of the MHA servicers examined. Officials from two prudential regulators said that their examinations of servicing portfolios had resulted in supervisory guidance to a few of the larger MHA servicers related to (1) potential disparities between certain fair lending protected classes and their comparison populations, (2) communication issues with non-English-speaking borrowers, and (3) handling of loss mitigation and loan modification complaints. Additionally, one regulator, on behalf of the Financial Fraud Enforcement Task Force’s (FFETF) Non-Discrimination Working Group, conducted exploratory analysis to characterize outcomes of the HAMP program and identify fair lending risks. According to officials from this regulator, the aggregate results of the exploratory analysis were shared with Treasury and other members of the Non-Discrimination Working Group in January 2012, and no fair lending issues of note were identified. Additionally, officials said that this regulator also shared the supervisory guidance discussed above and summaries of its fair lending reviews, which included statistical analysis of MHA servicers under its jurisdiction, with the working group. Officials from the prudential regulators noted that they consider complaints from consumers alleging discriminatory practices in their examinations of regulated banking institutions.
According to the prudential regulators, results of their fair lending examinations are considered confidential supervisory information and are sensitive and privileged. The regulators explained that because of the nature of the information, they would not have shared the details of examination results with Treasury. Further, these regulators told us that they had not identified fair lending violations related to the MHA program specifically. Treasury officials told us that while they have not specifically examined servicers’ controls for ensuring compliance with fair lending laws, the compliance agent did examine servicers’ internal controls related to other HAMP requirements, such as soliciting borrowers who are 60 days delinquent and performing “second look” loan reviews, which focused on determining whether HAMP denials were appropriate. Additionally, Treasury officials noted that the processes servicers use to solicit borrowers and determine the eligibility and terms of a modification were highly structured due to MHA requirements. These processes limit servicer discretion with respect to implementing the MHA requirements, and as a result, outcomes in HAMP modifications are less likely to result in fair lending compliance issues, according to Treasury officials. Despite the structured nature of HAMP, we have previously found instances where servicers varied in how they applied HAMP guidelines. For example, in 2010 we reported that servicers have inconsistent practices for evaluating borrowers for imminent default because Treasury has not provided specific guidance on how to evaluate nonenterprise borrowers for imminent default. Additionally, Treasury does not require servicers to apply principal reduction in connection with modifications; instead, servicers are required to establish written policies detailing when principal reduction will be offered. While these policies must treat similarly situated borrowers in a consistent manner, there may be variations across servicers in the use of principal reduction, and in some cases servicers may reasonably refuse to reduce principal. Also, servicers and their employees may make errors in applying HAMP policies to modifications. For example, in 2010 we reported that 5 of the 10 servicers we contacted reported at least a 20 percent error rate for income calculations. We noted that without accurate income calculations, which are key in determining borrowers’ DTI, similarly situated borrowers applying for HAMP may be inequitably evaluated for the program and may be inappropriately deemed eligible or ineligible for HAMP. Treasury also assesses servicers on their income calculations and tracks, on a quarterly basis, the percentage of loans for which MHA-C’s income calculation differs from the servicer’s. In its July 2013 assessment results, Treasury noted an average income error rate of less than 2 percent, down from an average of about 7.5 percent in July 2011. Although the prudential regulators have not identified any fair lending violations by MHA servicers, they did share some fair lending-related concerns about some large MHA servicers with Treasury. Furthermore, the opportunity for variations and errors within and across servicers can affect borrowers. By evaluating the extent to which servicers have developed and maintained internal controls to monitor compliance with fair lending laws, Treasury could gain additional assurance that servicers are implementing the MHA program in compliance with fair lending laws.
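To illustrate how the income calculation errors noted above can change an eligibility determination, the brief sketch below shows a hypothetical income overstatement moving a borrower’s front-end DTI across the 31 percent threshold. The figures are illustrative only.

```python
# How an income calculation error can flip a Tier 1 eligibility screen.
# Hypothetical figures; HAMP Tier 1 generally required a front-end DTI above 31 percent.
payment = 1_300.00
for label, income in [("income calculated correctly", 4_000.00),
                      ("income overstated by 10 percent", 4_400.00)]:
    dti = payment / income
    print(f"{label}: DTI {dti:.1%}, above 31 percent: {dti > 0.31}")
# correct income  -> 32.5% (meets the test)
# overstated income -> 29.5% (screened out)
```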
Treasury shares HAMP loan-level data, including information on the servicer and the borrowers’ race, ethnicity, and gender, with the federal agencies that have fair lending oversight and enforcement authority. Treasury also makes a more limited public file available to the general public that excludes, among other things, information identifying the servicer and personal identifying information about the borrower (name, address, etc.). On first releasing the public file containing loan-level data in January 2011, Treasury stated that it intended to engage one or more independent, third-party research firms to conduct a more detailed analysis of fair lending in MHA and that it would make the results of this analysis available to the public. In March 2013, Treasury entered into an interagency agreement with HUD to engage a third-party contractor to conduct a fair lending analysis of HAMP loan modifications. As of September 30, 2013, HUD had secured a contractor to conduct the analysis. Our analysis of Treasury’s HAMP data through April 17, 2013, suggested that there may be some issues that warrant a closer look at servicers’ fair lending internal control systems by Treasury and the pertinent fair lending regulatory agency. We examined the rate of denial or cancellation of HAMP modifications and the rate of redefault of permanent HAMP modifications experienced by selected population groups and compared them to the same rates for their comparison populations at various stages of the HAMP process. We primarily focused on the outcomes for certain protected groups under federal fair lending laws plus low-income groups and groups in neighborhoods that consisted primarily of minority populations (substantially minority); we refer to these groups collectively as “selected populations.” We used a multivariate econometric analysis to control for several observable characteristics of the borrower, servicer, loan, and property, allowing us to appropriately estimate the outcomes these populations experienced. In focusing our analysis on four large MHA servicers, we found some statistically significant differences in the outcomes experienced by our selected populations compared to their comparison populations. For example, we found that for all four servicers, non-Hispanic African-Americans had a statistically significantly higher trial modification denial rate than non-Hispanic whites due to DTIs being less than 31 percent. When examining denials of trial modifications because borrowers had not provided complete information to the servicer, denial rates were significantly higher for Hispanics than for the comparison population of non-Hispanic whites for three of the four large servicers we analyzed. We also found that for all of the servicers we analyzed, non-Hispanic African-Americans had a statistically significantly higher rate of redefault than non-Hispanic whites, regardless of whether or not the servicer applied capitalization, principal forbearance, or principal forgiveness to the loan modification, holding other key factors constant. For additional findings from our analysis, see appendix II. We are unable to determine from the available HAMP data whether the statistically significant differences between the selected populations and their comparison populations identified in our analysis were the result of servicer discretion, servicer errors in the application or interpretation of HAMP guidelines or servicing protocols, differences among servicers’ policies, or the unintended consequences of HAMP guidelines or program design.
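As a rough illustration of the multivariate approach described above, the sketch below fits a logistic regression of denial outcomes on group membership while controlling for a few loan and servicer characteristics. It is not GAO’s actual model: the data are synthetic, and the variable names (selected_pop, dti, ltv, servicer) are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the actual analysis used HAMP loan-level records.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "denied": rng.integers(0, 2, n),              # 1 = trial modification denied
    "selected_pop": rng.integers(0, 2, n),        # 1 = member of a selected population
    "dti": rng.uniform(0.20, 0.60, n),            # front-end debt-to-income ratio
    "ltv": rng.uniform(0.80, 1.50, n),            # loan-to-value ratio
    "servicer": rng.choice(list("ABCD"), n),      # four large MHA servicers
})

# The coefficient on selected_pop estimates the difference in denial odds,
# holding the listed borrower, loan, and servicer characteristics constant.
result = smf.logit("denied ~ selected_pop + dti + ltv + C(servicer)", data=df).fit(disp=False)
print(result.summary())
```

As the regulators quoted below note, a statistically significant coefficient on a group indicator would be a reason for further investigation, not by itself a finding of a fair lending violation.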
Additional analysis is needed to determine the reasons for the differences and the extent to which servicer implementation of HAMP guidelines, as well as other potential causes, could explain the differences in outcomes. As noted in appendix II, there are some limitations to our analysis. For instance, we could not control for all potential factors that affect these outcomes due to the lack of certain data, such as the wealth of the borrowers and their knowledge of the loan modification process. Also, our analysis cannot account for some important factors, such as whether equivalent borrowers in these populations apply to HAMP at different rates or are more or less likely to receive assistance outside of HAMP. Further, race and ethnicity data were not available for 54 percent of borrowers in the early stage of the HAMP process and 43 percent of borrowers in the later stage of HAMP. Although we took appropriate steps to minimize the impact of missing data, our results should be interpreted with caution. Despite the limitations noted above, statistical differences in outcomes among population groups might suggest potential fair lending concerns that merit further examination. Officials from fair lending regulatory agencies told us that results of econometric analyses of fair lending populations were one of multiple sources of information they review when examining fair lending compliance of banking institutions and servicers. They noted that the existence of a statistical disparity alone would not necessarily result in the finding of a fair lending violation but could be a reason to further investigate an institution. Such analyses could be useful to Treasury as the agency considers whether servicers participating in the HAMP program have sufficient internal controls to assess compliance with fair lending laws. Treasury has taken various actions to increase access to the program for borrowers whose primary language is not English, but it has only recently begun to systematically assess access to the MHA program for these borrowers. Federal agencies, including Treasury, are required by Executive Order 13166, issued in August 2000, to "examine the services it provides and develop and implement a system by which LEP persons can meaningfully access those services." Under MHA, borrowers apply for and obtain mortgage modifications directly from their mortgage servicers. Although Treasury has not specified for servicers how they should meet the needs of LEP persons or assessed their efforts to do so, it has taken steps to provide information and support to LEP borrowers in connection with MHA through various sources and methods. For example, Treasury publishes a website with information about the MHA program (www.makinghomeaffordable.gov), which has a mirror Spanish website and critical content pages in Chinese, Vietnamese, Russian, Tagalog, and Korean. Treasury also has published advertisements and public service announcements in Spanish and conducted outreach to Spanish-speaking media as well. Additionally, the MHA website, along with Treasury's outreach materials, directs interested homeowners to the Homeowners HOPE™ Hotline, which provides over-the-telephone support to LEP borrowers related to MHA programs. As part of the MHA escalations process, Treasury also provides a toll-free call center—MHA Help—where, according to Treasury, LEP borrowers can receive more specialized assistance over the phone.
Treasury has also translated the MHA application form and certain outreach materials into other languages, such as Spanish, Chinese, Korean, Russian, Vietnamese, and Tagalog. However, it does not require that servicers use the translated materials. The executive order also directs federal agencies to "prepare a plan to improve access to its federally conducted programs and activities by eligible LEP persons." The plans are to include the steps the agency will take to ensure that eligible LEP persons can meaningfully access the agency's programs and activities. In February 2011, the Attorney General issued a memorandum to the heads of federal agencies that renewed the federal government's commitment to language access obligations under the executive order and called on agencies to, among other things, review their programs and activities for language accessibility and submit an updated LEP plan within 6 months after the date of the memorandum. Treasury issued its last LEP plan in 2000 and, as such, it did not cover newer programs such as MHA that began in early 2009. As of November 2013, Treasury was working on finalizing an updated agency-wide LEP plan, which would address newer programs and activities, such as the MHA programs. The draft plan indicated that Treasury intended to publish the plan in the Federal Register and on the Treasury website for public comment. The draft plan included information related to Treasury's assessment of the language needs for the MHA programs. In addition, it described Treasury's current and planned steps to assist LEP borrowers in accessing the information and support provided by Treasury in connection with the MHA programs. Additionally, Treasury's Office of Financial Stability (OFS) has developed draft guidelines to assist OFS staff in providing access to LEP persons in connection with the MHA activities described above. (See Department of Justice, Civil Rights Division, Federal Coordination and Compliance Section, Language Access Assessment and Planning Tool for Federally Conducted and Federally Assisted Programs (May 2011).) However, Treasury has not provided guidance to servicers describing what a policy on effective relationship management should include, which Treasury confirmed. Treasury officials also told us that they had not required their compliance agent to review servicers' implementation of the requirement for effective relationship management for LEP borrowers. According to Treasury officials, concerns about LEP borrowers' lack of access to MHA were only recently raised as an issue by consumer advocates, in May 2013. In response to this feedback, Treasury conducted a survey of the LEP-related policies of the 17 largest MHA servicers to better understand how these servicers worked with LEP borrowers. All of the 17 servicers Treasury surveyed reported that they had staff who spoke Spanish, and 15 servicers indicated that they had contracted with a vendor for real-time translation. Representatives of four MHA servicers we contacted confirmed this practice. The remaining servicer told us that it contracts with a vendor for all non-English customer communication related to modifications. Additionally, representatives of three servicers told us that their firms had electronic systems that could note in the borrower's file if the borrower's primary language was Spanish. However, the systems contained no similar notation for other languages. Three servicers we spoke with told us that they also referred borrowers to the Homeowners HOPE™ Hotline to find a housing counseling agency that could assist with languages.
Nonetheless, representatives of some advocacy groups we spoke to raised concerns about the sufficiency of the practices servicers followed in meeting the needs of non-English-speaking borrowers. The advocacy groups represented housing counseling agencies, whose counselors worked one-on-one with potential HAMP borrowers, and legal services attorneys. These groups were concerned that servicers' current practices of using Spanish-speaking staff or contracting with a language interpretation service were insufficient. For example, one advocacy group said that some servicers used Spanish-speaking customer service agents who might be knowledgeable about banking and mortgages generally, but not about servicing, loss mitigation, or HAMP specifically. Similarly, representatives of three advocacy groups noted that staff from a language interpretation service might not be familiar enough with banking terminology or the details of HAMP to provide quality interpretation services. Another group pointed out the importance of translated documents and noted that it would be more beneficial for borrowers to have important documents, such as the trial modification offer letter, translated into their preferred language so that they could refer to them when needed. In fact, in a 2013 national survey conducted by the National Housing Resource Center and a similar survey conducted by a California-based research group, nearly half of the 296 housing counselors who responded said their LEP clients who were seeking mortgage servicing assistance "never" received translated foreclosure-related documents. Additionally, in both surveys, over 60 percent of the housing counselors said that their LEP clients were "never" or only "sometimes" able to speak to their servicer in their native language or through a translator provided by the servicer, while the rest said their clients were "always" or "often" able to do so. Furthermore, in the national survey, nearly half of the survey respondents said their LEP clients "always," "often," or "sometimes" received worse loss mitigation outcomes than their English-proficient clients, while the other half said their clients "never" received worse outcomes. Ultimately, the lack of clear guidance for and expectations of servicers on what constitutes effective relationship management in serving LEP borrowers can undermine servicers' ability to work effectively with such borrowers and result in unequal access to the program for these borrowers. Treasury officials noted that MHA-eligible loans represent a small portion of participating servicers' overall servicing activity, and thus Treasury is cautious about imposing additional requirements on participating servicers that could lead to excessive costs and burdens. They added that participating servicers interact with borrowers from a number of communities that speak a variety of languages and are in a better position to ascertain how best to provide effective relationship management to the LEP borrowers they serve. According to Treasury, servicers have said that mandating the translation or use of certain documents, among other things, would be of little benefit given the overall low demand for such documents in languages other than English, the added legal risks, the potential for inaccurate translation, and the increased costs associated with the translation of documents.
Treasury also noted that it may not be appropriate to require servicers to conduct business in languages other than English, especially when other regulators have not done so. For example, officials noted that CFPB's recent mortgage servicing rules do not require servicers to accept applications in other languages or provide specific translation services. Treasury officials stated that the issues faced by LEP borrowers extend beyond HAMP to the broader areas of loss mitigation and mortgage origination. Accordingly, Treasury believes that it is appropriate for such industry-wide issues to be addressed by those government entities that have broad jurisdiction over the financial institutions operating in these fields. However, the MHA program provides direct outlays of taxpayer dollars to servicers and is intended to provide benefits to eligible borrowers. As such, it is important that Treasury take appropriate steps to help ensure that all eligible borrowers, including those whose primary language is not English, have access to MHA program benefits. Without guidance on effective relationship management for LEP borrowers, the policies that MHA servicers develop may vary, and LEP borrowers may be treated differently across servicers, depending on which company services their loan. Additionally, because Treasury has not provided guidance to servicers describing the essentials of a relationship management policy for LEP borrowers, Treasury is limited in what it can measure when assessing servicers' compliance with Treasury's requirement or the effectiveness of their current practices for interacting with LEP borrowers. Ultimately, the lack of LEP policies and procedures for the MHA programs and of clear expectations for effective relationship management makes it less likely that servicers will effectively meet borrowers' needs for language services and therefore limits these borrowers' opportunity to benefit from MHA. While participation is below initial expectations, over a million borrowers have had their mortgages modified under the program. However, with respect to MHA servicer compliance, Treasury could be taking additional steps to ensure that borrowers are being treated in accordance with fair lending laws. MHA servicers are required to develop an internal control program to monitor compliance with fair lending laws that prohibit discrimination. However, Treasury has not examined servicers' internal control programs or conducted any analysis of borrowers' outcomes in HAMP modifications to identify potential fair lending risks. Our analysis found some statistically significant differences in the outcomes of fair lending populations when compared to others, and while these variations alone do not indicate that borrowers were treated differently, they suggest that further examination may be warranted. Conducting further analyses would permit Treasury to better identify where it might apply examination resources, such as those of its compliance agent, and ascertain whether these differences are due to servicers' discretion in the application of HAMP guidelines or other factors. By requiring its compliance agent to review the fair lending internal controls of loan servicers, or by reviewing the data MHA servicers collect on the race, ethnicity, and gender of borrowers, Treasury could gain additional assurance that servicers are implementing the MHA program in compliance with fair lending laws, as the servicers contracted to do.
Finally, despite an executive order issued in 2000 to improve access to federal programs for people with limited English proficiency and a 2011 memorandum by the Attorney General renewing the federal government's commitment to that executive order, as of November 2013 Treasury had only recently developed a written plan that covers the Making Home Affordable programs. While Treasury does take certain measures to raise awareness of and conduct outreach to LEP borrowers, it does not provide any clarifying guidance to servicers on its requirement that they have a relationship management policy for their LEP borrowers. According to a Treasury survey of MHA servicers and our discussions with five large MHA servicers, these servicers had some processes in place to assist LEP borrowers, such as using an oral translation service. However, housing counselors and housing advocacy groups that work with LEP borrowers have questioned the ability of servicers to assist LEP borrowers. Without additional guidance on providing meaningful language assistance, LEP borrowers may be treated differently across servicers and have unequal access to the MHA program. Moreover, Treasury has not assessed the effectiveness of its own or its servicers' LEP practices. Further, without more specific guidance on what it expects of servicers in ensuring LEP access, Treasury and its compliance agent are limited in their ability to assess servicers' compliance with those requirements. As part of Treasury's efforts to continue improving the transparency and accountability of MHA, we recommend that the Secretary of the Treasury (1) take actions to require that its compliance agent take steps to assess the extent to which servicers have established internal control programs that effectively monitor compliance with fair lending laws that apply to MHA programs; (2) issue clarifying guidance to servicers on providing effective relationship management to limited English proficiency borrowers; and (3) ensure that the compliance agent assesses servicers' compliance with LEP relationship management guidance, once established. We provided a draft of this report to CFPB, DOJ, FDIC, the Federal Reserve, HUD, OCC, and Treasury for review and comment. We received a written comment letter from Treasury, which is presented in appendix III. We also received technical comments from CFPB, DOJ, the Federal Reserve, HUD, and Treasury that are incorporated as appropriate in the report. FDIC and OCC did not provide any comments on the draft report. In its comment letter, Treasury noted that it was still considering our findings and recommendations and agreed that it should continue to strengthen its program in order to help as many homeowners as possible avoid foreclosure. Treasury also noted that since MHA's launch in 2009, more than 1.9 million homeowner assistance actions had taken place under the program and that it continues to take action to maximize participation rates. In response to our recommendation that it take action to require that its compliance agent begin assessing the extent to which servicers had established internal control programs that effectively monitor compliance with fair lending laws, Treasury said that it remained committed to working to ensure that homeowners are treated fairly by servicers participating in MHA. Treasury stated that it had a robust compliance program to assess servicers' performance and that it published the results of its assessments to provide greater transparency and hold servicers accountable.
However, as noted earlier, Treasury does not require its compliance agent to assess servicers' internal control programs for monitoring fair lending compliance. Treasury stated that it planned to continue to explore ways to promote fair lending policies, including through coordination with fair lending supervisory and enforcement agencies and improving access to data. We agree that continuing to improve the transparency and accountability of MHA is important. As part of this effort, it will be important that Treasury require its compliance agent to assess the extent to which servicers have established internal control programs that effectively monitor compliance with fair lending laws that apply to MHA programs. Treasury also provided comments related to our recommendations that Treasury issue clarifying guidance to servicers on providing effective relationship management to limited English proficiency borrowers and ensure that its compliance agent assess servicers' compliance with this guidance. Treasury noted that it recognized the challenges homeowners with limited English proficiency faced and that it had made some program materials available in other languages and sponsored call centers that offer translation services. Treasury added that the challenges faced by these homeowners extend beyond MHA to the industry-wide areas of loan servicing and mortgage lending. Treasury stated that it would continue to explore additional ways to assist LEP homeowners and work with federal regulators that have broad jurisdiction over these issues. While these challenges likely extend beyond the MHA program, the MHA program provides direct outlays of billions of taxpayer dollars in incentive payments to participating servicers and is intended to provide benefits to all eligible borrowers needing assistance to avoid foreclosure. Taking appropriate steps to help ensure that LEP borrowers have access to MHA program benefits would place this federal program in the forefront of efforts to reach these borrowers and ensure that taxpayer dollars are put to the most effective use. In its technical comments, Treasury indicated that it disagreed with three statements in the draft report. Specifically, Treasury disagreed with our characterization of participation levels in the HAMP first-lien modification program as declining despite Treasury's efforts to increase participation. We modified the text to clarify that since the implementation of HAMP Tier 2 in June 2012, the number of modifications started each month has been relatively steady through November 2013. Treasury also questioned the accuracy of our statement that it lacked assurance that the MHA program, and servicers' implementation of it, were treating all borrowers fairly and consistently, citing, among other things, the role of the prudential regulators in enforcing fair lending laws and its compliance program for assessing the performance of participating servicers. However, these mechanisms provide only limited assurance since, as noted previously in the report, the prudential regulators do not focus their fair lending reviews on MHA program activity and Treasury's compliance program does not look at the fair lending controls of participating servicers. As a result, we continue to believe that it is important that Treasury require its compliance agent to assess the internal control programs that servicers are required to put into place to monitor compliance with fair lending laws that apply to MHA programs.
Lastly, Treasury noted in its technical comments that it disagreed with the statement that it has only recently begun to systematically assess and take measures to ensure access to the program for borrowers whose primary language is not English. We clarified the text to acknowledge the actions taken to raise awareness of and conduct outreach to LEP borrowers, but noted that Treasury has not provided guidance to servicers on its requirement that they have a relationship management policy for their LEP borrowers or assessed the effectiveness of its own or its servicers' LEP practices. We are sending copies of this report to the appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives in this report were to examine (1) the status of Making Home Affordable (MHA) and steps Treasury is taking to increase participation in the program, (2) Treasury's oversight of the MHA-related fair lending internal controls of participating servicers, and (3) Treasury's and MHA servicers' policies and practices for ensuring that borrowers with limited English proficiency (LEP) have equal access to the program. To examine the status of the MHA programs, we reviewed and analyzed Treasury's Monthly MHA Performance Reports and MHA program and expense information in the quarterly reports to Congress issued by the Special Inspector General for the Troubled Asset Relief Program (SIGTARP). We also reviewed the Congressional Budget Office's (CBO) Report on the Troubled Asset Relief Program (TARP) and spoke to CBO officials about their cost estimates for the MHA program. We also spoke with Treasury officials to obtain their views on future MHA expenditures. To understand steps Treasury has taken to increase program participation, we reviewed Treasury's Supplemental Directive and spoke with Treasury officials about their MHA Outreach and Borrower Intake Project. We also spoke to and reviewed documentation from NeighborWorks America about its involvement in the project. To examine Treasury's oversight of the MHA-related fair lending internal controls of participating servicers, we reviewed MHA program documentation, such as the Servicer Participation Agreement, MHA Handbook, and associated Supplemental Directives, to understand servicers' fair lending obligations. We spoke with officials at Treasury to gather information on their oversight of MHA servicers' practices. We also spoke with other federal agencies with fair lending oversight authority to gather information on the results of their fair lending oversight of MHA servicers. Specifically, we spoke with officials from the Department of Housing and Urban Development (HUD), the Department of Justice (DOJ), the Bureau of Consumer Financial Protection (CFPB), and three depository institution prudential regulators (the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Office of the Comptroller of the Currency (OCC)). Further, we spoke with staff at the five largest MHA servicers, in terms of HAMP trial modifications approved, about their internal control programs and compliance with fair lending laws.
The five servicers we selected—Bank of America, CitiMortgage, JPMorgan Chase Bank, Ocwen Loan Servicing, and Wells Fargo Bank—collectively represented about 77 percent of the total HAMP trial loan modifications approved as of October 2013. In order to determine if any potential disparities exist in the outcomes of borrowers in protected classes and other groups, we compared their outcomes to those experienced by other borrowers for four large servicers. We obtained and analyzed Treasury's HAMP data in its system of record, Investor Reporting/2 (IR/2), through April 17, 2013. For additional information on the data reliability and methodology for this analysis, see appendix II. We determined that the IR/2 data were sufficiently reliable for the purposes of our analysis. To understand how Treasury and MHA servicers ensure access to MHA for LEP borrowers, we examined Treasury's 2000 LEP plan and its updated LEP plan and MHA guidelines, which are still in draft form. We reviewed MHA program documentation to understand servicers' obligations regarding LEP borrowers and spoke with Treasury officials about their review of servicers' LEP policies and practices. We reviewed a recent survey Treasury conducted of 17 servicers to understand how servicers work with LEP borrowers. Further, we spoke with the 5 servicers we contacted about their current LEP policies and practices and Treasury's oversight of servicers' policies. Additionally, we spoke with various mortgage industry participants, such as associations representing housing counselors, including those who directly work with LEP borrowers, and legal services attorneys. We also reviewed a national survey conducted by the National Housing Resource Center and a similar survey conducted by the California Reinvestment Coalition about servicer compliance with the new servicing standards resulting from a settlement involving 5 of the largest mortgage servicers and the federal and most state governments. The national survey collected responses from 212 housing counselors representing 28 states and the District of Columbia, and the California survey received responses from 84 counselors and legal service advocates. We collected information about the survey methodology used and determined it was reliable for the purposes of reporting housing counselors' views on the experiences of individuals they work with. We conducted this performance audit from February 2013 through February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. This appendix provides a summary of our econometric analysis of the Home Affordable Modification Program (HAMP) outcomes for selected population groups—primarily focusing on the outcomes for protected groups under federal fair lending laws plus low-income groups and groups in substantially minority neighborhoods—at key stages of the HAMP process. The modification process under HAMP is highly structured, based on the standard waterfall in the HAMP guidelines, so servicers' compliance with the required terms of the program may not necessarily warrant further investigation of differences in HAMP outcomes across certain groups.
However, servicers have some discretion over certain actions and outcomes in the process, such as calculating borrowers' income, determining the sufficiency of borrowers' documentation and whether a borrower is in imminent danger of default, and developing their own policies to determine whether to offer principal forgiveness as part of the modification. In addition, servicers may incorrectly apply program requirements. Further, even if standards were applied uniformly, it is possible that the application of HAMP and servicer-specific guidelines could be resulting in unintended adverse consequences for some population groups. We are unable to determine from the available HAMP data whether the statistically significant differences between the selected populations and their comparison populations identified in our analysis were the result of servicer discretion, servicer errors in the application or interpretation of HAMP guidelines or servicing protocols, differences among servicers' policies, or the unintended consequences of HAMP guidelines or program design. Also, our analysis cannot account for some important factors, such as whether equivalent borrowers in these populations apply to HAMP at different rates or are more or less likely to receive assistance outside of HAMP. To the extent possible, we have controlled for several characteristics of the borrower and loan, as well as other factors that could confound potential differences in the outcomes experienced by the fair lending and other selected populations and their comparison populations in our analysis. Nonetheless, further investigation would be warranted to identify the source of the statistically significant differences identified by our analysis and what action, if any, would be appropriate to consider. Borrowers applying for loan modifications under HAMP go through a selection process, according to the HAMP guidance. Key parts of the process include several steps at three key stages. Pretrial or application stage: At the application stage, the servicer determines if the borrower is eligible for HAMP, including the requirement that the borrower must be either 60 days or more past due on mortgage payments prior to the modification or in imminent danger of default. The borrower's application may be denied because the application was ineligible or for reasons not related to eligibility, such as an incomplete request or if the modification would require excessive forbearance. Also, the borrower may decide not to accept an approved offer from the servicer. Trial modification stage: Once the borrower accepts the offer, they must make three timely monthly payments on the modified loan or the trial modification may be cancelled for nonpayment. Prior to June 2010, a borrower could begin a trial based on stated information, with data verification as a condition for conversion to a permanent modification. For these borrowers, the trial may also be cancelled because the loan was subsequently determined to be ineligible. Trials may also be cancelled for reasons not related to eligibility, such as an incomplete request or a negative net present value (NPV) result if the loan were to be modified. The modification becomes permanent if the borrower successfully completes the trial modification. Permanent modification stage: A permanent modification is cancelled if a borrower is unable to sustain the modification by redefaulting (i.e., the loan becoming 90 days or more delinquent).
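To make the structure of this selection process concrete, the following is a minimal sketch of the three stages and the ways a loan can exit each of them, as just described. The stage and outcome labels are our own shorthand for illustration, not actual HAMP or IR/2 codes.

```python
# Illustrative sketch of the three-stage HAMP selection process described
# above. Stage and outcome names are shorthand labels, not program codes.

HAMP_STAGES = {
    "application": {
        "advance": "trial",
        "exits": ["denied_eligibility", "denied_other", "offer_not_accepted"],
    },
    "trial": {
        "advance": "permanent",
        "exits": ["cancelled_eligibility", "cancelled_other",
                  "cancelled_payment_default"],  # 30+ days delinquent
    },
    "permanent": {
        "advance": None,  # terminal stage
        "exits": ["redefault"],  # 90+ days delinquent
    },
}

def possible_outcomes(stage: str) -> list:
    """All ways a loan can leave the given stage."""
    info = HAMP_STAGES[stage]
    outcomes = list(info["exits"])
    if info["advance"]:
        outcomes.append("advance_to_" + info["advance"])
    return outcomes

print(possible_outcomes("trial"))
```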
If the borrower is eligible for HAMP, the servicer evaluates the loan using a standardized NPV test, which compares the net present value of cash flows with and without the modification. The HAMP Tier 1 modification must reduce the borrower's first-lien mortgage payment to as close as possible to a 31 percent housing debt-to-income (DTI) ratio using a sequence of steps—the so-called standard modification waterfall (a simplified sketch of this waterfall logic appears at the end of this section). The steps include capitalizing accrued interest, reducing the interest rate on the loan, and extending the term or amortization period of the loan. Principal forbearance could be used as needed, and principal forgiveness could be offered at any stage of the modification process. The main data source for the study is Treasury's HAMP data in its system of record, Investor Reporting/2 (IR/2), made available to government agencies. The data used are restricted to first-lien loans in the 50 states of the United States and in the District of Columbia. We excluded loans owned or guaranteed by the Federal Housing Administration (FHA) or Veterans Affairs (VA). The time period analyzed is for HAMP applications and modifications from April 2009 through April 17, 2013. The HAMP data were supplemented with housing- and mortgage-related data from the 2010 Census from the Census Bureau: these data provided neighborhood-level characteristics such as the poverty rate, household education, mortgages with second liens, and the ratio of rental values to home values (property risk) that could be associated with HAMP outcomes. The HAMP data include a variety of information on individual borrowers and other characteristics about the loan, property, investor, servicer, and loan modification terms, and the current status of the modification. Some of the data are specific to conditions before or after the modification, as well as at the loan origination. The data are generally available for the fair lending and our other selected populations and their comparison populations; however, data on borrower income are not available in the early stage of the HAMP process for borrowers whose applications were denied. In general, borrowers whose HAMP application did not advance to the NPV evaluation stage have more missing data because the information used for the NPV evaluation was not recorded in the database. Servicers are required to report data when borrowers request a modification, during the trial period, and when the trial is converted to a permanent modification, and to report the monthly performance of the permanent modification. The data used for the analysis consist of 4.7 million loans, representing 92 percent of the HAMP applications as of April 17, 2013. Table 2, which is not reproduced here, details the HAMP data used in our analysis by phase of the HAMP process: completed applications at the pretrial or application stage; successful trial modifications within 6 months, and trials with unavailable outcomes, at the trial modification stage; and ongoing or paid-off permanent modifications within 12 months at the permanent modification stage. About 11 percent of the loans not approved/not accepted contain NPV data, and paid-off loans comprised about 1 percent of the loans. For every loan, the data included a descriptor of whether or not the loan completed the respective stage of the HAMP process and, if it had not, the reason for the loan not reaching the next stage.
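The following is the simplified, illustrative sketch of the standard modification waterfall logic referenced earlier in this section. The function names, the annuity payment formula, and all numeric parameters other than the 31 percent DTI target (the 2 percent rate floor, 0.125-point rate steps, 480-month term cap, and forbearance increments) are assumptions for illustration, not the actual HAMP rules or Treasury's implementation.

```python
# Simplified, illustrative sketch of the HAMP Tier 1 "standard modification
# waterfall" described above. Parameter values and function names are
# assumptions for illustration only, not Treasury's actual rules.

def monthly_payment(principal, annual_rate, months):
    """Standard fixed-rate annuity payment."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def waterfall(gross_income, unpaid_balance, arrears, annual_rate, months):
    target = 0.31 * gross_income  # 31 percent housing DTI target

    # Step 1: capitalize accrued interest and arrears into the balance.
    balance = unpaid_balance + arrears

    # Step 2: reduce the rate in 0.125-point steps (floor assumed at 2%).
    while monthly_payment(balance, annual_rate, months) > target and annual_rate > 0.02:
        annual_rate = round(annual_rate - 0.00125, 5)

    # Step 3: extend the term/amortization period (cap assumed 480 months).
    while monthly_payment(balance, annual_rate, months) > target and months < 480:
        months += 12

    # Step 4: forbear principal (non-interest-bearing) as needed.
    forbearance = 0.0
    while monthly_payment(balance - forbearance, annual_rate, months) > target:
        forbearance += 1000.0
    return annual_rate, months, forbearance

print(waterfall(gross_income=4000, unpaid_balance=230000,
                arrears=8000, annual_rate=0.065, months=360))
```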
We separated the denials of applications and cancellations of trial modifications due to reasons we determined to be related to the HAMP eligibility guidelines from the other reasons to provide additional insight into differences in outcomes for the groups analyzed. In the application stage, the top three reasons for denial of applications for modification that were related to eligibility guidelines were (1) borrowers whose current DTI was less than 31 percent, (2) ineligible mortgage, and (3) borrower was found to not be in imminent danger of default. The top three reasons for denial of applications for modification not related to eligibility guidelines were (1) incomplete request, (2) approved offer not accepted by borrower or withdrawn request, and (3) loan modification that would require excessive forbearance. In the trial modification stage, the top three reasons modifications were cancelled due to reasons we determined were related to eligibility guidelines were (1) borrowers whose current DTI was less than 31 percent, (2) ineligible mortgage, and (3) the property was not owner-occupied. The top reasons for trial modification cancellations not related to eligibility or payment default were (1) incomplete request, (2) approved offer not accepted by borrower or withdrawn request, and (3) loans with negative NPV. We also analyzed trial modification cancellations due to payment default (loans that became 30 days or more delinquent). Lastly, in the permanent modification stage, there was only one reason that modifications could be cancelled—redefault (loans that were 90 days or more delinquent). Using information on where a property is located, we also included variables to control for potential differences in state laws, regulations, and programs that could affect the cost of foreclosure and the treatment of delinquent borrowers. We also constructed several mortgage- and housing-related variables for the neighborhoods of the loans in our analysis using data from the 2010 Census. Based on the census tract in which the property is located, we associated various variables with the HAMP loan-level data. The location-specific variables include characteristics such as minority concentration, poverty, age, foreign-born concentration, college education of household, and property risk characteristics. We examined the availability of HAMP data for the selected population groups and their comparison populations for our analysis. The data show that missing data for race/ethnicity declined from 54 percent in the early stage of the HAMP process, when fewer data were reported by servicers, to 43 percent in the later stage. Although the race/ethnicity of some borrowers was unavailable, the data suggest that the properties of borrowers with unavailable race/ethnicity were disproportionately located in areas where racial minorities were predominant, particularly in the early stage of the HAMP process. The proportions of missing data were generally much lower for gender, income, and minority composition of the areas and their comparison populations. While previous studies that used the HAMP data acknowledge the limitations of missing data on the fair lending populations and their comparison populations, none of the studies indicated that the available data are not suitable for fair lending analysis. Nonetheless, we took several steps to reduce the potential bias of missing data for the selected populations on our analysis.
First, we included observations representing the missing data as another category, where possible, particularly since previous studies have indicated that missing race/ethnicity data are not likely to be purely random. Second, as part of our robustness checks, we also conducted the analysis excluding the observations representing the missing data to learn about their potential impact on our results. The results were similar for our key findings. And, third, we restricted the data to the period since December 2009, when servicers were required to report fair lending-related data, and the proportions of missing data for the fair lending populations and their comparison populations decreased significantly. All the variables used are in categorical format (i.e., each variable is divided into sub-groups), except the modification types (capitalization, principal forbearance, and principal forgiveness), which are measured by the percent change of the loan balances. There are fewer missing data for the variables in the later stage of HAMP since data availability generally improved as the borrower moved through the modification process. Using categorical format helps to avoid the exclusion of variables with missing observations. We used a multivariate regression technique for all the HAMP outcomes, rather than tabular analysis, which allowed us to control for several potential confounding factors, including credit risk-related factors, for which loan, borrower, property, and neighborhood characteristics serve as proxies. In particular, we included several variables that are used to capture the creditworthiness of borrowers in mortgage markets, such as the delinquency status of borrowers before the modification, FICO credit scores at modification, the debt-to-income (DTI) ratios for both the front-end before modification and back-end after modification, and the LTV of the property at origination and at the time of the modification. We also used income cohorts for borrowers relative to incomes in their geographic areas (metropolitan statistical areas—MSAs) instead of nationally. Also, we measured default and redefault by the age of the loan since modification, which is important since default rates generally vary over time. Some of these are important differences between our study and Mayer and Piven, which is the closest of all the previous studies to ours in terms of the data used, the issues addressed, and the time period analyzed. Mayer and Piven argued that, overall, the fair lending populations did not experience differential outcomes compared to their comparison populations, and thus that race, ethnicity, gender, or income has "very little" impact on borrowers' successful participation in HAMP, as well as on the benefits of the program, at every key stage of the program. Their results nonetheless suggest that the authors found some disparities for certain groups. For instance, non-Hispanic African-Americans, compared to whites, were more likely to redefault; Hispanics, compared to non-Hispanics, were more likely to have their trial or permanent modifications cancelled; women were at least as successful as men with respect to the HAMP outcomes analyzed; and low-income borrowers were less likely to redefault on their permanent modifications compared to higher-income borrowers.
Moreover, their study assessed overall HAMP program outcome results and did not analyze potential outcome differences and actions of individual servicers because the data set used for the analysis—the HAMP general public data file—did not contain variables that could be used to identify the servicer of the loan. A study by the California Reinvestment Coalition of HAMP trial modifications in four MSAs in California found racial and ethnic disparities in the experiences of borrowers, which the authors argued was supported by their survey of housing counselors. The analysis involved tabulation rather than multivariate regression analysis and did not consider the effects of servicers due to the same limitation the Mayer and Piven study faced: the lack of servicer-identifying information in the data set used for the analysis. The National Community Reinvestment Coalition conducted a study of distressed homeowners who sought assistance from NCRC's Housing Counseling Network. The data were collected over a 2-month period in 2010 from 29 organizations and 179 borrowers. The 179 respondents included both HAMP-eligible and noneligible borrowers. The findings related to fair lending included the following: servicers foreclosed on delinquent non-Hispanic African-American borrowers more quickly than on their white or Hispanic counterparts, and HAMP-eligible white borrowers were almost 50 percent more likely to receive a modification than their non-Hispanic African-American counterparts. The study acknowledged the limitation that it did not use a nationally representative sample of distressed homeowners. Furthermore, similar to the CRC study, the analysis used tabulation rather than multivariate regression. Voicu et al. analyzed redefault rates using data for New York City for HAMP and proprietary (non-HAMP) loan modifications from January 2008 to November 2010. While they found that borrowers who received HAMP modifications were less likely to redefault compared to those who received proprietary modifications, the borrower's race or ethnicity was not significantly correlated with the odds of redefault. The analysis covered a limited geographic market and did not include outcomes in the early stage of HAMP. Based on economic reasoning, data availability, and previous studies on loan modifications, we used a relatively flexible specification to estimate the outcome of a loan at certain stages of the HAMP process. The general regression specification for the models is y = Xβ + Zδ + ε, where y is the HAMP outcome measure being assessed, such as whether a loan was eligible for trial modification compared to being denied due to DTI less than 31 percent, ineligible mortgage, or not in imminent default (a multinomial outcome), or whether a loan remained current or redefaulted within 12 months of the permanent modification (a binomial outcome); X represents the fair lending and other selected populations (the income-related variables could not be used in the equations for the early stage of the HAMP process due to lack of data); Z represents a series of control variables, including other borrower characteristics, the loan, property, neighborhood, modification terms, geographic and time effects, as well as investor/lender and servicer effects; β and δ are the parameters to be estimated; and ε represents an error term.
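Specifications of this form are standard discrete-outcome models. As a rough illustration only, the following minimal sketch shows how a binary version of the specification could be fit with a weighted logistic regression, including the treatment of unavailable race/ethnicity as its own category described earlier. All data, column names, and the use of statsmodels' freq_weights (as a stand-in for true probability weighting) are our own assumptions, not the actual IR/2 variables or the estimation code used for this analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative sketch only: fit a binary version of y = Xb + Zd + e with a
# weighted logistic regression. All data and names below are hypothetical,
# and freq_weights is a rough stand-in for probability weighting.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "redefault": rng.integers(0, 2, n),                  # binomial outcome y
    "race_ethnicity": rng.choice(
        ["nh_white", "nh_black", "hispanic", None], n),  # selected populations (X)
    "fico": rng.normal(620, 50, n),                      # controls (Z)
    "ltv": rng.normal(110, 20, n),
    "weight": rng.uniform(0.5, 2.0, n),                  # analysis weight
})

# Keep loans with unavailable race/ethnicity in the model as their own
# category, with non-Hispanic whites as the omitted comparison group.
race = df["race_ethnicity"].fillna("unavailable")
dummies = pd.get_dummies(race, prefix="race").drop(columns="race_nh_white")

X = sm.add_constant(pd.concat([dummies, df[["fico", "ltv"]]], axis=1)).astype(float)
fit = sm.GLM(df["redefault"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()
print(fit.summary())  # coefficients are log-odds differences vs. the omitted group
```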
We estimated the regression models using the logistic technique for pooled data for all the servicers. We used probability weights that were based on the distribution of the outcomes at a given stage of the HAMP process because the sample used for the regression, especially in the early stage of HAMP, differed from the full data due to missing observations for certain key variables, including the fair lending and other selected populations and their comparison populations. The estimated differences presented below for the selected populations are statistically significant, and most of the control variables are also significant at the 5 percent level or better, with effects (the direction of their impacts) generally consistent with our expectations. We present below the results of the HAMP outcomes we analyzed for four of the large servicers with significant HAMP activity, where there are statistically significant differences between the fair lending and other selected populations and their comparison populations. Although our results show adverse as well as favorable outcomes for the selected populations compared to their comparison populations, we focus on cases where the outcomes were unfavorable to the selected populations because they are underrepresented in housing and mortgage markets. This approach is generally consistent with the focus of fair lending analysis on adverse outcomes for protected groups. Also, we focus below on the effects where the predicted probability of an outcome for all borrowers is 10 percent or more. Although this threshold has no strong statistical, economic, or legal justification, it helps us to focus on the more important findings and is therefore appropriate for the diagnostic purpose of our study. The complete estimated probabilities are presented in tables 3 through 5. We compared borrowers who are ineligible for trial modification—due to their debt-to-income (DTI) ratios being less than 31 percent, their mortgages being ineligible, or their being found not to be in imminent danger of default—to those eligible for trial modification. The estimates are based on a multinomial logistic regression of denial of application for these three reasons using pooled data for all servicers. The main results, from table 3, are as follows. Overall, the denial rate of borrowers because their DTI was determined to be less than 31 percent was about 11 percent. We found statistically significant differences in the denial rates of trial modification between fair lending populations and their comparison populations due to the servicer's determination that the borrower's DTI was less than 31 percent. The difference in denial rates between non-Hispanic African-American borrowers and their comparison group of non-Hispanic whites was at least 13 percent higher for all four large servicers. The difference in denial rates between non-Hispanic American Indian, Alaska Native, Native Hawaiian, and Other Pacific Islander (collectively referred to as AIPI in this appendix) borrowers and their comparison group of non-Hispanic whites was at least 11 percent higher for two of the large servicers we analyzed. Borrowers with unavailable information on their race/ethnicity (these borrowers have properties in substantially minority areas) or gender had at least 22 and 15 percent higher denial rates, respectively, than their comparison populations for all four large servicers.
Borrowers in substantially minority areas had at least 3 percent higher denial rates than their comparison populations of borrowers in nonsubstantially minority areas for all four large servicers. On the other hand, non-Hispanic Asians, Hispanics, and females had generally lower denial rates than their comparison populations. Although we found some disparities between the selected populations and their comparison populations for denials due to servicers' determination that borrowers had ineligible mortgages or were not in imminent danger of default, the results are not discussed since the overall denial rates are small. We also compared borrowers who were denied for reasons not related to eligibility (an incomplete request, an approved offer not accepted by the borrower or a withdrawn request, or excessive forbearance) to those eligible for trial modification. Denial rates because borrowers had not provided complete information were significantly higher for Hispanics than for their comparison population of non-Hispanic whites for three of the four large servicers we analyzed. (One study found that actions by both servicers and homeowners are consistent with missing documentation: servicers have an incentive to "lose" the documentation of borrowers with low credit risks in order to "steer" them away from HAMP to their own (proprietary) less costly modification programs, while borrowers with high risks have less incentive or are unable to provide complete documentation to support the reason for their "hardships," as well as having difficulty in fulfilling the HAMP requirements. See J. Karikari, "Why Homeowners' documentation went missing under the Home Affordable Mortgage Program (HAMP)?: An analysis of strategic behavior of homeowners and servicers," Journal of Housing Economics, vol. 22 (2013): 146-162.) The denial rate for borrowers with unavailable information on their race/ethnicity was also higher than for their comparison population; these borrowers' properties were located disproportionately in substantially minority areas. Also, the difference in denial rates for borrowers with unavailable information on their gender was at least 24 percent higher for two of the three large servicers we analyzed. Borrowers in substantially minority areas had about 1 percent higher denial rates than their comparison populations of borrowers in nonsubstantially minority areas for two of the three large servicers we analyzed. On the other hand, non-Hispanic AIPI borrowers and females were less likely to be denied than their comparison populations. Although we found some disparities between the fair lending and other selected populations and their comparison populations for denials due to borrowers not accepting their approved offers or for excessive forbearance, the results are not discussed since the overall denial rates are small. We compared borrowers whose trial modification was cancelled because their DTI was less than 31 percent, their mortgage was ineligible, or their property was not owner-occupied to those borrowers who were eligible for permanent modification. The estimates are based on a multinomial logistic regression of cancellation of trial modification for these three reasons using pooled data for all servicers. Overall, the cancellation rate for borrowers with a DTI less than 31 percent was about 0.8 percent. The cancellation rate was about 0.3 percent for borrowers with ineligible mortgages and 0.3 percent for those with non-owner-occupied properties. Although we found some differences between the fair lending and other selected populations and their comparison populations, the results are not discussed since the overall cancellation rates were small. We compared borrowers whose trial modifications were cancelled because the servicer determined their request was incomplete, the borrower did not accept the offer they received or withdrew their request for trial modification, or the servicer determined that modifying the mortgage would result in a negative NPV.
The estimates are based on a multinomial logistic regression of cancellation of trial modification for these three reasons using pooled data for all servicers. Overall, the cancellation rate for borrowers with incomplete requests was about 4 percent. The rate was about 3 percent for borrowers who did not accept their approved offer or withdrew their request and 0.3 percent for loans that would have a negative NPV if modified. Although we found some differences between the fair lending and other selected populations and their comparison populations, the results are not discussed since the overall cancellation rates were small. We compared borrowers whose trial modification was cancelled for payment default (i.e., their loans became 30 days or more delinquent) within 6 months of the modification to those borrowers who were approved for permanent modification. Overall, the estimated default rate of trial modifications was 4 percent. Although we found differences between the fair lending and other selected populations and their comparison populations, the results are not discussed since the overall cancellation rate was small. We compared borrowers who received a permanent modification but redefaulted (i.e., their loans became 90 days or more delinquent) within 12 months of the modification to those borrowers who remained current on their payments (or paid off the loan). The estimates are based on a binary logistic regression of redefault using pooled data for all the servicers and including the three modification types—capitalization, principal forgiveness, and principal forbearance—as covariates. The main results, from table 5, are as follows. Overall, the estimated redefault rate of permanent modifications was 11 percent. The redefault rates differ by the modification type—capitalization, principal forgiveness, or principal forbearance. The use of principal forgiveness results in lower redefault rates compared to capitalization or principal forbearance: capitalization increases the redefault rate by 3 percentage points, principal forgiveness decreases it by 3 percentage points, and principal forbearance lowers it by 1 percentage point. We found disparities in redefault rates between certain fair lending populations and their comparison populations. The difference in redefault rates between non-Hispanic African-Americans and non-Hispanic whites whose loans were serviced by any of the four large servicers was about 14 percent higher, irrespective of the modification type. The difference in redefault rates between non-Hispanic AIPI borrowers and non-Hispanic whites whose loans were serviced by any of the three large servicers we analyzed was about 7 percent higher, irrespective of the modification type. The differences in redefault rates for borrowers with unavailable information on their race and income were about 5 and 18 percent higher, respectively, compared to their comparison populations, irrespective of the modification type. However, non-Hispanic Asians, Hispanics, borrowers in substantially minority areas, and borrowers with low, moderate, and middle incomes were less likely to redefault compared to their comparison populations.
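To illustrate how percentage-point effects of the kind reported above can be produced from such a model, the following is a minimal sketch that fits a binary redefault logit with the three modification types as covariates and reports average marginal effects. The data and variable names are hypothetical, not the actual IR/2 fields or our estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative sketch only: binary redefault logit with the three
# modification types (percent changes in loan balance) as covariates.
# All data and variable names are hypothetical.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "redefault_12m": rng.integers(0, 2, n),
    "capitalization_pct": rng.uniform(0, 10, n),
    "forgiveness_pct": rng.uniform(0, 30, n),
    "forbearance_pct": rng.uniform(0, 20, n),
    "nh_black": rng.integers(0, 2, n),  # selected-population indicator
})

X = sm.add_constant(df.drop(columns="redefault_12m")).astype(float)
fit = sm.Logit(df["redefault_12m"].astype(float), X).fit(disp=False)

# Average marginal effects: the change in redefault probability per unit
# change in each covariate, i.e., effects in percentage-point terms.
print(fit.get_margeff(at="overall").summary())
```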
Furthermore, we included area incomes by groups. While the results were consistent with the key findings reported in tables 3 to 5, we also found that households in low-to-moderate income areas were more likely to be denied for DTI less than 31 percent and that households in middle-income areas were more likely to be denied for incomplete requests, compared to their comparison populations of households in high-income areas. Also, households in areas with unavailable median incomes were more likely to redefault on their permanent modifications compared to their comparison population. We took several steps to check these regression models for robustness, particularly for the key findings on denials of HAMP applications due to a servicer's determination that the borrower's DTI was less than 31 percent, cancellation of trial modification due to the servicer determining the request was incomplete, and redefaults of permanent modification. Specifically, we did the following: estimated the outcomes separately for each of the four large servicers; estimated the models excluding the observations for the missing fair lending and other selected populations; estimated the model comparing Hispanics to non-Hispanics (the majority comparison group); restricted the data to the period since December 2009, when servicers were required to collect fair lending-related data; for the estimation of redefault rates of permanent modifications, used loans that had aged 24 months since the permanent modification; and estimated the models without probability weights. These checks were consistent with the reported key findings of differences between the fair lending and other selected populations and their comparison populations. Nonetheless, there are limitations of this study, including limited data on the selected populations and the lack of certain variables that could help capture the credit risks of borrowers and the loans, such as the wealth of the borrowers and their knowledge of the loan modification process and, in particular, whether borrowers have accessed housing counseling services. Also, as noted, this analysis concerns potential disparate outcomes for some populations and does not mean to imply disparate treatment by some servicers or that borrowers experienced disparate impact in violation of fair lending laws, so the findings in this study should be interpreted cautiously and further analysis may be appropriate. In addition to the contact named above, Harry Medina (Assistant Director), Bethany Benitez, Emily Chalmers, William R. Chatlos, Lynda Downing, John Karikari, Anar Ladhani, John Lord, Thomas J. McCool, Susan Offutt, Jena Sinkfield, Anne Y. Sit-Williams, Jim Vitarello, and Heneng Yu made key contributions to this report. Agarwal, S., G. Amromin, I. Ben-David, S. Chomsisengphet, T. Piskorski, and A. Seru. “Policy Intervention in Debt Renegotiation: Evidence from the Home Affordable Modification Program.” NBER Working Paper Series, No. 18311, August 2012. Berkovec, J., G. Canner, S. Gabriel, and T. Hannan. “Race, Redlining, and Residential Mortgage Loan Performance.” Journal of Real Estate Finance and Economics, vol. 9, no. 3 (1994): 263-294. California Reinvestment Coalition (CRC). Race to the Bottom: An Analysis of HAMP Loan Modification Outcomes By Race and Ethnicity for California (July 2011). Cheng, P., Z. Lin, and Y. Liu. “Do Women Pay More for Mortgages?” Journal of Real Estate Finance and Economics, vol. 43, no. 4 (2011): 423-440. Collins, J., K. Lam, and C. Herbert.
Treasury introduced MHA in February 2009 and indicated that up to $50 billion would be used to help 3 to 4 million struggling homeowners avoid potential foreclosure. Since then, questions have been raised about participation rates and the overall success of the program. The Emergency Economic Stabilization Act of 2008 requires GAO to report every 60 days on the Troubled Asset Relief Program (TARP) activities. This 60-day report examines (1) the status of MHA and steps Treasury has taken to increase program participation, (2) Treasury's oversight of the MHA-related fair lending internal controls of servicers, and (3) Treasury's and MHA servicers' policies and practices for ensuring that LEP borrowers have equal access to the program. For this work, GAO reviewed program documentation, analyzed HAMP loan-level data, and interviewed officials from Treasury, fair lending supervisory institutions, and the five largest MHA servicers. Participation rates in the Home Affordable Modification Program (HAMP), a key component of the Making Home Affordable program (MHA), peaked in early 2010, generally declined during 2011, and remained relatively steady from 2012 through November 2013. As of November 2013, about 1.3 million borrowers had entered into a HAMP permanent modification. Treasury has made several efforts to increase participation, such as extending the program deadline through December 2015, expanding program eligibility requirements, and initiating the MHA Outreach and Borrower Intake Project. This project provides funding to counseling agencies to help borrowers complete and submit MHA application packages. The project was scheduled to end in December 2013 but was recently extended through September 2014. Treasury requires MHA servicers to develop internal control programs that monitor compliance with fair lending laws (the Fair Housing Act and Equal Credit Opportunity Act) but has not assessed the extent to which servicers are meeting this requirement. Treasury noted that it shares HAMP loan-level data with the federal agencies responsible for fair lending enforcement. GAO's analysis of HAMP loan-level data for four large MHA servicers identified some statistically significant differences in the rate of denials and cancellations of trial modifications and in the potential for redefault between populations protected by fair lending laws and other populations. Such analysis by itself cannot account for all factors that could explain these differences. Reviewing the fair lending internal controls of MHA servicers could give Treasury additional assurance that servicers are complying with fair lending laws. Despite an Executive Order issued in 2000 and a 2011 Attorney General's memorandum regarding improving access to federal programs for limited English proficiency (LEP) persons, Treasury only recently developed LEP-related written guidelines and procedures for the MHA programs. Treasury has taken measures to reach out to these borrowers and requires servicers to have a policy for “effective relationship management” with LEP borrowers. However, Treasury has not provided any clarifying guidance to servicers on what such a policy should contain or assessed servicer compliance with this requirement. Housing counselors have noted that LEP borrowers continue to encounter language-related barriers in obtaining access to MHA program benefits. 
Without a comprehensive strategy that includes guidance for servicers on engaging with LEP borrowers and monitoring of servicers, Treasury cannot ensure that all potential MHA participants have equal access to program benefits. Because the MHA program provides direct outlays of taxpayer dollars, it is important that Treasury take appropriate steps to ensure that all eligible borrowers, including those whose primary language is not English, have access to MHA program benefits. Treasury should (1) assess the extent to which servicers have established internal control programs to monitor compliance with fair lending laws, (2) issue guidance to servicers on working effectively with LEP borrowers, and (3) monitor servicers’ compliance with the guidance. Treasury noted that it was considering GAO’s recommendations and agreed that it should continue to strengthen its program. Treasury also provided technical comments that were incorporated into the report as appropriate.
SAFETEA-LU authorized a total of $45.3 billion for a variety of transit programs, including financial assistance to states and localities to develop, operate, and maintain transit systems from fiscal year 2005 through fiscal year 2009. Under one program, New Starts, FTA identifies and selects fixed guideway transit projects for funding—including heavy, light, and commuter rail; ferry; and certain bus projects (such as bus rapid transit). The New Starts program serves as an important source of federal funding for the design and construction of transit projects throughout the country. FTA generally funds New Starts projects through FFGAs, which establish the terms and conditions for federal participation in a New Starts project and also define a project’s scope, including the length of the system and the number of stations; its schedule, including the date when the system is expected to open for service; and its cost. For a project to obtain an FFGA, it must progress through a local or regional review of alternatives and meet a number of federal requirements, including requirements for information used in the New Starts evaluation and rating process (see fig. 1). As required by SAFETEA-LU, New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different corridor-level options, such as rail lines or bus routes. The alternatives analysis phase results in the selection of a locally preferred alternative—which is intended to be the New Starts project that FTA evaluates for funding, as required by statute. After a locally preferred alternative is selected, project sponsors submit a request to FTA for entry into the preliminary engineering phase. Following completion of preliminary engineering and federal environmental requirements, the project may be approved by FTA to advance into final design, after which the project may be approved by FTA for an FFGA and proceed to construction, as provided for in statute. FTA oversees grantee management of projects from the preliminary engineering phase through construction and evaluates the projects for advancement into each phase of the process, as well as annually for the New Starts report to Congress. To help inform administration and congressional decisions about which projects should receive federal funds, FTA assigns ratings on the basis of various financial and project justification criteria, and then assigns an overall rating. For the fiscal year 2007 evaluation cycle, FTA primarily used the financial and project justification criteria identified in TEA-21. These criteria reflect a broad range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system (see fig. 2). Projects are rated at several points during the New Starts process—as part of the evaluation for entry into preliminary engineering and final design, and yearly for inclusion in the New Starts annual report. FTA assigns the proposed project a rating for each criterion and then assigns a summary rating for local financial commitment and project justification. Finally, FTA develops an overall project rating. 
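The two-stage rollup just described can be sketched in a few lines. The combining rules used below (median for each summary rating, minimum for the overall rating) are illustrative assumptions only; FTA’s actual decision rules are those referenced in table 1 of the report, which is not reproduced here.

```python
# Hypothetical illustration of the two-stage rating rollup: criterion ratings
# feed a summary rating for each area, and the two summaries combine into an
# overall rating. The median/minimum rules are assumptions for illustration;
# FTA's actual decision rules appear in table 1 of the report.
from statistics import median

SCALE = {"low": 0, "medium": 1, "high": 2}
LABEL = {v: k for k, v in SCALE.items()}

def summary_rating(criterion_ratings):
    """Collapse individual criterion ratings into one summary rating."""
    return LABEL[round(median(SCALE[r] for r in criterion_ratings))]

def overall_rating(financial, project_justification):
    """Assumed rule: a project is only as strong as its weaker summary."""
    return LABEL[min(SCALE[summary_rating(financial)],
                     SCALE[summary_rating(project_justification)])]

# Example: strong finances cannot offset a weaker justification summary.
print(overall_rating(["medium", "high", "high"],
                     ["medium", "medium", "low", "high"]))  # -> "medium"
```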
The exceptions to this process are statutorily “exempt” projects, which are those with requests for less than $25 million in New Starts funding. These projects are not required to submit project justification information—although FTA encourages their sponsors to do so—do not receive ratings from FTA, and are not eligible for FFGAs; thus, the number of projects in preliminary engineering or final design may be greater than the number of projects evaluated and rated by FTA. As required by statute, the administration uses the FTA evaluation and rating process, along with the stage of development of New Starts projects, to decide which projects to recommend to Congress for funding. Although many projects receive a summary rating that would make them eligible for FFGAs, only a few are proposed for FFGAs in a given fiscal year. FTA proposes projects for FFGAs when it believes that the projects will be able to meet certain conditions during the fiscal year for which funding is proposed. These conditions include the following: All non-New Starts funding must be committed and available for the project. The project must be in the final design phase and have progressed to the point where uncertainties about costs, benefits, and impacts (e.g., environmental or financial) are minimized. The project must meet FTA’s tests for readiness and technical capacity, which confirm that there are no cost, project scope, or local financial commitment issues remaining. FTA’s Annual Report on New Starts: Proposed Allocations of Funds for Fiscal Year 2007 (annual report) identified 24 projects in preliminary engineering and final design (see fig. 3). FTA evaluated and rated 20 of these projects, and 4 projects were statutorily exempt from being rated because their sponsors requested less than $25 million in New Starts funding. FTA evaluated and rated fewer projects during the fiscal year 2007 cycle than in fiscal year 2006. According to FTA, this decrease occurred because 12 proposed projects are no longer in preliminary engineering or final design. FTA stated in its annual report that the sponsors of these projects have either (1) fully implemented the project; (2) received the total New Starts funding requested to implement the project; (3) terminated or suspended project development activities; (4) withdrawn from the New Starts process while they address outstanding issues; or (5) decided not to pursue New Starts funding. Of the 20 projects that were rated in the fiscal year 2007 evaluation cycle, 1 was rated as “high,” 17 were rated as “medium,” and 2 were rated as “low.” Under TEA-21, during fiscal years 2000 through 2006, FTA designated projects as highly recommended, recommended, or not recommended, based on the results of FTA’s evaluation of each of the criteria for project justification and local financial commitment. SAFETEA-LU replaced this rating scale with a 5-point scale of high, medium-high, medium, medium-low, and low. To help transition to the new rating scale, FTA used a 3-point scale of high, medium, and low for the fiscal year 2007 evaluation cycle, but used the same decision rules to determine overall project ratings as it did in previous years (see table 1). According to FTA officials, FTA intends to work closely with the industry to implement the SAFETEA-LU provisions so that they can be applied in subsequent annual project evaluation cycles.
In addition, FTA’s current schedule anticipates that the final rule will be completed in time to use the 5-point scale for the fiscal year 2010 evaluation cycle. FTA’s evaluation process informed the administration’s recommendation to fund 12 projects. FTA recommended five projects for new FFGAs. The total capital cost of these five projects is estimated to be $3.3 billion, of which the total federal New Starts share is expected to be $1.9 billion. In addition, FTA recommended funding for two projects with pending FFGAs. The total capital cost of these two projects is estimated to be $8.2 billion, of which the total federal New Starts share is expected to be $2.8 billion. FTA also recommended reserving $101.9 million in New Starts funding for five “other projects.” In its annual report, FTA stated that four of the five other projects (1) were in or nearing final design, (2) received overall medium or higher ratings, and (3) had medium or better cost-effectiveness ratings, or (4) were exempt from the requirement to achieve a medium cost-effectiveness rating. According to FTA, no other project in preliminary engineering or final design met these criteria. The fifth project—Washington, D.C., Largo Metrorail Extension—did not meet these criteria but was congressionally designated for funding in SAFETEA-LU. Similar to last year, FTA did not specify funding levels for the five other projects because it wanted to ensure that the projects were moving forward as anticipated before making specific funding recommendations to Congress. FTA also notes in its annual report that some projects may encounter unexpected obstacles that slow their progress. For example, FTA stated that some of the projects must still complete the environmental planning process and address FTA-identified concerns related to capital costs or project scope. Reserving funds for these projects without specifying a particular amount for any given project will allow the administration to make “real time” funding recommendations when Congress is making appropriations decisions. FTA does not expect that all five other projects will be recommended for funding in fiscal year 2007. (See table 2 for more information about the 12 projects recommended for funding.) The administration’s fiscal year 2007 budget proposal requests that $1.47 billion be made available for the New Starts program. This total includes funding for 16 projects already under an FFGA. Figure 4 illustrates the planned uses of the administration’s proposed fiscal year 2007 budget for New Starts, including the following: $571.9 million would be shared among the 16 projects with existing FFGAs, $355 million would be shared between the 2 projects with pending FFGAs, $302.6 million would be shared by the 5 projects proposed for new FFGAs, $101.9 million would be shared by as many as 5 “other” projects to continue their development, and $100 million would be used for new Small Starts projects. In January 2006, FTA proposed nine procedural changes for the New Starts program beginning with the fiscal year 2008 evaluation cycle. These changes include linking the New Starts and NEPA planning requirements and processes and capping New Starts funding when projects enter the final design phase. FTA’s guidance states that these procedural changes are generally intended to improve the management of the New Starts process and to ensure the accuracy and consistency of the information submitted to the agency as part of the New Starts evaluation and rating process.
According to FTA, these procedural changes do not alter the New Starts evaluation and rating framework, and they are not subject to the formal rule-making process. Table 3 summarizes the proposed procedural changes and FTA’s rationale for proposing these changes. As we have previously recommended and SAFETEA-LU now requires, FTA published its proposed procedural changes in policy guidance and sought public comments on them. FTA obtained comments on its proposals by asking sponsors to submit comments to the docket for up to 60 days. In addition, FTA held three New Starts/Small Starts Seminar and Listening Sessions (“listening sessions”) across the country. The listening sessions were intended to solicit comments from attendees on the implementation of New Starts and Small Starts provisions of SAFETEA-LU, as well as to share information about planning and project development activities for projects seeking New Starts funding. FTA received 41 written comments in response to these changes, including submissions from 33 transit agencies and government entities and 8 consultants, associations, and organizations. Most of the project sponsors and industry representatives we interviewed told us that they appreciated FTA’s efforts to obtain their input and to encourage an open discussion about the proposed changes. Similarly, FTA officials said that they were pleased with the volume of written comments they received from the docket and the strong attendance at the three listening sessions conducted in February and March 2006. Although the project sponsors and industry representatives were supportive of some proposals that they thought would improve the New Starts program, they also expressed a number of concerns about all of the changes. (See table 4 for a summary of these concerns.) For example, the commenters were generally supportive of FTA’s proposal to require sponsors to keep and update the information produced during alternatives analysis prior to each phase of project development until the FFGA is awarded, since this information is necessary for the before-and-after study. In contrast, most project sponsors and transit industry groups opposed FTA’s proposed certification of technical methods, planning assumptions, and project development procedures, citing concerns that such a certification would raise questions about professional liability and lead to potential federal prosecution, and noting that a single individual is typically not responsible for producing all the underlying assumptions used to develop cost estimates and ridership forecasts. On the basis of the comments received, FTA adopted four proposals, including the mandatory completion of NEPA scoping before entry into preliminary engineering (PE), the presentation of the New Starts information in the NEPA documents, the preservation of information for the before-and-after study, and the capping of New Starts funds upon approval into final design. For two of the four adopted proposals, FTA slightly revised its original proposals on the basis of the comments received. FTA did not adopt five proposals; however, FTA noted that it may revisit these proposed changes in the future. More recently, FTA hired a consulting firm to conduct an assessment of the New Starts project development process. According to FTA’s Deputy Administrator, the impetus for the review is to streamline the project development process while still ensuring that projects recommended for funding are delivered in a timely manner and stay within budget.
We have previously reported that project sponsors have raised concerns about the number of changes FTA has made to the New Starts process, such as requiring project sponsors to prepare risk assessments, and the time and cost associated with implementing these changes. According to FTA, the results of the review may help inform the development of the Notice of Proposed Rulemaking (NPRM) for the New Starts program. SAFETEA-LU made a number of changes to the New Starts program, including establishing a new eligibility category, the Small Starts program, and identifying new evaluation criteria. The Small Starts program is intended to expedite and streamline the application and review process for small projects, but the transit community has questioned whether FTA would implement the program in a way that would do so. FTA has also proposed and sought public input on the new evaluation criteria and other possible changes to the New Starts program that would affect traditional New Starts projects. In addition, FTA identified possible implementation challenges, including how to distinguish between land use and economic development criteria in the evaluation framework. SAFETEA-LU introduced eight changes to the New Starts program, codified an existing practice, and clarified federal funding requirements. The changes include the creation of the Small Starts program and the introduction of new evaluation criteria, such as economic development. In addition, SAFETEA-LU codified FTA’s requirement that project sponsors conduct before and after studies for all completed projects. SAFETEA-LU also clarified the federal share requirements for New Starts projects. Specifically, SAFETEA-LU continues to require that the federal share for a New Starts project may be up to 80 percent of the project’s net capital project cost, unless the project sponsor requests a lower amount, and prohibits the Secretary of Transportation from requiring a nonfederal share of more than 20 percent of the project’s total net capital cost. This language changes FTA’s policy of rating a project as low if it seeks a federal New Starts share of more than 60 percent of the total cost. FTA had instituted this policy beginning with the fiscal year 2004 evaluation cycle in response to language contained in appropriation committee reports. Table 5 describes SAFETEA-LU provisions for the New Starts program and compares them with TEA-21’s requirements. FTA has taken some initial steps in implementing SAFETEA-LU changes. For example, in January 2006, FTA published the proposed New Starts policy guidance and, as will be discussed later in this report, the ANPRM for the Small Starts program. In addition, in the final policy guidance published in May 2006, FTA took steps to support its use of incentives for accurate cost and ridership forecasts and assessing contractors’ performance by requiring that projects requesting entry into PE submit information on the variables and assumptions used to prepare forecasts and the parties responsible for developing the different elements of the forecasts. FTA will continue to implement the changes outlined in SAFETEA-LU through the rule-making process over the next 1½ years. Specifically, in response to SAFETEA-LU changes, FTA is developing the NPRM for the New Starts and Small Starts programs. FTA plans to issue the NPRM in January 2007, with the goal of implementing the final rule in January 2008. Figure 5 shows a time line of FTA’s actual and planned implementation of SAFETEA-LU changes. 
The creation of the Small Starts program was a significant change made by SAFETEA-LU. The Small Starts program is a discretionary grant program for public transportation capital projects that (1) have a total cost of less than $250 million and (2) are seeking less than $75 million in federal Small Starts program funding. The Small Starts program is a component of the existing New Starts program that, according to the conference reports accompanying SAFETEA-LU, is intended to provide project sponsors with an expedited and streamlined evaluation and ratings process. Table 6 compares New Starts and Small Starts program statutory requirements. In January 2006, FTA published an ANPRM to give interested parties an opportunity to comment on the characteristics of and requirements for the Small Starts program. In its ANPRM, FTA suggested that the planning and project development process for proposed Small Starts projects could be simplified by allowing analyses of fewer alternatives for small projects, allowing the development of evaluation measures for mobility and cost-effectiveness without the use of complicated travel demand modeling procedures in some cases, and possibly defining some classes of preapproved low-cost improvements as effective and cost-effective in certain contexts. FTA also sought the transit community’s input on three key issues in its ANPRM: eligibility, the rating and evaluation process, and the project development process. For each of these issues, FTA outlined different options for how to proceed and then posed questions for public comment. FTA’s ANPRM for Small Starts generated a significant volume of public comment. Members of the transit community were supportive of some proposals for the Small Starts program, but also had a number of concerns. In particular, the transit community questioned whether FTA’s proposals would, as intended, provide smaller projects with a more streamlined evaluation and rating process. As a result, some commenters recommended that FTA simplify some of its original proposals in the NPRM to reflect the smaller scope of these projects. For example, several project sponsors and industry representatives thought that FTA should redefine the baseline alternative as the “no-build” option and make the before-and-after study optional for Small Starts projects to limit the time and cost of their development. In addition, others were concerned that FTA’s proposals minimized the importance of the new land use and economic development evaluation criteria introduced by SAFETEA-LU, and they recommended that the measures for land use and economic development be revised. Since FTA does not plan to issue its final rule for the New Starts and Small Starts programs until early 2008, FTA issued final interim guidance for the Small Starts program in July 2006 to ensure that project sponsors would have an opportunity to apply for Small Starts funding and proposed projects could be evaluated in the upcoming cycle (i.e., the fiscal year 2008 evaluation cycle). The final interim guidance describes the process that FTA plans to use to evaluate proposed Small Starts projects to support (1) the decision to approve or disapprove their advancement to project development and (2) decisions on project construction grant agreements, including whether proposed projects are part of a broader strategy to reduce congestion. In addition, FTA introduced a separate eligibility category within the Small Starts program for “Very Small Starts” projects in the final interim guidance.
Small Starts projects that qualify as Very Small Starts are projects that have all of the following elements: have substantial transit stations; include traffic signal priority and preemption, where appropriate; provide low-floor vehicles or level boarding; include branding of the proposed service; offer 10-minute peak and 15-minute off-peak headways or better while operating at least 14 hours per weekday; are in corridors with existing riders who will benefit from the proposed project and who number more than 3,000 on an average weekday; and have a total capital cost of less than $50 million (including all project elements) and less than $3 million per mile (excluding rolling stock). According to the final interim guidance, FTA intends to scale the planning and project development process to the size and complexity of the proposed projects. Therefore, Very Small Starts projects will undergo a very simple and streamlined evaluation and rating process. For instance, according to the guidance, Very Small Starts projects are cost-effective and produce land use and economic development benefits commensurate with their costs; thus, if a project meets the Very Small Starts eligibility criteria, it will automatically receive “medium” ratings for land use and cost-effectiveness. Small Starts projects that do not meet all of the criteria for Very Small Starts projects will be evaluated and rated using a framework similar to that used for traditional New Starts projects, with the exception that fewer measures are required and their development is simplified. In particular, FTA’s evaluation and rating process for Small Starts will diverge from the traditional New Starts process in several ways. For example, the project’s cost-effectiveness will be rated based on a shorter time frame (i.e., opening year); other technically acceptable ridership forecasting procedures, besides traditional “four-step” travel demand models, can be used; the opening year’s estimate of user benefits will be adjusted upward when determining a project’s cost-effectiveness; the financial and land use reporting requirements have been simplified; and the project’s economic development benefits and inclusion in a congestion reduction strategy will be considered an “other factor” in the evaluation process.
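To show how mechanical the Very Small Starts screen is, the sketch below encodes the eligibility elements listed above as a simple check. The field names are hypothetical, and FTA’s final interim guidance, not this sketch, is authoritative.

```python
# Illustrative screen of the Very Small Starts eligibility elements listed
# above. Field names are hypothetical; FTA's final interim guidance, not
# this sketch, is authoritative.
from dataclasses import dataclass

@dataclass
class Project:
    substantial_stations: bool
    signal_priority_where_appropriate: bool
    low_floor_or_level_boarding: bool
    branded_service: bool
    peak_headway_min: int           # minutes between vehicles, peak
    offpeak_headway_min: int        # minutes between vehicles, off-peak
    weekday_service_hours: float
    existing_weekday_riders: int    # riders in the corridor today
    total_cost_millions: float      # all project elements
    cost_per_mile_millions: float   # excluding rolling stock

def is_very_small_start(p: Project) -> bool:
    """All elements must hold; meeting them also yields automatic
    'medium' ratings for land use and cost-effectiveness."""
    return (p.substantial_stations
            and p.signal_priority_where_appropriate
            and p.low_floor_or_level_boarding
            and p.branded_service
            and p.peak_headway_min <= 10
            and p.offpeak_headway_min <= 15
            and p.weekday_service_hours >= 14
            and p.existing_weekday_riders > 3000
            and p.total_cost_millions < 50
            and p.cost_per_mile_millions < 3)
```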
In response to SAFETEA-LU, FTA identified possible changes to the New Starts program that would affect traditional New Starts projects in its January 2006 guidance. According to FTA, some SAFETEA-LU provisions could lead to changes in the definition of eligibility, the evaluation and rating process, and the project development process. (See app. II for a description of the different changes FTA is considering.) In the guidance, FTA outlined changes it is considering and solicited public input, through a series of questions, on the potential changes. For example, FTA identified two options for revising the evaluation and rating process to reflect SAFETEA-LU’s changes to the evaluation criteria. The first option would extend the current process to include economic development impacts and the reliability of cost and ridership forecasts. (See fig. 6.) Specifically, FTA suggested that economic development impacts and the reliability of forecasts simply be added to the list of criteria considered in developing the project justification rating. The second option would be to develop a broader process to include the evaluation criteria identified by SAFETEA-LU and to organize the measures to support a more analytical discussion of the project and its merits. (See fig. 7.) According to FTA, the second option would broaden the evaluation process beyond a computation of overall ratings based on individual evaluation measures and develop better insights into the merit of a project than are possible from using the quantified evaluation measures alone. In addition, the second option would also consider the major uncertainties associated with any of the information used to evaluate the project, such as ridership forecasts, cost estimates, projected land use, and other assumptions. According to FTA, understanding a project’s uncertainties is needed for informed decision making. In its guidance, FTA also identified potential challenges in implementing some SAFETEA-LU changes. In particular, FTA described the challenges of incorporating and distinguishing between two measures of indirect benefits in the New Starts evaluation process—land use and economic development impacts. For example, FTA noted that its current land-use measures (e.g., land-use plans and policies) indicate the transit-friendliness of a project corridor both now and in the future, but do not measure the benefits generated by the proposed project. Rather, the measures describe the degree to which the project corridor provides an environment in which the proposed project can succeed. According to FTA’s guidance, FTA’s evaluation of land use does not include economic development benefits because FTA has not been able to find reliable methods of predicting these benefits. FTA further stated that because SAFETEA-LU introduces a separate economic development criterion, the potential role for land use as a measure of development benefits becomes even less clear, given its potential overlap with the economic development criterion. In addition, FTA noted that many economic development benefits result from direct benefits (e.g., travel time savings), and therefore including them in the evaluation could lead to double counting the benefits FTA already measures and uses to evaluate projects. Furthermore, FTA noted that some economic development impacts may represent transfers between regions rather than a net benefit for the nation, raising questions of whether these impacts are useful for a national comparison of projects. To address some of the challenges, FTA suggested that an appropriate strategy might be combining land use and economic development into a single measure. In our January 2005 report on the costs and benefits of highway and transit investments, we identified many of the same challenges of measuring and forecasting indirect benefits, such as economic development and land-use impacts. For example, we noted that it is challenging to predict changes in land use because current transportation demand models are unable to predict the effect of a transportation investment on land-use patterns and development, since these models use land-use forecasts as inputs into the model. In addition, we noted that certain benefits are often double counted when evaluating transportation projects. In particular, indirect benefits, such as economic development, may be more correctly considered transfers of direct user benefits or economic activity from one area to another.
Therefore, estimating and adding such benefits to direct benefits could constitute double counting and lead to overestimating a project’s benefits. Despite these challenges, experts told us that evaluating land use and economic development impacts is important since they often drive local transportation investment choices. To help overcome some of the challenges, experts suggested several potential solutions, including using qualitative information about the benefits rather than relying strictly on quantitative information and expanding the use of risk assessment or probability analysis in conjunction with economic analysis. For example, weather forecasters talk about the probability of rain rather than suggesting that they can accurately predict what will happen. This approach could illustrate that projects with similar rates of return have very different risk profiles and different probabilities of success. FTA’s second option for revising the New Starts evaluation process, which would consider qualitative information about the project and the project’s uncertainties, appears to be in line with these suggestions. FTA received a large number of written comments on its online docket in response to its proposed changes. (See app. II for common comments submitted for each proposed change.) While members of the transit community were supportive of some proposals, they expressed concerns about a number of FTA’s proposed changes. For example, a number of commenters expressed concerns about FTA’s options for revising the evaluation process, noting that both proposals deemphasized the importance of economic development and land use. In particular, as described in FTA’s January 2006 guidance, land use would receive less weight in calculating the overall project rating in both proposals than it receives in the current process. Some commenters also noted that land use and economic development should not be combined into a single measure and that they should receive the same weight as cost-effectiveness in the evaluation and rating process. These commenters argued that combining land use and economic development into a single measure or assigning them less weight than cost-effectiveness serves to deemphasize these benefits. FTA’s New Starts program is in a period of transition. SAFETEA-LU made a number of significant changes to the program, and FTA is off to a good start in implementing these changes. Tough decisions and implementation challenges remain, however. For example, FTA must determine how to incorporate economic development into the evaluation process and implement the Small Starts program in the upcoming evaluation cycle. Through the issuance of the final interim guidance on the Small Starts program, FTA has acted to provide a streamlined evaluation process for small projects by simplifying the evaluation measures and introducing the Very Small Starts eligibility category. As the Small Starts program is implemented in the upcoming cycle, FTA officials will have the opportunity to determine whether the Small Starts program is sufficiently streamlined and whether the streamlined evaluation process provides adequate information to differentiate among projects for funding purposes. FTA will also have the opportunity to make necessary modifications to the Small Starts program as it learns through its experience in implementing the program and working to develop the final rule.
Thus, the coming months will be a critical period for the New Starts program, as FTA works through these remaining decisions and implementation challenges to fully incorporate SAFETEA-LU changes. We provided a draft of this report to the Department of Transportation, including FTA, for review and comment. FTA officials provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to the congressional committees with responsibilities for transit issues; the Secretary of Transportation; the Administrator, Federal Transit Administration; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me on (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report were Nikki Clowers, Assistant Director, Vidhya Ananthakrishnan, and Daniel Hoy. To address our objectives, we reviewed the administration’s fiscal year 2007 budget request, the Federal Transit Administration’s (FTA) annual New Starts report, FTA’s New Starts policy guidance and Small Starts Advanced Notice of Proposed Rulemaking (ANPRM), public comments received on FTA’s docket on New Starts and Small Starts, FTA’s fiscal year 2008 reporting instructions for the New Starts program, federal statutes pertaining to the New Starts program, and previous GAO reports. We also interviewed FTA officials and representatives from the American Public Transportation Association and the New Starts Working Group. In addition, we attended FTA’s New Starts/Small Starts Seminar and Listening Session with project sponsors in Washington, D.C., in March 2006. We also conducted semistructured interviews with the sponsors of five projects that were evaluated and rated in the fiscal year 2007 evaluation cycle, including Raleigh, Regional Rail System; Dallas, Northwest/Southeast Light Rail Transit MOS; Minneapolis, Northstar Corridor Rail; Philadelphia, Schuylkill Valley Metrorail; and Seattle, University Link Light Rail Transit Extension. We selected these projects because they represent different phases of project development (preliminary engineering and final design), received different overall project justification and finance ratings, varied in size based on the project’s total capital cost, received different levels of New Starts funding, and are geographically diverse. We obtained this information from FTA’s annual New Starts report for fiscal year 2007. Our interviews were designed to gain project sponsors’ perspectives on three main topics, including the impact of FTA’s proposed changes to the New Starts application and project development process during the fiscal year 2008 evaluation cycle, FTA’s implementation of the newly established Small Starts program, and FTA’s plans to align and revise its evaluation and ratings process with the changes required by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Specifically, we asked for their opinions on how FTA plans to measure and weight new criteria in its evaluation framework. 
We provided all project sponsors with a list of topics and questions prior to our interviews, and we reviewed the comments they submitted to FTA’s docket. Because the five projects were selected as part of a nonprobability sample, the results cannot be generalized to all projects. In addition to our interviews, we analyzed the content of the comments submitted to FTA’s docket on the New Starts policy guidance and the Small Starts ANPRM to systematically determine the project sponsors’ views on key issues and identify common themes in their responses to different questions. We received from FTA a summary of all the written comments submitted to the docket on both the Small Starts ANPRM and the New Starts guidance on policies and procedures. These comments were organized by topic. To verify the accuracy of the summaries, we checked 20 percent of the comments against the original source documents. Two analysts reached consensus on the coding of the responses, and a third analyst was consulted in case of disagreement to ensure that our codes were reliable. To ensure the reliability of the information presented in this report, we interviewed FTA officials about FTA’s policies and procedures for compiling the New Starts annual reports, including FTA’s data collection and verification practices for New Starts information. Specifically, we asked the officials whether their policies and procedures had changed significantly since we reviewed them for our 2005 report on New Starts. FTA officials told us that there were no significant changes in their data collection and verification policies and procedures for New Starts information. Therefore, we concluded that the FTA information presented is sufficiently reliable for the purposes of this report. We conducted our work from February 2006 through August 2006 in accordance with generally accepted government auditing standards, including standards for data reliability. In its January 2006 guidance, FTA identified possible long-term changes to the New Starts program. According to FTA, some of these changes were driven by SAFETEA-LU, while others were designed to improve the New Starts program or correct past problems. Table 7 summarizes FTA’s proposed changes to the definition of eligibility, the evaluation and rating process, and the project development process, as well as FTA’s rationale and the transit community’s response to the proposed changes.
The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized about $7.9 billion in commitment authority, through fiscal year 2009, for the Federal Transit Administration's (FTA) New Starts program, which is used to select fixed guideway transit projects, such as rail and trolley projects, and to award full funding grant agreements (FFGAs). The New Starts program serves as an important source of federal funding for the design and construction of transit projects throughout the country. SAFETEA-LU requires GAO to report each year on FTA's New Starts process. As such, GAO examined (1) the number of projects that were evaluated, rated, and proposed for FFGAs for the fiscal year 2007 evaluation cycle and the proposed funding commitments for the fiscal year 2007 budget; (2) procedural changes that FTA proposed for the New Starts program beginning with the fiscal year 2008 evaluation cycle; and (3) changes SAFETEA-LU made to the New Starts program and FTA's implementation of these changes. GAO reviewed New Starts documents and interviewed FTA officials and project sponsors, among other things, as part of its review. GAO is not making recommendations in this report. In commenting on a draft of this report, FTA provided technical clarifications, which we incorporated as appropriate. For the fiscal year 2007 evaluation cycle, FTA evaluated and rated 20 projects, recommended 5 projects for new FFGAs and 2 projects with pending FFGAs. FTA also identified 5 other projects that may be eligible for funding outside of FFGAs. The administration's fiscal year 2007 budget proposal requests $1.47 billion for the New Starts program, which is about $200 million more than the amount received last year. FTA proposed nine procedural, or nonregulatory, changes for the New Starts program beginning with the fiscal year 2008 evaluation cycle that were generally intended to improve the management of the New Starts process. These changes include linking the New Starts and National Environmental Policy Act planning requirements and processes and capping New Starts funding when projects enter the final design phase. As required by SAFETEA-LU, FTA published these proposals in policy guidance and sought public input. Members of the transit community supported changes that they thought would make the New Starts process more efficient, but many commenters expressed strong opposition to other changes, citing, for example, the time and resources required to analyze ridership and cost uncertainties. Consequently, FTA implemented only 4 of the proposed procedural changes, but indicated that a final decision on the other 5 proposed changes would be made through the rulemaking process. SAFETEA-LU introduced eight statutory changes to the New Starts program that include establishing the Small Starts program and identifying new evaluation criteria. FTA has taken some initial steps to implement these changes, including issuing an Advanced Notice of Proposed Rulemaking (ANPRM) for the Small Starts program and proposed policy guidance for the New Starts program, both in January 2006. The Small Starts program is a new component of the New Starts program and is intended to offer an expedited and streamlined application and review process for small projects. The transit community, however, questioned whether the Small Starts program, as outlined in the ANPRM, would provide such a process. 
In July 2006, FTA introduced a new eligibility category called Very Small Starts, which is for the simplest and least costly projects. Very Small Starts projects will qualify for an even simpler and more expedited evaluation process. FTA also identified and sought public input on possible changes to the New Starts program that would have an impact on traditional New Starts projects, such as revising the evaluation process to incorporate the new evaluation criteria identified by SAFETEA-LU. According to FTA, a potential challenge in moving forward is incorporating both land use and economic development as separate criteria in the evaluation process, including developing appropriate measures for the criteria and avoiding duplication in counting benefits.
The Forest Service is responsible for managing over 192 million acres of public lands—nearly 9 percent of the nation’s total surface area and about 30 percent of all federal lands in the United States. In carrying out its responsibilities, the Forest Service traditionally has administered its programs through nine regional offices, 155 national forests, 20 grasslands, and over 600 ranger districts (each forest has several districts). The Forest Service’s implementation, management, and oversight of fuel reduction activities tend to be decentralized and vary by region, although all activities must be carried out under applicable laws. Figure 1 shows a map of the national forests and Forest Service regions. Forest Service projects intended to reduce fuels and restore or maintain desired vegetation conditions generally use prescribed burning, in which fires are deliberately set by land managers, and/or mechanical treatments, in which equipment such as chain saws, chippers, bulldozers, or mowers is used to cut vegetation. Such mechanical treatment may include logging to remove commercial timber. Other approaches include applying chemical herbicides, using grazing animals such as cattle and goats, and allowing the public to remove firewood by hand. To carry out its fuel reduction work, the Forest Service may use agency staff but more commonly contracts the work out. The agency generally uses three types of contracts—timber sale contracts, service contracts, and stewardship contracts—to accomplish fuel reduction work. Timber sale contracts are awarded to individuals or companies to harvest and remove trees from federal lands under the agency’s jurisdiction. Service contracts are awarded to contractors to perform specific tasks, such as thinning trees or clearing underbrush. Stewardship contracts are generally awarded to contractors who perform both timber harvesting and service activities, and include contracts under which the agency uses the value of commercial products, such as timber, to offset the cost of services received, such as thinning, stream improvement, and other activities. Controversy has surrounded the issue of fuel reduction for some time, particularly in areas where federal lands surround or are adjacent to human development and communities—the wildland-urban interface—and in inventoried roadless areas. Roadless areas have received special attention for decades, as some argue that these areas should be available for appropriate development and timber harvesting, while others believe that the areas should remain roadless to preserve the special values that their condition provides, such as clean water and undeveloped wildlife habitats. Forest Service hazardous fuel reduction activities are typically subject to one of two different internal administrative review processes, each of which has a specific procedure through which the public can challenge the agency’s decisions or proposed decisions to conduct the activities. Specifically: Postdecisional administrative appeals process. The Forest Service has provided an administrative appeals system for review of agency decisions, under certain circumstances, for over 100 years. Although the specific requirements of the appeals system have changed over the years, the Appeals Reform Act of 1993 established the appeals process pertinent to fiscal years 2006 through 2008—the time period covered by our review.
When the Forest Service issues a public notice in a newspaper of record of a proposed action, the public has either 30 or 45 days to comment, depending on the type of NEPA analysis document prepared. Once the agency issues a decision, the public has 45 days to file appeals; however, only those individuals who were involved in the public comment process through submission of written or oral comments or by otherwise notifying the Forest Service of their interest in the proposed action may file an appeal. Once the 45-day time frame for filing appeals has expired, the Forest Service must review all appeals and issue a response to each within an additional 45 days. Appeals can result in decisions being affirmed, in which case the Forest Service can proceed with the project as planned, or in decisions being reversed in whole or in part, in which case the agency may revise or even cancel the affected activities. The official (known as the Appeal Deciding Officer) who determines the outcome of the appeal must be, at least, the next higher level supervisor of the individual who made the original decision. There is no further administrative review of the Appeal Deciding Officer’s decision by any other Forest Service or Department of Agriculture official. The types of decisions that can be appealed have changed since GAO last reported on this issue in 2003. In 2003, the Forest Service added several new categorical exclusions related to vegetation management (including one specific to hazardous fuel reduction) that it exempted from appeal. However, as the result of subsequent litigation challenging these exemptions, the Forest Service ultimately was required to allow the public to appeal many (though not all) of these decisions during fiscal years 2006 through 2008, the time period covered by our current review. Predecisional administrative objection process. In 2003, HFRA required the Forest Service to establish an alternative process for authorizing certain hazardous fuel reduction projects, including an alternative predecisional objection process in lieu of the appeals process for certain projects. HFRA authorizes the public to file objections to a proposed project before the agency issues a final decision on the project, instead of the traditional appeals process where the administrative review occurs after the agency’s final decision has been made. According to the Forest Service, this objection process was intended to expedite the implementation of fuel reduction projects and to encourage early public input during the planning process. Only those parties who have previously submitted written comments specific to the proposed project may file objections. (The public has an opportunity to provide these written comments during scoping or other public comment periods.) The public must file objections with the reviewing officer—the next higher level supervisor of the person responsible for the proposed action—within 30 days following the publication date of the legal notice of the proposed environmental assessment or environmental impact statement. (Decisions that are subject to objection cannot use categorical exclusions as the basis for the decision.) If no objection is filed within the 30-day time period, the decision may be finalized on, but not before, the fifth business day following the end of the objection-filing period. 
If an objection is filed, the Forest Service must issue a written response to the objector addressing the objection within 30 days following the end of the objection-filing period. The reviewing officer may hold a meeting to discuss issues raised in the objection and any potential resolution. There are several ways the Forest Service addresses an objection. The objection can (1) be set aside from review, (2) be reviewed by the Forest Service resulting in a change to the final decision, (3) be reviewed by the Forest Service resulting in no change to the final decision, or (4) result in the reviewing officer directing the appropriate Forest Service official to complete additional analysis prior to issuing a final decision. An objection may be set aside from review for procedural reasons—if, for example, the objection is not received within the allowed 30-day time period, or the objecting individual or organization did not submit written comments during scoping or other public comment opportunities. There is no further administrative review by any other Forest Service or Department of Agriculture official of the reviewing officer’s written response to an objection. Table 1 compares the appeals and objection processes. Some decisions, however, were subject to neither the appeal nor the objection process during the time of our review. As noted, the Forest Service was required to allow appeals of many fuel reduction decisions based on categorical exclusions, but was not required to allow appeals on all such decisions—meaning that certain decisions based on categorical exclusions remained exempt from appeal. These decisions were also exempt from the objection process because HFRA requires that fuel reduction decisions subject to objection use environmental assessments or environmental impact statements rather than categorical exclusions. For fiscal years 2006 through 2008, national forest managers reported 1,415 decisions involving hazardous fuel reduction activities, affecting 10.5 million acres of national forest land. Most of these decisions were based on categorical exclusions, although decisions based on environmental assessments represented the most acreage of all decision types. Table 2 shows the number of decisions and associated acreage, by decision type. Appendix II provides greater detail on the number of decisions and associated acreage for each Forest Service region. Of the 1,415 decisions in our review, 1,191—about 84 percent—were subject to the appeals process. In contrast, only 121 decisions—8.5 percent—were subject to the objection process. However, the rate at which decisions subject to the objection process were challenged was higher than for decisions under the appeals process. Specifically, 40 percent of decisions subject to objection were objected to, compared with the 18 percent appeal rate for decisions subject to appeal. Table 3 shows, for all decisions covered by our review for fiscal years 2006 through 2008, the number of appeals, objections, and litigation associated with each decision type. Appendix III provides greater detail on the number of appeals, objections, and litigation for each Forest Service region. 
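Returning to the objection-process clock described above, a small worked example may help make the deadlines concrete. The dates are hypothetical, and "business day" is simplified here to mean Monday through Friday; the regulation's treatment of federal holidays is not addressed.

```python
# Hypothetical worked example of the objection-process clock described
# above. "Business day" is simplified to Monday through Friday.
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    day, added = start, 0
    while added < n:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 ... Friday=4
            added += 1
    return day

legal_notice = date(2008, 3, 3)                         # hypothetical publication date
filing_deadline = legal_notice + timedelta(days=30)     # objections due within 30 days
earliest_final = add_business_days(filing_deadline, 5)  # if no objection is filed
response_due = filing_deadline + timedelta(days=30)     # if an objection is filed

print(f"Objections due:          {filing_deadline}")
print(f"Earliest final decision: {earliest_final}")
print(f"Objection response due:  {response_due}")
```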
In addition to the introduction of the objection process, our survey data reflect two important changes that have occurred since our 2003 report: (1) the extent to which activities associated with categorical exclusions are subject to the appeals process and (2) the decrease in the use of the categorical exclusion for hazardous fuel reduction to authorize hazardous fuel reduction activities. Specifically:

Extent to which categorical exclusions were subject to appeal. At the time of our 2003 report, decisions using categorical exclusions were generally not subject to appeal, and in that report we noted that 99 percent of fuel reduction decisions using categorical exclusions in fiscal years 2001 and 2002 were exempt from appeal. Also in 2003, the Forest Service introduced several new categorical exclusions that were exempt from appeal, including one categorical exclusion specific to fuel reduction activities. Beginning later that year, however, the agency's ability to exempt decisions using categorical exclusions from appeal was challenged in court. As a result of this litigation, the Forest Service was required to allow the public to appeal decisions containing any of 11 types of categorically excluded activities, including fuel reduction—and thus, most fuel reduction decisions in our survey that were made using categorical exclusions were appealable by the public. Specifically, 89 percent of the categorical exclusions identified in our survey were subject to appeal in fiscal years 2006 through 2008, in contrast to the 1 percent that were subject to appeal during our 2003 review. The remaining 11 percent of categorical exclusions in our current review—a total of 103 decisions—were identified by survey respondents as exempt from appeal because they did not contain the activities covered by the litigation. Subsequently, in 2009 the U.S. Supreme Court overturned the lower court's ruling on procedural grounds, allowing the Forest Service to utilize the provisions of its regulations that exempt categorically excluded decisions from appeal. Appendix X contains data on the type and frequency of the categorical exclusions represented in our survey.

Decrease in the use of the categorical exclusion for hazardous fuel reduction. Although Forest Service regulations contain a specific categorical exclusion under which hazardous fuel reduction activities can be authorized, this was not the most commonly reported categorical exclusion in our survey of decisions involving hazardous fuel reduction activities. Instead, the most commonly reported categorical exclusion was one intended for timber stand and/or wildlife habitat improvement. Our survey data show that the total number of decisions authorized under the categorical exclusion for hazardous fuel reduction decreased greatly over the period covered by our survey, while at the same time, the use of the categorical exclusion for timber stand and/or wildlife habitat improvement increased. Specifically, use of the categorical exclusion for hazardous fuel reduction decreased from 214 in fiscal year 2006 to 28 in fiscal year 2008, while the use of the categorical exclusion for timber stand and/or wildlife habitat improvement increased from 145 in fiscal year 2006 to 167 in fiscal year 2008. This decrease in the use of the categorical exclusion for hazardous fuel reduction may have resulted in large part from the chief of the Forest Service's response to a court order in 2007.
In this response, the chief directed that no new decisions should be made under the categorical exclusion for hazardous fuel reduction after December 2007. Furthermore, he directed that no additional contracts be initiated to implement projects authorized under this authority—meaning that projects that were not under way did not start, even if a final decision had already been issued. Under the chief's direction, projects that were near completion could proceed. Of the 379 decisions in our survey originally authorized under the categorical exclusion for hazardous fuel reduction, respondents reported that 207—or about 55 percent—were affected by the chief's directive. Although we did not systematically gather information on what happened to projects subject to the court decision, respondents indicated that they took a variety of approaches, including the following: using a different categorical exclusion, such as the categorical exclusion for timber stand and/or wildlife habitat improvement, to authorize the project; preparing an environmental assessment subject to the appeals process; stopping or slowing project implementation; and preparing an environmental assessment subject to the predecisional objection process under HFRA.

Additionally, the rate at which decisions were litigated was about the same—2 percent—for decisions that were subject to the Forest Service's traditional appeals process as for decisions authorized under HFRA, even though the agency's expectation was that HFRA would reduce the likelihood of litigation. Of the 29 litigated decisions in our study, 26 had been subject to appeal, representing 2 percent of the 1,191 decisions subject to appeal; the remaining 3 litigated decisions had been subject to objection, likewise representing 2 percent of the 121 decisions subject to objection.

In fiscal years 2006 through 2008, of the 298 appeals filed, the Forest Service upheld its earlier decision in the majority of the cases without requiring any changes to the decision. Of the 101 objections submitted, the outcome was more evenly divided between those objections resulting in a change to the decision and those that did not. According to time frame information provided by survey respondents, all appeals and objections were processed within the prescribed time frames. For litigated decisions resolved at the time of our review, the Forest Service prevailed slightly more often than the plaintiffs.

Of the 298 appeals filed on appealable decisions from fiscal years 2006 through 2008:
- For 160 appeals, the decisions were affirmed—that is, allowed to proceed—with no changes.
- For 22 appeals, the decisions were affirmed with specified changes.
- For 24 appeals, the decisions were reversed—that is, not allowed to proceed—based on issues raised by the appellants.
- A total of 91 appeals were dismissed for various reasons: 38 appeals were resolved informally, of which 30 were withdrawn by the appellant and 8 decisions were withdrawn by the agency (when an appeal is resolved informally, changes may or may not be made to the decision); the other 53 appeals were dismissed without review, mostly for failing to meet procedural requirements, such as timeliness—however, 23 of these appeals were dismissed without review because, subsequent to receiving the appeal, the agency official who made the decision decided to withdraw the decision.
- For 1 appeal, the outcome could not be determined based on documentation in the agency's regional files, according to an agency official.
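As a consistency check, these five outcome categories account for all 298 appeals, and their shares match the outcome percentages cited elsewhere in this report (54 percent affirmed without changes, 7 percent affirmed with changes, and 8 percent reversed). The short Python sketch below, illustrative only, tallies the counts:

    # Appeal outcomes for the 298 appeals filed in fiscal years 2006 through 2008,
    # as reported above. The category labels are ours, for illustration only.
    outcomes = {
        "affirmed, no changes": 160,
        "affirmed with specified changes": 22,
        "reversed": 24,
        "dismissed (38 resolved informally + 53 dismissed without review)": 91,
        "outcome undetermined": 1,
    }
    total = sum(outcomes.values())
    assert total == 298  # the five categories account for every appeal filed

    for outcome, count in outcomes.items():
        print(f"{outcome}: {count} ({count / total:.0%})")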
According to time frame information provided by Forest Service officials, all appeals of fiscal year 2006 through 2008 decisions were processed within the time frames prescribed in applicable laws and regulations. See appendix IV for detailed information on appeal outcomes for each Forest Service region. The 298 appeals were filed by 217 appellants. This total includes appeals by 88 different interest groups, mostly environmental groups, and 129 individuals. Of the 88 interest groups, 10—Alliance for the Wild Rockies, Biodiversity Conservation Alliance, John Muir Project of the Earth Island Institute, Native Ecosystems Council, Oregon Wild, Ouachita Watch League, Sierra Club, The Lands Council, Utah Environmental Congress, and the WildWest Institute—each appealed 10 or more decisions. Appendix VI lists each interest group that appeared as an appellant in fiscal years 2006 through 2008 and the number of decisions for which each appellant filed appeals in each region. To protect the privacy of individual appellants, we do not list their names, but in appendix VI we provide information on the number of decisions appealed by individuals in each region.

Of the 101 objections filed for 49 decisions from fiscal years 2006 through 2008:
- 38 objections resulted in no change to the final decision.
- 31 objections resulted in a change to the final decision.
- 4 objections resulted in the Forest Service having to conduct additional analysis.
- 15 objections were set aside from review.
- 13 objections were addressed in some other way; for example, several agency respondents explained that they addressed objectors' concerns by both agreeing to make a change to the final decision and setting the objection aside from review. In these cases, the objections were set aside not for procedural reasons but because the objectors withdrew them after the Forest Service agreed to make changes to the final decisions.

For objections that the Forest Service does not set aside, the Forest Service reviewing officer is required to respond in writing. Prior to issuing a written response, the objector or reviewing officer may request a meeting to discuss the issues that were raised in the objection and a possible resolution. According to some Forest Service officials we spoke with, these meetings have been used to further satisfy public concerns; however, because meetings are at the discretion of the reviewer, objectors with whom the reviewer decides not to meet may feel that their concerns were not adequately addressed, regardless of the outcome. For example, the Forest Service received 22 objections to the Middle East Fork Hazardous Fuel Reduction decision on the Bitterroot National Forest in west central Montana and east central Idaho, one of the first and, according to Forest Service officials, most contentious decisions authorized under HFRA authority in the Northern Region. One objector requested a meeting with the Forest Service and others expressed a willingness to meet, but the reviewing officer chose not to hold meetings, stating that the objections did not require additional clarification and that a private consultant with whom the forest contracted had determined that additional discussions would not resolve the objectors' concerns. The decision was ultimately litigated. In other cases, however, respondents reported that such meetings successfully addressed objectors' concerns, sometimes resulting in objectors withdrawing their objections.
However, we also determined that different regions followed different approaches in addressing objectors' concerns. For example, an official in the Pacific Southwest Region told us that officials generally meet with the objectors associated with valid objections (those that are not set aside for procedural reasons), with the goal of informally resolving the objections and having them subsequently withdrawn by the objectors. In contrast, an official in the Northern Region told us that while the region seeks to resolve objections informally, unlike the Pacific Southwest Region, it does not seek to have objectors subsequently withdraw their objections, and none have done so. Seeking to have objectors withdraw their objections, as the Pacific Southwest Region has done, may have important implications for subsequent litigation because, according to Forest Service officials, under HFRA and its implementing regulations, an objector that withdraws an objection has no standing to obtain judicial review of the Forest Service's final decision. According to time frame information provided by survey respondents, the final decisions for all proposals subject to the objection process from fiscal years 2006 through 2008 were signed in accordance with the time frames set forth by applicable laws and regulations. However, while officials are required to respond to objections within certain time frames, there is no limitation on the amount of time allowed to make a final decision. Of the 49 decisions for which objections were filed, 25 were signed between 35 days and 3 months after the legal publication date of the proposed action. The remaining 24 were signed more than 3 months after the legal publication date, including 3 cases in which the final decision was signed more than a year after the legal publication date of the proposed action. The 101 objections were filed by 37 organizations and 41 individuals. Of the 37 organizations, 3—the Center for Biological Diversity, the Idaho Conservation League, and the WildWest Institute—each objected to 5 or more decisions. Appendix VI lists each group that filed objections in fiscal years 2006 through 2008 and the number of decisions for which objections were filed in each region. As with appeals, in appendix VI we do not list the names of individual objectors, but do show the number of proposed decisions objected to by individuals in each region. Of the 29 decisions that were litigated from fiscal years 2006 through 2008, we are able to report the outcome for 21 of the lawsuits because they had been resolved at the time of our review. According to regional officials, lawsuits for 3 of these 21 decisions were dismissed because the plaintiffs and the Forest Service agreed to settle their claims. District courts reached an outcome on the 18 additional decisions, with 8 decided in favor of the plaintiffs and 10 decided in favor of the Forest Service. Lawsuits on the remaining 8 decisions were continuing at the time of our review. In the 29 litigated decisions, 24 interest groups and 11 individuals were plaintiffs. The interest groups were primarily environmental groups, with three groups—Alliance for the Wild Rockies, Native Ecosystems Council, and the WildWest Institute—each acting as plaintiff in 5 or more decisions. Of the 29 litigated decisions, plaintiff groups and individuals had previously submitted appeals on 24 of the decisions and objections on 3 of the decisions during the administrative process.
The remaining 2 litigated decisions were subject to appeal, but the plaintiffs did not submit an appeal during the administrative process. Appendix VI lists each group that acted as a plaintiff in fiscal years 2006 through 2008 and the number of decisions for which lawsuits were filed by each group within each Forest Service region. To protect the privacy of individual plaintiffs, we do not list their names, but in appendix VI we provide information on the number of decisions litigated by individuals in each region. Prescribed burning was the most frequently used treatment method associated with the fuel reduction decisions included in our study, followed by mechanical treatment and commercial logging. Of these three methods, prescribed burning was the method most often challenged through appeals and objections; however, commercial logging was challenged at the highest rate, considering both appeals and objections. Table 4 shows, for all treatment methods in our study, the number and percentage of, and acreage associated with, appeals, objections, and litigation. Appendix VII provides additional information on fuel reduction methods used and the number of appeals, objections, and lawsuits by treatment method, for each Forest Service region. Commercial timber sale contracts were the most frequent contract type used to implement the decisions included in our study, and were the type most often challenged through appeals and objections. Decisions using stewardship contracting, however, were challenged at a higher rate than the other contract types, considering both appeals and objections. Table 5 shows, for all the decisions included in our study, the number and percentage of, and acreage associated with, appeals, objections, and litigation, by contract type. Appendix VIII provides additional information on the contracting methods used for decisions included in our study and the appeal, objection, and litigation rates for each Forest Service region. Of the 1,415 decisions in our review, respondents identified 954 decisions that included activities in the wildland-urban interface and 169 decisions that included activities in inventoried roadless areas. Both types of decision were appealed at about the same rate, while decisions involving inventoried roadless areas were objected to at a slightly higher rate than those involving the wildland-urban interface. Table 6 shows, for both wildland-urban interface and inventoried roadless areas, the number and percentage of, and acreage associated with, appeals, objections, and litigation. Regarding fuel reduction activities in inventoried roadless areas, the majority of decisions in our study involved no road construction in the roadless area, which is a primary concern related to hazardous fuel reduction activities in roadless areas. About 10 percent included temporary road construction or other road construction activity, with one decision involving the construction of a permanent road in an inventoried roadless area. Appendix IX provides information on the number of decisions with fuel reduction activities in inventoried roadless areas and the number of appeals, objections, and lawsuits for such decisions in each Forest Service region. Much has changed since we last reported on appeals and litigation of fuel reduction activities 7 years ago.
One of the most significant changes to the process has been the passage of HFRA, which provided a new approach for public challenges of fuel reduction projects by allowing the public to formally object to decisions before they become final, rather than waiting to file appeals until after the decisions are made. Although HFRA was seen as an important new tool for streamlining fuel reduction decisions, our review indicates that the act's impact appears to have been limited. Most notably, fuel reduction decisions that used HFRA authority represented less than 10 percent of decisions signed during fiscal years 2006 through 2008. As a result, despite the opportunities HFRA introduced for a new approach to the administrative review process, in practice most decisions remained subject to the Forest Service's traditional postdecisional appeals process. In addition, although the agency's expectation was that HFRA would reduce litigation of fuel reduction decisions, our review shows that HFRA and non-HFRA decisions were litigated at about the same rate of 2 percent. Another area of ongoing change is the dispute over the Forest Service's ability to exempt categorically excluded decisions from appeal. Although most of these decisions were subject to appeal during the years we examined, the Supreme Court's 2009 ruling means that the regulation exempting categorically excluded decisions from appeal is once again in effect. However, two factors suggest ongoing uncertainty about this issue. First, the Supreme Court's ruling was made on procedural grounds rather than on the merits of the case—meaning that the court did not rule on whether the regulation is consistent with the Appeals Reform Act, allowing for the possibility of future challenges to the regulation. Second, even though the regulation survived the recent lawsuit, the Forest Service is considering changes to it in light of, among other things, the litigation it has engendered. Thus, the ultimate fate of the regulation—and the public's ability to appeal categorically excluded decisions—remains uncertain. We provided a draft of this report to the Forest Service for comment. The Forest Service did not provide comments, although it did provide technical corrections, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Agriculture; the Chief of the Forest Service; appropriate congressional committees; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XIII.
We examined (1) the number and type of Forest Service decisions involving hazardous fuel reduction activities signed in fiscal years 2006 through 2008; (2) the number of these decisions that were objected to, appealed, or litigated, and the acreage associated with those decisions; (3) the outcomes of these objections, appeals, and lawsuits, including whether they were processed within prescribed time frames, and the identities of the objectors, appellants, and plaintiffs; (4) the treatment methods and contract types associated with fuel reduction decisions, and how frequently the different methods and types were objected to, appealed, and litigated; and (5) the number of decisions involving hazardous fuel reduction activities in the wildland-urban interface (WUI) and inventoried roadless areas (IRA), and how frequently these decisions were objected to, appealed, and litigated. To address our objectives, we implemented a nationwide, Web-based survey of Forest Service officials to collect information about all fuel reduction decisions signed in fiscal years 2006 through 2008. (See appendix XII for a copy of the survey.) We supplemented the survey with semistructured interviews of officials in all nine Forest Service regions to gather additional details about time frames, outcomes, and identities related to appeals and litigation of fuel reduction decisions. Details about this process are described below. To identify Forest Service decisions involving hazardous fuel reduction activities signed in fiscal years 2006 through 2008, we asked the agency's Ecosystem Management Coordinator to query a Forest Service database designed to track decision planning, appeals, and litigation for all Forest Service decisions—the Planning, Appeals, and Litigation System (PALS). This official queried the PALS database using the following criteria: (1) decisions signed in fiscal years 2006 through 2008, and (2) decisions that included fuels management as a purpose and/or one or more fuel treatment activities. This initial query identified 1,437 decisions in 108 national forest system units. Because PALS was not designed to include all information we sought as part of our review—including information on the number of acres treated, treatment methods and contract types used, and decisions involving activities in the wildland-urban interface or in inventoried roadless areas—we determined that a nationwide survey would be necessary. We began our survey effort by ensuring that we had identified the correct universe of fuel reduction decisions. After reviewing the list of fuel reduction decisions from PALS and correcting for any obvious duplication and other errors, we sent a list of each national forest's fuel reduction decisions to the corresponding forest supervisor's office. We asked the supervisor or cognizant official to verify the accuracy of our list, removing any decisions that did not meet our criteria (i.e., that were not signed in fiscal years 2006 through 2008, or that did not involve any hazardous fuel reduction activities), and adding decisions that met our criteria but did not appear in PALS. At this time, we also asked the supervisor or cognizant official to identify Forest Service employees most knowledgeable about these decisions. A total of 1,415 decisions, issued by 108 national forests, were determined to fit our criteria. We gave recipients 3 weeks to respond to our request for information and granted extensions as needed. We obtained a 100 percent response rate from the national forests.
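To make the two-part query criteria concrete, the following Python sketch illustrates the kind of filter applied to the decision records. It is a hypothetical illustration only; this report does not describe PALS's actual schema, so the record structure, field names, and activity labels below are our own assumptions:

    from datetime import date

    # Hypothetical decision records; PALS's actual schema is not described in this report.
    decisions = [
        {"id": "D-001", "signed": date(2007, 5, 14),
         "purposes": ["fuels management"], "activities": ["prescribed burning"]},
        # ... one record per decision tracked in PALS
    ]

    # Federal fiscal years 2006 through 2008 run from October 1, 2005, through September 30, 2008.
    FY2006_START, FY2008_END = date(2005, 10, 1), date(2008, 9, 30)
    FUEL_TREATMENTS = {"prescribed burning", "mechanical treatment", "commercial logging"}  # assumed labels

    def meets_criteria(decision):
        """True if signed in fiscal years 2006-2008 and fuels management was a purpose
        and/or the decision included one or more fuel treatment activities."""
        in_window = FY2006_START <= decision["signed"] <= FY2008_END
        fuels_related = ("fuels management" in decision["purposes"]
                         or any(a in FUEL_TREATMENTS for a in decision["activities"]))
        return in_window and fuels_related

    candidate_universe = [d for d in decisions if meets_criteria(d)]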
To determine the characteristics of each fuel reduction decision, we subsequently administered a Web-based survey to those Forest Service employees identified by each forest supervisor or cognizant official as most knowledgeable about the decisions at all 108 national forests that issued decisions with hazardous fuel reduction activities in fiscal years 2006 through 2008. Appendix XII contains a copy of the survey used to gather these data. The survey asked respondents to provide information about each of the decisions, including the type of environmental analysis used, acres involved, treatment methods and contract types used, the extent to which the decisions included activities in the wildland-urban interface and inventoried roadless areas, and detailed information about the outcomes of those decisions subject to the predecisional objection process. The Forest Service does not have a uniform definition of a hazardous fuel reduction activity, a fact that could affect the information that forest managers reported to us. Many activities have the practical effect of reducing fuels, but their stated purpose may be something other than, or in addition to, fuel reduction. For example, the cutting and gathering of firewood or forest products to provide a product to the public may have the additional benefit of reducing hazardous fuels. Some forest managers may have included such projects among the decisions they reported in their responses to our survey, while other forest managers with similar decisions may not have included them. Similarly, there are a number of limitations to the acreage data. The data reported by forest managers include a mixture of planned, estimated, and actual treatment acres for decisions included in our review. In our survey, we did not limit responses to acres actually treated because once a decision is made and documented, there are many reasons that activities covered by the decision may be delayed or not implemented, including availability of funding or personnel, weather conditions, and administrative appeals or litigation. In addition, national forests may have submitted more than one decision with activities on the same area of land, or may have planned to use a series of different treatments on the same land. Therefore, the 10.5 million acres covered by decisions in our review may include overlapping acreage. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these nonsampling errors. For example, prior to developing the data collection instruments, we met with Forest Service personnel at the headquarters, regional, and national forest levels to discuss the Forest Service decision-making, appeal, objection, and litigation processes. We also reviewed current policies, legislation, and court cases relevant to our questions and the analysis of the survey responses. Survey specialists designed the questionnaire in conjunction with GAO staff with subject matter expertise.
The draft survey was then pretested with officials from four national forests in four different regions to ensure that the questions were relevant, clearly stated, and easy to comprehend. Upon receiving survey responses, we verified the accuracy of 5 percent of the surveys by comparing the responses to three survey questions against the decision documents used to complete the surveys, which were provided by respondents at our request. Using this approach, we verified 70 randomly selected decisions. Discrepancies between the survey responses and our data verification were discussed and resolved with the responsible forest official. In addition, we conducted follow-up to clarify ambiguous or incomplete responses that were identified through an internal logic test of all submitted responses. Through our data verification process, we determined that the data submitted were generally reliable. To gather specific details about the outcomes of appeals and litigation, we conducted semistructured interviews with regional appeals and litigation officials in each of the Forest Service's nine regions. The semistructured interviews were used to gather information about each of the decisions that were appealed or litigated, including related dates, the status and outcomes of administrative and court decisions, and the identities of the appellants and litigants. Information collected through these semistructured interviews was also verified for a randomly selected sample of decisions. We verified the accuracy of about 10 percent of the appealed decisions and about 50 percent of the litigated decisions by comparing the information provided in response to several interview questions against the administrative and court decision documents provided to us by interviewees at our request. Any discrepancies between the interview responses and the documents provided were discussed and resolved with the responsible regional official. Through our data verification process, we determined that the data gathered during the semistructured interviews were generally reliable. There are some limitations to the data we gathered. As with any survey, the information obtained from the national forests was self-reported, and we were not able to ensure that all decisions meeting our criteria were identified. In particular, we had no way to determine whether forests were fully reporting their hazardous fuel reduction activities. To get some indication of the completeness and accuracy of the data provided by the Forest Service, we contacted several interest groups that, according to our data collection efforts, often appealed and objected to decisions or determinations. We asked these groups to verify the data pertaining to their appeals, objections, and litigation of Forest Service fiscal year 2006 through 2008 fuel reduction decisions and to identify any missing data. The groups generally agreed that the data provided by the agency were complete and accurate. In addition, during these interviews, we asked the groups for their perspectives on the administrative process for challenging decisions, including the objection process authorized under the Healthy Forests Restoration Act. The interviewees' comments and perspectives are incorporated in this report. We conducted our work from October 2008 through February 2010, in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives.
The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. Figure 2 shows, for each of the Forest Service's nine regions, the number of fuel reduction decisions and the total associated acreage. As shown, the Southern Region (Region 8) had the largest number of decisions and the largest acreage, while the Alaska Region (Region 10) had the fewest decisions and the smallest acreage. Figure 3 shows, for each of the Forest Service's regions, information on appeals, objections, and litigation of fuel reduction decisions, including the total number of appeals, objections, and litigation and the percentage of decisions appealed, objected to, and litigated. The Southern Region (Region 8) had the highest combined total of decisions subject to appeal and objection; however, decisions in the Northern Region (Region 1) were challenged at the highest rate, considering both appeals and objections. Figure 4 shows, for each Forest Service region, the outcomes of appeals filed on fuel reduction decisions within the region. While six of the eight regions reporting appeal activity allowed the majority of appealed decisions to proceed without changes, the Southwestern Region (Region 3) had no appealed decisions that were allowed to proceed without changes and the highest rate of reversed decisions. Figure 5 shows, for each Forest Service region, the outcomes of litigation filed on fuel reduction decisions within the region. Six of the nine regions experienced litigation during the period covered by our survey. The Northern Region (Region 1) had the highest number of decisions judicially challenged as well as the greatest number of ongoing lawsuits. Tables 7, 8, and 9 list, by Forest Service region, the appellants, objectors, and litigants of fuel reduction decisions. We list the identities of organizations filing appeals, objections, and litigation, but summarize data on individuals to protect their privacy. As shown, organizations were most active in the Northern Region (Region 1) for appeals, objections, and litigation. Individuals were likewise most active in the Northern Region for objections, but were most active in the Eastern Region (Region 9) for appeals and litigation. Figure 6 shows, for each Forest Service region, the number of decisions using various fuel reduction treatment methods and the number and frequency of appeals, objections, and litigation by fuel reduction method. The rate at which treatment methods were used varied by region. For example, the Southern Region (Region 8) and the Eastern Region (Region 9) used prescribed burning more than any other treatment method, whereas the remaining regions used mechanical treatment the most. In addition, the Northern Region (Region 1) used commercial logging at a higher rate than any other region. Figure 7 shows, for each Forest Service region, the number of decisions using various contract types and the number and frequency of appeals, objections, and litigation by contract type. The use of different contract types varied among regions. The Eastern Region (Region 9) had the highest rate of commercial timber sale contract use compared with other regions, while the Rocky Mountain Region (Region 2) had the highest rate of stewardship contracting use.
In this appendix, Figures 8 and 9 provide information about appeals, objections, and litigation of fuel reduction activities in the wildland-urban interface and in inventoried roadless areas. Figure 8 shows, for each Forest Service region, the number of decisions with fuel reduction activities in the wildland-urban interface and the number and frequency of appeals, objections, and litigation of such decisions by region. The Southern Region (Region 8) had the most decisions in the wildland-urban interface, while the Northern Region (Region 1) had the highest number of appeals and objections of such decisions, and the highest rate at which decisions were challenged, considering both appeals and objections. According to survey respondents, over half of these decisions (696) contained definitions of wildland-urban interface that were based on the definition provided in the January 4, 2001, Federal Register as refined by HFRA. HFRA Section 101(16) defines wildland-urban interface as an area within or adjacent to a community that is identified as at risk in a community wildfire protection plan. In addition, for areas for which a community wildfire protection plan is not in effect, the definition in HFRA includes areas (1) extending 1/2 mile from the boundary of an at-risk community; (2) within 1 1/2 miles of the boundary of an at-risk community, including any land that has, for example, a sustained steep slope, a geographic feature that could help when creating an effective firebreak, or Condition Class 3 land; or (3) adjacent to an evacuation route. Further, while many additional survey respondents who did not select this definition provided their own definition of wildland-urban interface, we found that 36 such respondents had definitions very similar to that contained in HFRA. Other respondents said they defined wildland-urban interface as it is referenced in their forests' National Forest Land Management Plans. Others said they used a combination of definitions from multiple sources. For example, in the Pacific Southwest Region, several wildland-urban interface definitions were based on both the Federal Register and the forests' National Forest Land Management Plans. Still others defined wildland-urban interface as an area within some distance from private land, or private lands with structures. The remaining respondents either said they did not have a definition for wildland-urban interface (14) or did not know the definition they used to identify the wildland-urban interface (49). Figure 9 shows, for each Forest Service region, the number of decisions with fuel reduction activities in inventoried roadless areas and the number and frequency of appeals, objections, and litigation of such decisions by region. The Intermountain Region (Region 4) had the most decisions with activities occurring in inventoried roadless areas and also the highest number of appeals, objections, and cases litigated. However, the Pacific Northwest Region (Region 6) had the highest rate at which decisions were challenged, considering both appeals and objections. A categorical exclusion (CE) is a category of actions for which neither an environmental assessment nor an environmental impact statement is required because the agency has determined that the actions do not individually or cumulatively have a significant effect on the quality of the human environment.
Agencies develop a list of categorical exclusions specific to their operations when they develop or revise their implementing procedures for the National Environmental Policy Act (NEPA), in accordance with the Council on Environmental Quality's NEPA regulations. When the Forest Service determines that the activities of a proposed decision fall within a category of activities the agency has already determined have no significant environmental impact, it approves the decision using one of the predetermined categorical exclusions established by the Secretary of Agriculture or the Chief of the Forest Service. Table 10 shows the types and frequency of categorical exclusions reported in our survey. They are divided into two types: those that require the agency to prepare a decision memo for each action approved using a categorical exclusion, and those that do not require such documentation. A summary of the major litigation that affected the exemption of categorical exclusions from the requirements of the National Environmental Policy Act process is shown in table 11. Starting in late 2003, these exemptions were challenged in court and were the subject of a Supreme Court ruling. Table 12 summarizes the litigation centered specifically on the validity of the Hazardous Fuel Reduction categorical exclusion, or Fuels CE, also known as CE #10. In addition to the individual named above, Steve Gaty (Assistant Director), Ulana Bihun, Sandra Davis, Justin Fisher, Cathy Hurley, Richard P. Johnson, Stuart Kaufman, Armetha Liles, Diane Lund, Robin Nazzaro, Alison O'Neill, and Shana Wallace made key contributions to this report.
Increases in the number and intensity of wildland fires have led the Department of Agriculture's Forest Service to place greater emphasis on thinning forests and rangelands to reduce the buildup of potentially hazardous vegetation that can fuel wildland fires. The public generally has an opportunity to challenge agency hazardous fuel reduction decisions with which it disagrees. Depending on the type of project being undertaken, the public can file a formal objection to a proposed decision, or can appeal a decision the agency has already made. Appeals and objections must be reviewed by the Forest Service within prescribed time frames. Final decisions may also generally be challenged in federal court. GAO was asked, among other things, to determine, for fiscal years 2006-2008, (1) the number of Forest Service fuel reduction decisions and the associated acreage; (2) the number of decisions subject to appeal and objection, the number appealed, objected to, and litigated, and the associated acreage; and (3) the outcomes of appeals, objections, and litigation, and the extent to which appeals and objections were processed within prescribed time frames. In doing so, GAO conducted a nationwide survey of forest managers and staff, interviewed officials in the Forest Service's regional offices, and reviewed documentation to corroborate agency responses. GAO requested, but did not receive, comments from the Forest Service on a draft of this report. Through a GAO-administered survey and interviews, Forest Service officials reported the following information: (1) In fiscal years 2006 through 2008, the Forest Service issued 1,415 decisions involving fuel reduction activities, covering 10.5 million acres. (2) Of this total, 1,191 decisions, covering about 9 million acres, were subject to appeal and 217—about 18 percent—were appealed. Another 121 decisions, covering about 1.2 million acres, were subject to objection and 49—about 40 percent—were objected to. The remaining 103 decisions were exempt from both objection and appeal. Finally, 29 decisions—about 2 percent of all decisions—were litigated, involving about 124,000 acres. (3) For 54 percent of the appeals filed, the Forest Service allowed the project to proceed without changes; 7 percent required some changes before being implemented; and 8 percent were not allowed to be implemented. The remaining appeals were generally dismissed for procedural reasons or withdrawn before they could be resolved. Regarding objections, 37 percent of objections resulted in no change to a final decision; 35 percent resulted in a change to a final decision or additional analysis on the part of the Forest Service; and the remaining 28 percent were set aside from review for procedural reasons or addressed in some other way. And finally, of the 29 decisions that were litigated, lawsuits on 21 decisions have been resolved, and 8 are ongoing. Of the lawsuits that have been resolved, the parties settled 3 decisions, 8 were decided in favor of the plaintiffs, and 10 were decided in favor of the Forest Service. All appeals and objections were processed within prescribed time frames—generally, within 90 days of a decision (for appeals), or within 60 days of the legal notice of a proposed decision (for objections).
The Workforce Investment Act created a new, comprehensive workforce investment system designed to change the way employment and training services are delivered. When WIA was enacted in 1998, it replaced the Job Training Partnership Act (JTPA) with three new programs—Adult, Dislocated Worker, and Youth—that allow for a broader range of services, including job search assistance, assessment, and training for eligible individuals. In addition to establishing three new programs, WIA requires that a number of other employment-related services be provided through a one-stop system, designed to make employment and training services easier for job seeker customers to access. WIA also requires that the one-stop system engage the employer customer by helping employers identify and recruit skilled workers. While WIA gives states and localities flexibility in implementing these requirements, the law emphasizes that the one-stop system should be a customer-focused and comprehensive system. Such a system gives job seekers the job search and support services they need and provides services that better meet employers' needs. (See fig. 1.) The major hallmark of WIA is the consolidation of services through the one-stop center system. Seventeen categories of programs—termed "mandatory partners"—with appropriations totaling over $15 billion from four separate federal agencies are required to provide services through the system. (See table 1.) WIA allows flexibility in the way these mandatory partners provide services through the one-stop system, permitting co-location in one building, electronic linkages, or referrals to off-site partner programs. While WIA requires these mandatory partners to participate, it did not provide additional funds to operate one-stop systems and support one-stop partnerships. As a result, mandatory partners are expected to share the costs of developing and operating one-stop centers. Beyond the mandatory partners, one-stop centers have the flexibility to include other partners in the one-stop system. Labor suggests that these additional, or optional, partners may help one-stop systems better meet specific state and local workforce development needs. These optional partners may include TANF or local private organizations. States have the option of requiring particular optional partners to participate in their one-stop systems. For example, in 2001, 28 states had formal agreements between TANF and WIA to involve TANF in the one-stop system. In addition, localities may adopt other partners to meet the specific needs of the community. About $3.3 billion was appropriated in fiscal year 2003 for the three WIA programs—Adult, Dislocated Worker, and Youth. The formulas for distributing these funds to the states were left largely unchanged from those used to distribute funds under JTPA and are based on such factors as unemployment rates, including the number of long-term unemployed, and the relative number of low-income adults and youth in the population. In order to receive their full funding allocation, states must demonstrate the effectiveness of their three WIA programs by tracking and reporting a variety of performance measures. These performance measures gauge program results in the areas of job placement and retention, earnings change, skill attainment, and customer satisfaction. WIA requires states to use Unemployment Insurance (UI) wage records to gather this information about WIA participants.
States are held accountable by Labor for their performance in these areas and may suffer financial sanctions if they fail to meet their expected performance standards. WIA did not establish any comprehensive measures to assess the overall performance of the one-stop system. WIA also requires that training providers wishing to serve individuals' training needs through WIA's Adult and Dislocated Worker Programs meet key data reporting requirements, including completion rates, job placement rates, and wages at placement for all students they serve, including those not funded under WIA. WIA requires the collection of these outcome data so that job seekers receiving training can use them to make more informed choices about training providers. Unlike prior systems, WIA requires that individuals eligible for training under the Adult and Dislocated Worker Programs receive vouchers—called Individual Training Accounts—which they can use for the training provider and course offering of their choice, within certain limitations. WIA also requires these data so that states and localities can assess training providers' performance. For example, a state might only allow training providers' courses with an 80-percent completion rate to remain on the training provider list. If a course fails to meet that level, it would no longer be allowed to serve WIA-funded individuals. Finally, WIA called for the development of workforce investment boards to oversee WIA implementation at the state and local levels. At the state level, WIA requires, among other things, that the workforce investment board assist the governor in helping to set up the system, establish procedures and processes for ensuring accountability, and designate local workforce investment areas. WIA also requires that boards be established within each of the local workforce investment areas to carry out the formal agreements developed between the boards and each partner and to oversee one-stop operations. WIA requires that private-sector representatives chair the boards and make up the majority of board members. This is to help ensure that the private sector is able to provide information on available employment opportunities and expanding career fields and to help develop ways to close the gap between job seekers and labor market needs. States and localities have found ways to use the flexibility in WIA to develop creative new ways to serve job seekers and employers. In particular, a group of 14 one-stops, identified as exemplary by government officials and workforce development experts for our study of promising one-stop approaches, has developed strategies for streamlining services for job seekers, engaging and serving employers, and building a solid one-stop infrastructure. All of the 14 centers in the study streamlined services for job seekers by ensuring that they can readily access needed services, by educating program staff about all of the one-stop services available to job seekers, or by consolidating case management and intake procedures. In addition, to engage employers and provide them needed services, all of the centers used strategies that included dedicating specialized staff to work with employers or industries, tailoring services to meet specific employers' needs, or working with employers through intermediaries, such as Chambers of Commerce or economic development entities.
Finally, to provide the infrastructure needed to support better services for job seekers and employers, many of the one-stops we visited found innovative ways to develop and strengthen program partnerships and to raise additional funds beyond those provided under WIA. (Figure 2 shows the locations of the 14 one-stop centers we visited.) All of the one-stop centers in our recent study focused their efforts on streamlining services for job seekers by ensuring that job seekers could readily access needed services, educating program staff about all of the one-stop services available to job seekers, or consolidating case management and intake procedures. To ensure that job seekers could readily access needed services, one-stops we visited allocated staff to help them navigate the one-stop system, provided support to customers with transportation barriers, and expanded services for one-stop customers. For example, managers in Erie, Pennsylvania, positioned a staff person at the entrance to the one-stop to help job seekers entering the center find needed services and to assist exiting job seekers if they did not receive the services they sought. In addition to improving access to one-stop center services on-site, some of the one-stops we visited found ways to serve job seekers who may have been unable to come into the one-stop center due to transportation barriers or other issues. For example, in Boston, Massachusetts, the one-stop placed staff in off-site locations, including family courts, correctional facilities, and welfare offices, to give job seekers ready access to employment and program information. Finally, one-stops also improved job seeker access to services by expanding partnerships to include optional service providers—those beyond the program partners mandated by WIA. These optional partners ranged from federally funded programs, such as TANF, to community-based organizations providing services tailored to meet the needs of local job seekers. The one-stop in Dayton, Ohio, was particularly proactive in forming optional partnerships to meet job seekers' service needs. At the time of our visit, the Dayton one-stop had over 30 optional partners on-site. To educate program staff about one-stop services, centers used cross-training sessions to inform staff about the range of services available at the one-stop. Cross-training activities ranged from conducting monthly educational workshops to operating a shadow program to help staff become familiar with other programs' rules and operations. Officials in Salt Lake City, Utah, reported that cross-training improved staff understanding of programs outside their area of expertise and enhanced their ability to make referrals. The Pikeville, Kentucky, one-stop supported cross-training workshops in which one-stop staff from different partner programs educated each other about the range of services they could provide. After learning about the other programs, Pikeville staff collaboratively designed a service delivery flow chart that effectively routed job seekers to the appropriate service providers, providing a clear entry point and a clear path from one program to another. In addition, the Vocational Rehabilitation staff at the Pikeville one-stop told us that cross-training other program staff about the needs of special populations enabled them to more accurately identify hidden disabilities and to better refer disabled customers to the appropriate services.
Centers also sought to reduce the duplication of effort across programs and the burden on job seekers navigating multiple programs by consolidating case management and intake procedures across programs through the use of shared service plans for customers and shared computer networks. Ten of the 14 one-stops we visited consolidated their intake processes or case management systems. This consolidation took many forms, ranging from having case workers from different programs work as a team developing service plans for customers to having a shared computer network across programs. For example, in Blaine, Minnesota, caseworkers from the various one-stop programs met regularly to collaborate in developing and implementing joint service plans for customers who were co-enrolled in multiple programs. To efficiently coordinate multiple services for one-stop customers in Erie, Pennsylvania, one-stop staff used a networked computer system with a shared case management program, so that all relevant one-stop program staff could share access to a customer's service plan and case file. In Kansas City, Missouri, the Youth Opportunity Program and the WIA Youth Program staff shared intake and used a combined enrollment form to alleviate the burden of multiple intake and assessment forms when registering participants. All of the one-stops we visited engaged and served employers by dedicating specialized staff to establish relationships with employers or industries, by working with employers through intermediaries, or by providing specially tailored services to meet employers' specific workforce needs. One-stop officials told us that engaging employers was critical to successfully connecting job seekers with available jobs. In order to encourage employers' participation in the one-stop system, specialized staff reached out to individual employers and served as employers' primary point of contact for accessing one-stop services. For example, the one-stop in Killeen, Texas, dedicated specialized staff not only to serve as the central point of contact for receiving calls and requests from employers but also to identify job openings available through employers in the community. In addition to working with individual employers, staff at some of the one-stops we visited also worked with industry clusters, or groups of related employers, to more efficiently meet local labor demands—particularly for industries with labor shortages. For instance, the one-stop in Aurora, Colorado, dedicated staff to work with specific industries, particularly the healthcare industry. In response to a shortage of 1,600 nurses in the Denver metro area, the Aurora one-stop assisted in the creation of a healthcare recruitment center designed to provide job seekers with job placement assistance and healthcare-related training. In addition to dedicating specialized staff, all of the one-stops we visited worked with intermediaries to engage and serve employers. Intermediaries, such as a local Chamber of Commerce or an economic development entity, served as liaisons between employers and the one-stop system, helping one-stops to assess the workforce needs of employers while connecting employers with one-stop services. For example, the one-stop staff in Clarksville, Tennessee, worked with Chamber of Commerce members to help banks in the community that were having difficulty finding entry-level employees with the necessary math skills.
To help connect job seekers with available job openings at local banks, the one-stop developed a training opportunity for job seekers that was funded by Chamber members and was targeted to the specific skills needed for employment in the banking community. Specialized staff at many of the one-stops we visited also worked with local economic development entities to recruit new businesses to the area. For example, the staff at the Erie, Pennsylvania, one-stop worked with a range of local economic development organizations to establish an employer outreach program that developed incentive packages to attract new businesses to the community. Finally, all of the one-stops we visited tailored their services to meet employers' specific workforce needs by offering an array of job placement and training assistance designed for each employer. These services included specialized recruiting, pre-screening, and customized training programs. For example, when one of the nation's largest cabinet manufacturers was considering opening a new facility in the eastern Kentucky area, the one-stop in Pikeville, Kentucky, offered a tailored set of services to attract the employer to the area. The services included assisting the company with pre-screening and interviewing applicants and establishing an on-the-job training package that could use WIA funding to offset up to 50 percent of each new hire's wages during the 90-day training period. The Pikeville one-stop had responsibility for administering the application and assessment process for job applicants, including holding a 3-day job fair that resulted in the company hiring 105 people through the one-stop and a commitment to hire 350 more in the upcoming year. According to a company representative, the incentive package offered by the one-stop was the primary reason the company chose to build a new facility in eastern Kentucky instead of another location. To build the solid infrastructure needed to support better services for job seekers and employers, many of the one-stops we visited developed and strengthened program partnerships and raised funds beyond those provided under WIA. Operators at 9 of the 14 one-stops we visited fostered the development of strong program partnerships by encouraging communication and collaboration among partners through functional teams and joint projects. Collaboration through teams and joint projects allowed partners to better integrate their respective programs and services, as well as pursue common one-stop goals and share in one-stop decision-making. For example, partners at the Erie, Pennsylvania, one-stop center were organized into four functional teams—a career resource center team, a job seeker services team, an employer services team, and an operations team—which together operated the one-stop center. As a result of the functional team meetings, partners reported that they worked together to solve problems and develop innovative strategies to improve services in their respective functional areas. One-stop managers at several of the sites in our study told us that the co-location of partner programs in one building facilitated the development of strong partnerships. For this reason, one-stop managers at several of the centers reported that they fostered co-location by offering attractive physical space and flexible rental agreements.
For example, in Pikeville, Kentucky, the local community college donated free space to the one-stop on its conveniently located campus, making it easier to convince partners to relocate there. Partners were also eager to relocate to the Pikeville one-stop because they recognized the benefits of co-location for their customers. For instance, staff from the Vocational Rehabilitation Program said that co-location at the one-stop increased their customers' access to employers and employment-related services. Several one-stops that did not co-locate found ways to create strong linkages with off-site partners. For example, in addition to regular meetings between on-site and off-site staff, the one-stop in Aurora, Colorado, had a staff person designated to act as a liaison and facilitate communication between on-site and off-site partners. Nationwide, co-location of partner services has been increasing since WIA was enacted. For example, in 2000, 21 states reported that Education's Vocational Rehabilitation Program was co-located at the majority of their one-stops; this number increased to 35 states by 2001. Similarly, TANF work services were co-located in at least some one-stops in 32 states in 2000, increasing to 39 states by 2001. Managers at all but 2 of the 14 one-stops we visited said that they were finding ways to creatively increase one-stop funds through fee-based services, grants, or contributions from partner programs and state or local governments. Managers said these additional funds allowed them to cover operational costs and expand services despite limited WIA funding to support one-stop infrastructure and restrictions on the use of program funds. For example, one-stop operators in Clarksville, Tennessee, reported that they raised $750,000 in fiscal year 2002 through a combination of fee-based business consulting, drug testing, and drivers' education services. Using this money, the center was able to purchase a new voicemail and computer network system, which facilitated communication among staff and streamlined center operations. Centers have also been proactive about applying for grants from public and private sources. For example, the one-stop center in Kansas City, Missouri, had a full-time staff person dedicated to researching and applying for grants. The one-stop generated two-thirds of its entire program year 2002 operating budget of $21 million through competitive grants available from the federal government as well as from private foundations. This money allowed the center to expand its services, such as through an internship program in high-tech industries for at-risk youth. One-stop centers also raised additional funds by soliciting contributions from local or state government and from partner agencies. For instance, the Dayton, Ohio, one-stop received $1 million annually from the county to pay for shared one-stop staff salaries and to provide services to job seekers who did not qualify for services under any other funding stream. Dayton one-stop partners also contributed financial and in-kind resources to the center on an as-needed basis. Despite the successes state and local officials are having as they implement WIA, some key aspects of the law, as well as Labor's lack of clear guidance in some areas, have stymied their efforts.
First, the performance measurement system is flawed—the need to meet certain performance measures may be causing one-stops to deny services to some clients who may be most in need of them; there is no measure that assesses overall one-stop performance; and the data used to measure outcomes are outdated by the time they are available and are, therefore, not useful in day-to-day program management. Second, funding issues continue to plague the system. The funding formulas used to allocate funds to states and local areas do not reflect current program design and have caused wide fluctuations in funding levels from year to year. In addition, WIA provided no separate funding source to support one-stop infrastructure, and developing equitable cost-sharing agreements has not always been successful, largely because of the limitations in the way funds for some of the mandatory programs can be spent. Third, the current provision for certifying training providers as eligible is considered overly burdensome by many providers and may reduce training options for job seekers as providers have withdrawn from the WIA system. Finally, state officials have told us that they need more help from Labor in the form of clearer guidance and instructions and greater opportunities to share promising practices in managing and providing services through their one-stop centers. The performance measurement system developed under WIA may be causing some clients to be denied services and does not allow for an accurate understanding of WIA's effectiveness. First, the need to meet performance levels may be the driving factor in deciding who receives WIA-funded services at the local level. Officials in all five states we visited for one study told us that local areas are not registering many WIA participants, largely because local staff are reluctant to provide WIA-funded services to job seekers who may be less likely to find employment or experience earnings increases when they are placed in a job. For example, one state official described how local areas were carefully screening potential participants and holding meetings to decide whether to register them. As a result, individuals who are eligible for and may benefit from WIA-funded services may not be receiving services that are tracked under WIA. We found similar results in our studies of older workers and incumbent workers. Performance levels for the measures that track earnings change for adults and earnings replacement for dislocated workers may be especially problematic. Several state officials reported that local staff were reluctant to register already employed adults or dislocated workers. State and local officials explained that it would be hard to increase the earnings of adults who are already employed or replace the wages of dislocated workers, who are often laid off from high-paying, low-skilled jobs or from jobs that required skills that are now obsolete. In addition, for dislocated workers, employers may provide severance pay or workers might work overtime prior to a plant closure, increasing these workers' earnings before they are dislocated. Many dislocated workers who come to the one-stop center, therefore, have earned high wages just prior to being dislocated, making it hard to replace—let alone increase—their earnings. If high wages are earned before dislocation and lower wages are earned after job placement through WIA, the wage change will be negative, depressing the wage replacement level.
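As a simple illustration (hypothetical figures of our own, not data from our studies): a dislocated worker who earned $40,000 a year before a layoff and $28,000 a year after placement through WIA yields an earnings replacement rate of

\[
\frac{\text{post-placement earnings}}{\text{pre-dislocation earnings}} = \frac{\$28{,}000}{\$40{,}000} = 70\ \text{percent},
\]

pulling down the local area's average replacement rate even when the placement itself is a reasonable outcome for the worker.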
As a result, a local area may not meet its performance level for this measure, discouraging service to those who may need it. Second, outcomes are measured largely using unemployment insurance (UI) wage data, but these data suffer from time delays of as much as 14 months, making the data outdated by the time they are available. For example, in a survey we conducted in 2001, we asked states how quickly job placement outcome data would be available to them from UI wage records. We found that for 30 states, the earliest time period that job placement data would be available was 6 months after an individual entered employment, with 15 states reporting that it may take 9 months or longer. Similarly, over half of states reported that obtaining the necessary information on employment retention could take a year or longer. In fact, currently available data on the wage-related measures reflect performance from the previous program year. While UI wage records are the best data source currently available for documenting employment, the lack of timely data makes it difficult for state and local officials to use the performance measures for short-term program management, including improving one-stop services. Some states and localities have developed other means, sometimes adding additional performance measures, to fill this information gap. Finally, there are no measures to gauge the performance of the one-stop system as a whole. At least 17 programs provide services through the one-stop system, and most have their own performance measures. Although these performance measures may be used for assessing outcomes for individual programs, they cannot be used to measure the success of the overall system. For example, no program has a measure to track job seekers who use only self-service or informational activities offered through the one-stop, which may constitute a large proportion of job seekers. Not knowing how many job seekers use the one-stop's services limits the one-stop's ability to assess its impact. Furthermore, state and local officials told us that having multiple performance measures has impeded coordination among programs. There has been limited progress in developing overall performance measures for the one-stop system. Labor convened a working group in September 2001 to develop indicators of the one-stop system's performance, but the group has not yet issued them. As states and localities have implemented WIA, they have been hampered by funding issues, including flawed funding formulas and the lack of a funding source dedicated specifically to the one-stop infrastructure. We identified several issues associated with the current formulas: formula factors used to allocate funds are not aligned with the target populations for these programs; there are time lags in the data used to determine these allocations; and there is excessive funding volatility associated with the Dislocated Worker Program that is unrelated to fluctuations in the target populations. As a result, states' funding levels may not always be consistent with their actual need for services. In addition, no funding source exists with which to fund the one-stop infrastructure, and the volatile funding levels that states have experienced in the past 3 years have limited their ability to plan and develop their one-stop systems under WIA. Some of the factors used in the formulas to allocate funds are not clearly aligned with the programs' target populations.
For example, the Youth program targets a specific group of low-income youth with certain barriers to employment. However, two-thirds of its funds are distributed based on two factors that measure general unemployment rather than youth unemployment. The remaining third is distributed according to the number of low-income youth in states, but even this factor does not measure low-income youth who face barriers to employment. The target population and formula for the WIA Adult program also are misaligned. Basic services provided through the Adult program are open to all adults regardless of income, while low-income adults and public assistance recipients have priority for training and other more intensive services. However, the WIA Adult allocation formula is more narrowly focused on states' relative shares of excess unemployment, unemployment in Areas of Substantial Unemployment (ASUs), and low-income adults. Finally, the Dislocated Worker Program is targeted to several specific categories of individuals, including those eligible for unemployment insurance and workers affected by mass layoffs. The factors used to distribute Dislocated Worker funds are not, however, specifically related to these populations. Two-thirds of program funds are distributed according to factors that measure general unemployment. One-third is distributed according to the number of long-term unemployed, a group that is no longer automatically eligible for the program. In addition to formula misalignment, allocations may not reflect current labor market conditions because there are time lags between when the data are collected and when the allocations are available to states. The oldest data are those used in the Youth and Adult program formulas to measure the relative numbers of low-income individuals in the states. The decennial Census is the source for these data, and allocations under this factor through 2002 are based on data from the 1990 Census. The data used to measure two of three factors for both the Youth and Adult programs are more recent, but are still as much as 12 months out of date. The time lags for the data used to calculate Dislocated Worker allocations range from 9 months to 18 months. Finally, funding for the Dislocated Worker Program suffers from excessive and unwarranted volatility—it is significantly more volatile, as much as 3 times more so, than funding for either the Youth or Adult program. Some states have reported that this volatility makes program planning difficult. While some degree of change in funding is to be expected due to changing dislocations in the workforce, changes in funding do not necessarily correspond to these changes. For example, changes in the numbers of workers affected by mass layoffs from year to year—one measure of dislocation activity—ran counter to changes in Dislocated Worker allocations in several states we examined. In New York, for example, dislocations due to mass layoffs increased by 138 percent in 2001, but funding allocations that year decreased by 26 percent. Conversely, in 1999, New York's dislocations decreased by 34 percent, while funding allocations actually increased by 24 percent. Several aspects of the Dislocated Worker formula contribute to funding volatility and to the seeming lack of consistency between dislocation and funding.
The excess unemployment factor has a "threshold" effect—states may or may not qualify for the one-third of funds allocated under this factor in a given year, based on whether or not they meet the threshold condition of having at least 4.5 percent unemployment statewide. As a result, small changes in unemployment can cause large changes in funding, and when the economy is strong and few states have unemployment over 4.5 percent, the states that do qualify for this pot of funds may experience large funding increases even if their unemployment falls. In addition, the Dislocated Worker formula is not subject to the additional statutory provisions that mitigate volatility in Youth and Adult program funding. These provisions include "hold harmless" and "stop gain" constraints that limit changes in funding to between 90 and 130 percent of each state's prior year allocation and also "small state minimums" that ensure that each state receives at least 0.25 percent of the total national allocation. While these provisions prevent dramatic shifts in funding from year to year, they also result in allocations that may not track changes in the programs' target populations as closely. Developing alternative funding formulas to address the issues we have identified is an important but challenging task. This task is complicated by the need to strike an appropriate balance among various objectives, such as using formula factors that are best aligned with program target populations and reducing time lags in data sources, while also using available data sources to measure these factors as accurately as possible. In addition, there have been proposals for reauthorizing WIA that would substantially modify the program target populations and funding streams, which in turn would have consequences for revising the funding formulas. Many of WIA's mandatory partners have identified resource constraints as a major factor in their ability to participate in the one-stops. In fact, the participants in a GAO-sponsored symposium identified insufficient funding levels as one of the top three WIA implementation problems. Labor also found that in many states, the agencies that administer the Employment Service program had not yet been able to co-locate within the one-stops. Employment Service officials and one-stop administrators we spoke with told us that this was often because they still had leases on existing facilities and could not afford to incur the costs of breaking those leases. Limited funding made it even more difficult to assign additional personnel to the one-stop or to devote resources to developing electronic linkages with the one-stop. In the states we visited, mandatory partners told us that limited funding was a primary reason that, even when they co-located staff at the one-stop, they did so on a limited basis. As a result, mandatory partners had to employ a wide range of methods to provide the required support for the operation of the one-stops. Across all the sites we visited for an early implementation study, WIA's Adult and Dislocated Worker programs and, across most sites, Employment Service, were the only partners consistently making monetary contributions to pay for the one-stops' operational costs. Other mandatory partners tended to make in-kind contributions—for example, Perkins and Adult Education and Literacy partners provided computer or GED training.
Mandatory partners also noted that restrictions on the use of their funds can serve as another constraint affecting their ability to contribute resources to the one-stops. Some programs have caps on administrative spending that affect their ability to contribute to the support of the one-stop's operations. For example, WIA's Adult and Dislocated Worker programs have a 10-percent administrative cap that supports both the one-stops' operation and board staff at the local level. In addition, as we have reported in the past, regulations often prohibit states from using federal program funds for acquisition of real property or for construction. This means partners, such as those carrying out Perkins, cannot provide funds to buy or refurbish a one-stop building. Moreover, Adult Education and Literacy and Perkins officials noted that under WIA they can only use federal funds for the purpose of supporting the one-stop, though only a small portion of their funds come from federal sources. Training options for job seekers may be diminishing rather than improving, as training providers reduce the number of course offerings they make available to WIA job seekers. According to training providers, the data collection burden resulting from participation in WIA can be significant and may discourage them from participating. For example, the requirement that training providers collect outcome data on all students in a class may mean calling hundreds of students to obtain placement and wage information, even if there is only one WIA-funded student in that class. Even if they used other methods that may be less resource-intensive, training providers said privacy restrictions might limit their ability to collect or report student outcome data. Training providers also highlighted the burden associated with the lack of consistency between the definitions states use for WIA and those they use for other mandatory partner programs. For example, the definition a state establishes for "program completer" for students enrolled in WIA can be different from the definition a state establishes for students enrolled in Education's Carl D. Perkins Vocational Education Program (Perkins). Training providers find the reporting requirements particularly burdensome given the relatively small number of individuals who have been sent for training. Guidance from Labor and Education has failed to address how training providers can provide this information cost-effectively. In addition to challenges arising from implementing portions of the law, state and local officials often cite the need for more help from Labor in terms of clearer guidance and definitions and greater opportunities for information sharing. Although Labor has provided broad guidance and technical assistance to aid the transition from JTPA to WIA, some workforce officials have told us that the guidance has not addressed specific implementation concerns. Efforts to design flexible programs that meet local needs could be enhanced if Labor addressed the concerns of workforce officials with specific guidance and disseminated information on best practices in a timely manner.
A number of our studies have recommended that Labor be more proactive and provide better guidance and clearer definitions:
- on participant registration policies and on performance measure definitions, to allow for accurate outcome tracking and better program accountability;
- on how to better administer the WIA dislocated worker program, including how to provide additional assistance to local areas using rapid response funds;
- on how to more effectively administer the WIA youth program, including how to recruit and engage parents, youth, and the business community; improve competition in contracts for services to youth; determine eligibility; and retain out-of-school youth; and
- on a definition of unliquidated obligations so that it includes funds committed at the point of service delivery and specifies what constitutes an obligation and the time frame for recording an obligation, in order to improve financial reporting.
Labor has taken limited steps to respond to these recommendations. It has released revised guidance on the performance measurement system and has allowed states to revise their negotiated performance levels, which may address possible disincentives to serving certain job seekers. Labor is also currently finalizing guidance for state and local areas on services for dislocated workers. In response to our recommendations pertaining to the WIA Youth Program, Labor agreed to issue a toolkit on effective youth councils; reach out to new providers to enhance competition; simplify eligibility documentation; and develop a best practices Web site on serving out-of-school youth. In addition, Labor agreed with our findings and recommendations related to providing clearer definitions of unliquidated obligations; however, it declined to consider obligations in assessing WIA's financial position. Finally, Labor has convened a one-stop readiness workgroup that included representatives from Education, HHS, and HUD. This group has developed a set of suggested strategies for addressing major WIA implementation issues and plans to disseminate a national issuance, signed by the heads of all the federal partner agencies, that would emphasize the commitment of these federal partners to the one-stop system. We have also recommended that Labor be more proactive in sharing various promising practices to help states and localities still struggling with implementation challenges. Our reports have recommended that Labor share promising practices in areas that include cost-effective methods of collecting training provider information, addressing the difficulties of using UI data in measuring outcomes, better ways to coordinate services for TANF clients through the one-stop, and better spending management strategies. While Labor has developed several mechanisms for providing guidance and allowing local one-stop administrators to share best practice information, these efforts have been limited. Labor is establishing a new unit within ETA—the Office of Performance and Results—whose function will be to coordinate efforts to identify and share promising approaches in areas such as the use of supplemental data sources to close gaps in UI data. In addition, Labor's primary mechanisms for distributing information about promising practices at one-stop centers are a Web site, forums, and conferences. The promising practices Web site, in particular, represents a good step toward building a mechanism to support information sharing among one-stop administrators.
However, neither Labor nor the Web site’s administrators have conducted a customer satisfaction survey or user evaluation of the site, so little is known about how well the site currently meets its objective to promote information sharing about promising practices at one-stop centers. In addition to the Web site, Labor cosponsors several national conferences to promote information sharing and networking opportunities for state and local grantees and stakeholders. Labor also hosted several forums during WIA implementation to allow information exchanges to occur between the department and state and local one-stop administrators. While these conferences and forums provide a venue for one-stop managers to talk with one another about what is and is not working at their centers, participation is limited to those who can physically take part. WIA represents a fundamental shift in the way federally funded employment and training services are delivered to job seekers and employers. It was, perhaps, a far more radical change than it initially appeared. But, in just under 3 years, states and localities have learned to embrace its flexibility, developing systems that meet local needs. They are doing what WIA envisioned—bringing on new partnerships and forging new relationships at all levels. They are actively working to engage the employer community and involve intermediaries and others to address the economic development needs of local communities. The process of implementation has not been perfect, but it is moving forward. Some aspects of the law that have caused difficulties may deserve attention during reauthorization. But, given the significant changes brought about by WIA, more time may be needed to allow a better assessment of what is working and what is not before making major changes in WIA’s structure. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Members of the Subcommittee may have. For future contacts regarding this testimony, please contact Sigurd R. Nilsen at (202) 512-7215. Individuals making key contributions to this testimony included Dianne Blank, Elisabeth Anderson, Katrina Ryan, and Tamara Harris. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Workforce Investment Act: Issues Related to Allocation Formulas for Youth, Adults, and Dislocated Workers. GAO-03-636. Washington, D.C.: April 25, 2003. Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003. Food Stamp Employment and Training Program: Better Data Needed to Understand Who Is Served and What the Program Achieves. GAO-03-388. Washington, D.C.: March 12, 2003. Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Workforce Investment Act: States’ Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002. 
Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002. Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony highlights findings from today's report on strategies that exemplary one-stop centers have implemented to strengthen and integrate services for customers and to build a solid one-stop infrastructure. It also shares findings and recommendations from our past work on challenges that states and localities have experienced as they implement WIA, which may be helpful as WIA is reauthorized. The workforce development system envisioned under WIA represents a fundamental shift from prior systems, and barely 3 years have passed since it was fully implemented. States and localities have found ways to use the flexibility in WIA to develop creative new approaches to providing services through their one-stop systems. In particular, a group of 14 one-stops, identified as exemplary by government officials and workforce development experts, developed promising strategies in several key areas. To streamline services for job seekers, they ensured that job seekers could readily access needed services, made sure that staff were knowledgeable about all of the one-stop services available, or consolidated case management and intake procedures. To engage and serve employers, the centers dedicated specialized staff to work with employers or industries, tailored services to meet specific employers' needs, or worked with employers through intermediaries. To build a solid one-stop infrastructure, the centers found innovative ways to develop and strengthen program partnerships and to raise additional funds beyond those provided under WIA. Our work on WIA implementation over the past 3 years has identified a number of issues that should be considered during WIA reauthorization. First, the performance measurement system is flawed--the need to meet certain performance measures may be causing one-stops to deny services to some clients who may most need them; there is no measure that assesses overall one-stop performance; and the outcome data are outdated by the time they are available and are not useful in day-to-day program management. Second, funding issues continue to plague officials. The funding formula used to allocate funds to states and local areas does not reflect current program design and often causes unwarranted fluctuations in funding levels from year to year. In addition, WIA provided no separate funding source to support one-stop infrastructure, and developing equitable cost sharing agreements has not always been successful. Third, many training providers consider the current process for certifying their eligibility to be overly burdensome, resulting in reduced training options for job seekers as providers have declined to serve WIA-funded clients. Finally, state officials have told us that they need more help from Labor in the form of clearer guidance and greater opportunities to share promising practices in managing and providing services through their one-stop centers.
The Pick-Sloan Missouri Basin Program was authorized by the Flood Control Act of 1944 as a comprehensive plan to manage the water and hydropower resources of the Missouri River Basin. The act was a combination of two plans: (1) the Sloan Plan, developed by the Department of the Interior’s Bureau of Reclamation (Bureau) and designed primarily to irrigate lands in the Upper Missouri River Basin and (2) the Pick Plan, developed by the Department of the Army’s Corps of Engineers (Corps) and designed primarily to control floods and provide for navigation on the Lower Missouri River Basin. The program encompasses an extensive network of multipurpose projects that provide for, among other things, flood control, navigation, irrigation, municipal and industrial water supply, and power generation. (Fig. 1 shows the location of the program’s hydropower generating facilities and the overall program area.) To accomplish these multiple purposes, the plan required compromise among the program’s participants. For example, in exchange for having their land permanently flooded by dams to produce such benefits as electricity and flood control, some participants anticipated the construction of irrigation projects. The program is administered by three federal agencies: (1) the Bureau, which operates seven multipurpose projects and is responsible for the water supply functions of the program’s projects, (2) the Corps, which operates six multipurpose projects and administers the flood control and navigation aspects of the program’s projects, and (3) Western, which markets the hydropower generated at the program’s generating facilities and constructs, operates, and maintains the program’s power transmission system. The federal investment in the Pick-Sloan Program has nonreimbursable and reimbursable components. The nonreimbursable component consists of the capital costs of constructing, among other things, the program’s flood control and navigation facilities. The reimbursable component consists of the capital costs of constructing the program’s power generation and transmission, irrigation, and municipal and industrial water supply facilities. The reimbursable federal investment is further divided into investments repaid with interest (for power facilities and municipal and industrial water supply facilities) and investments repaid without interest (for irrigation facilities). Irrigation fees, power revenues, and other revenues are used to repay the federal investment in constructing irrigation facilities. Irrigation fees repay the portion of the investment in irrigation facilities that the Secretary of the Interior determines to be within the irrigators’ ability to pay. In general, power revenues are used to recoup both power costs and that portion of the investment determined to exceed the irrigators’ ability to pay. The Pick-Sloan Program accounted for about 33 percent of the operating revenues generated during fiscal year 1994 by the 14 separate programs from which Western markets and transmits power. In annual revenues from the sale and transmission of electric power, Pick-Sloan is Western’s second largest program. The total federal investment in the program as of September 30, 1994, was about $4.5 billion. About $2.6 billion of the federal investment in the program is reimbursable through power revenues, and about $898 million of that amount had been repaid through September 30, 1994. 
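As simple arithmetic (our own illustration; the agencies do not report the figure in these terms), the unpaid balance of the power-reimbursable investment at that date was therefore roughly

\[
\$2.6\ \text{billion} - \$0.898\ \text{billion} \approx \$1.7\ \text{billion}.
\]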
Because certain of the Pick-Sloan Program’s irrigation facilities will not be completed as planned, a portion of the federal investment is unrecoverable. As originally authorized in 1944, portions of the program’s power facilities and water storage reservoirs were intended for use with irrigation facilities. The federal investment for these portions was thus considered an investment in irrigation, and repayment was to be made without interest and deferred until the irrigation facilities were completed. As a result of this deferral, power customers would not be obliged to repay the investment in facilities that were ultimately intended for irrigation. Under the original plan, about 33 percent of the program’s generating capacity was to be used to irrigate about 5.3 million acres. As the program progressed, only about 15 percent of the program’s power capacity would be needed for irrigation because the acreage planned for irrigation was reduced to about 3.1 million acres. As of September 30, 1994, the federal investment in power facilities intended for use with existing and planned irrigation facilities was $286 million, or about 15 percent of the approximately $1.9 billion total for that purpose. In addition, a portion of the program’s water storage reservoirs were intended for use with existing and planned irrigation facilities. As of September 30, 1994, the capital cost associated with this portion of the reservoirs totaled about $224 million. Although the program’s power facilities and storage reservoirs have been largely completed as planned, most of the planned irrigation facilities have not been constructed. As of September 30, 1994, only about 25 percent of the acreage planned for irrigation had been developed. Some of the program’s power facilities and reservoirs are now being used in conjunction with those irrigation facilities that have been completed.As a result, the associated federal investment is now scheduled for repayment. Power facilities representing about $7 million of the federal investment are now being used to provide irrigation pumping service to about 212,000 acres, and water storage reservoirs representing about $49 million of the federal investment are now being used to provide irrigation water to about 182,000 acres. These investments are scheduled for repayment between 2042 and 2047, according to Bureau officials. These officials also stated that the remaining portions of the program’s power facilities and reservoirs, which are intended for use with future irrigation facilities, are currently used to generate electricity for sale to power customers. The Bureau now considers all but one of the program’s incomplete irrigation facilities to be infeasible and believes that these projects will likely not be constructed. According to Bureau officials, the costs of developing the remaining acreage planned for irrigation outweigh the benefits that would accrue from irrigating that acreage. They said that although their conclusions are based on preliminary estimates, a more expensive and time-consuming analysis would probably not change their conclusions. As a result, the remaining federal investment—$454 million— is deferred. In addition, the amount of the federal investment that is considered unrecoverable will increase over time. As mentioned earlier, the portion of the power facilities planned for use with irrigation facilities represents about 15 percent of the program’s overall power capacity. 
As the overall federal investment in power increases, the amount of the investment associated with irrigation increases correspondingly. For example, while the total federal investment in power facilities increased from about $1.6 billion at the end of fiscal year 1987 to about $1.9 billion at the end of fiscal year 1994, the corresponding 15-percent portion of this investment that was associated with irrigation increased from about $249 million to about $286 million. Legislation currently precludes reallocation of the investment by the Bureau and Western from one purpose of the program to another without congressional authorization. The DOE Organization Act of 1977 precludes revision by the Bureau of the cost allocations and project evaluation standards without prior congressional approval. The Water Resources Development Act of 1986 directed that the program proceed to its ultimate development. According to Western officials, these acts preclude changes in the program's repayment criteria. The Congress reallocated a portion of the federal investment in power facilities and storage reservoirs intended for irrigation when it passed the Garrison Diversion Unit Reformulation Act of 1986. The act implemented recommendations in the Garrison Diversion Unit Commission's Final Report, submitted to the Congress and to the Secretary of the Interior on December 20, 1984. The Commission was created by the Congress to review North Dakota's needs for water development and to propose modification to the Garrison Diversion Unit. Among other things, the act terminated the development of about 876,000 of the acres planned for irrigation under the program. Also as a result of the act, Western scheduled repayment of the existing federal investment in the power facilities and storage reservoirs intended for use in irrigating this acreage. Thus, about $147 million in federal investment was reallocated for recovery through power revenues. The act directed that Western (1) attempt to minimize any rate increase and (2) phase in any such increase over a 10-year period. According to Western officials, because the investment is to be repaid over 50 years, the power rate was not appreciably affected by this reallocation of the federal investment. The impact of recovering the $454 million investment through power revenues could vary significantly depending on many factors, including the amount Western passes on to its power customers. Consistent with the way investments in power are typically repaid (within 50 years and with interest), recovering the full amount through power revenues could result in an increase in Western's wholesale power rate of as much as 14.6 percent, according to Western's calculations. Western officials said the following about this scenario:
- The potential rate increase of 14.6 percent assumes that the entire amount of the increased financial requirement would be passed through to existing power customers, without any offsetting reductions in the operating expenses of Western, the Corps, or the Bureau (any offsetting reductions could lessen the need for a rate increase). Western officials noted that such expenses could decrease as a result of Western's ongoing restructuring efforts.
- Since Pick-Sloan's power customers purchase power wholesale and resell it to retail customers, it is difficult to estimate accurately to what extent, if any, the retail customers would be affected by a rate increase at the wholesale level.
- Changes in the terms of repayment, such as phasing in a rate increase as was done in 1986, would lessen the effect of the increase.
- The estimated rate increase assumes repayment of the $454 million through power revenues without an overall assessment of the program. Any general assessment of the program could lead to changes in the current cost allocations and rates.
- Factors outside of Western's control, such as the amount of water available for power generation, could affect any potential impact on the rates.
- The amount of the federal investment in storage reservoirs that would be redirected for repayment through power revenues is uncertain because some of this investment could be assigned to other program purposes, thereby lessening any effect on the rates.
The Department of the Interior's Inspector General reported in 1993 on the unrecoverable federal investment in the Pick-Sloan Program attributable to infeasible irrigation projects. The report recognized that the majority of the program's irrigation facilities were infeasible and thus would likely never be completed, and it criticized the Bureau's continuing assumption that the project would ultimately be developed as planned. The Inspector General recommended, among other things, that the Bureau request that the Congress deauthorize—that is, terminate from the program—the infeasible acreage and reallocate the federal investment in the power facilities and storage reservoirs intended for planned irrigation facilities for repayment through power revenues. The Bureau concurred with the Inspector General's recommendations and agreed to a target date of February 1995 for submitting information to the Congress in response to these recommendations. The Bureau provided us with a draft copy of the list of the infeasible irrigation facilities that it developed in response to the Inspector General's report, but as of April 18, 1996, the Bureau had not yet submitted this information to the Congress. According to Bureau officials, the Bureau is continuing to analyze the potential alternatives for recovering the portion of the federal investment that is currently unrecoverable. For example, the Bureau is assessing the impact of reallocating the investment on the basis of the current use of the program's facilities rather than on the program's planned long-term development. The Inspector General's 1993 report also identified another impact of recovering the $454 million through power revenues. Based on 1992 data, the Inspector General calculated, using a 7.25 percent interest rate, that carrying the unrecoverable federal investment in power facilities and storage reservoirs as an investment in irrigation facilities results in an interest cost to the Treasury of about $30 million annually because the investment is carried without interest. Repaying the unrecoverable federal investment through power revenues would necessitate an annual interest charge. Western officials noted that such an interest payment would likely be less than that calculated by the Inspector General because Western would expect to use a lower interest rate (likely 4 percent) that is based on the weighted average of the interest rates associated with the program's outstanding debt.
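To see the scale of the interest effect, a rough back-of-the-envelope check (our own illustration applied to the $454 million balance, not the Inspector General's actual 1992 computation) is

\[
0.0725 \times \$454\ \text{million} \approx \$33\ \text{million per year}, \qquad
0.04 \times \$454\ \text{million} \approx \$18\ \text{million per year},
\]

which is broadly consistent with the Inspector General's estimate of about $30 million annually at 7.25 percent and with Western's expectation that a lower, debt-weighted rate would produce a smaller annual charge.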
We provided a draft of this statement to and discussed its contents with the Bureau's Regional Director of the Great Plains Region; the Bureau's Washington Director, Policy and External Affairs; Western's Acting Area Manager for the Pick-Sloan Program; and the Deputy Assistant Administrator from the Department of Energy's Power Marketing Liaison Office. They clarified several points about the estimate of the potential impact on the power rate of recovering a portion of the federal investment through power revenues. These officials also suggested several technical revisions to our statement, which we incorporated as appropriate. We conducted our review between December 1995 and April 1996 in accordance with generally accepted government auditing standards. This concludes our prepared statement. It also concludes our work on this issue. Appendix I shows the operating characteristics of the Pick-Sloan Program's hydropower generating facilities; appendix II shows the allocation of the reimbursable and nonreimbursable federal investment among the program's purposes; appendix III shows the status of the federal investment reimbursable through power revenues; and appendix IV shows the status of the program's irrigation facilities. We will be glad to answer any questions you may have. [The appendix tables are not reproduced here; the notes below accompany the appendix IV tables.] The following four tables provide information on the status of the Pick-Sloan Program's existing, planned, and reauthorized irrigation facilities as provided by the Bureau in its draft list. Table IV.1 summarizes all these facilities. Tables IV.2, IV.3, and IV.4 provide details on existing, planned, and reauthorized units, respectively. The benefit-cost ratios that appear in these tables reflect the Bureau's calculation of the feasibility of developing the irrigation facilities. The ratio for an individual facility results from dividing the benefits expected to be derived from developing a facility by the expected cost of constructing and operating that facility. The Bureau considers a ratio exceeding 1.0 to indicate feasibility and a ratio of less than 1.0 to reflect infeasibility. We did not assess the accuracy of the information in the tables or in the notes, which were also provided by the Bureau. In 1967, the benefit-cost ratio for this facility (0.55) indicated that the irrigation development was infeasible, and the irrigation storage in the Bonny Reservoir was sold to the state of Colorado for recreation purposes. The project was authorized for completion on the basis of all of its benefits. The Reclamation Projects Act deauthorized funding for irrigation at Cedar Bluff because of a lack of water. The irrigation district was relieved of its obligation, and the state of Kansas paid the costs of irrigation storage on a discounted present-value basis. The available water is used by the state for recreation, fish and wildlife, and supplemental municipal water supply. The Kirwin Unit Definite Plan Report (June 1952) showed a benefit-cost ratio of 0.9. Correspondence in September 1952 from the Acting Commissioner and the Regional Director, Lower Missouri Region, requested that intangible (indirect) benefits be included in the justification.
Subsequent correspondence from the Regional Director provided the requested benefits that were included to justify the construction. The facility is part of the Three Forks Division, which has a total pumping demand of 3,199 kilowatts. Since Boysen storage is designated for water service, irrigation assistance reflects the currently unassigned storage costs for the reservoir. This facility has been integrated as part of the Bureau's Rehabilitation and Betterment Program. Under the latest plan, the facility would use a hydraulic turbine from the Yellowtail Dam instead of electric pumps for irrigation pumping. Unit 2 is infeasible. Since Boysen storage is designated for water service, the irrigation assistance reflects the currently unassigned storage costs for the reservoir. Storage assignment for unsold water out of Glendo. The sale of Glendo water is being impeded by unresolved environmental concerns. These facilities were individually reauthorized by acts of Congress. The date a facility will be placed in service is indeterminate pending a finding of feasibility and reauthorization or, in the case of reauthorized but suspended facilities, a determination of the facility's status or the disposition of the facility's construction appropriations. The appropriation was deauthorized by P.L. 100-516, which authorized the Mni-Wiconi Rural Water Supply Project. No studies were conducted to determine the facility's feasibility, but local interests suggested deauthorizing the irrigation development as a trade-off for developing a rural domestic water supply and distribution system to serve the needs of the Native American and non-Native American populations in the area. The power allocation for the facility was made available for the municipal and industrial system, and funding for irrigation was deauthorized. The irrigation facility was to remain as a planned facility of the Pick-Sloan Program. Approximately $1.1 million in federal investment in irrigation-related equipment had been expended on this facility as of September 30, 1994. This investment is currently categorized as construction-work-in-progress. The acreage for the Garrison Diversion Unit was reduced from 1,007,000 acres to 130,940 acres by P.L. 99-294 and to 115,740 acres by P.L. 102-575. At the time of the reformulation in 1986, it was recognized that the reduced scope of the project would result in economic infeasibility because of the loss of economies of scale and other factors. The reformulation was a compromise. Subsequent to the reallocation of the project's costs, it was determined that the project was also financially infeasible because the annual operation and maintenance costs exceeded irrigators' ability to pay. A team appointed by the Secretary of the Interior recommended a halt to further development of the project. Approximately $132.9 million in federal investment in irrigation-related equipment had been expended on this facility as of September 30, 1994. This investment is currently categorized as construction-work-in-progress. The facility was reauthorized by the Lake Andes-Wagner/Marty II Unit Act of 1992 (P.L. 102-575). The benefit-cost ratio for this unit was based on post-1979 methodologies. The Bureau employed "customized procedures" in calculating the ratio that allowed the consideration of a specialty crop (potatoes) as a benefit.
The Planning Report/Draft Environmental Impact Statement (1985) included a benefit-cost ratio of 0.56. Under the customized procedures that included specialty crops, livestock intensification, and alternative price normalization, the benefit-cost ratio was calculated at 1.02. On this basis, the Congress reauthorized the project. Approximately $3.7 million in federal investment in irrigation-related equipment had been expended on this facility as of September 30, 1994. This investment is currently categorized as construction-work-in-progress. Suspended; the assigned cost is for a Corps reservoir. Approximately $3.0 million in federal investment in irrigation-related equipment had been expended on this facility as of September 30, 1994. This investment is currently categorized as construction-work-in-progress. Approximately $7.1 million in federal investment in irrigation-related equipment had been expended on this facility as of September 30, 1994. This investment is currently categorized as construction-work-in-progress.
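Stated formally, the feasibility test reflected in these tables (our restatement of the Bureau's method as described above) is

\[
\text{benefit-cost ratio} = \frac{\text{expected benefits of developing the facility}}{\text{expected costs of constructing and operating the facility}},
\]

with a ratio above 1.0 indicating feasibility. The Lake Andes-Wagner/Marty II analysis illustrates how sensitive the test is to the benefits counted: the standard procedures yielded a ratio of 0.56, while the customized procedures yielded 1.02, the basis on which the Congress reauthorized the project.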
GAO discussed Western Area Power Administration's (WAPA) repayment of the federal investment in hydropower facilities and water storage reservoirs in the Pick-Sloan Missouri Basin Program, focusing on the: (1) portion of the investment that may not be recoverable; and (2) actions that could be implemented to recover a larger portion of the investment. GAO noted that: (1) about $454 million of the Pick-Sloan investment is not recoverable because some of the facilities were intended for use with irrigation facilities that have not been completed or are no longer feasible; (2) Department of Energy (DOE) expects the amount of federal investment that is unrecoverable to increase, since some facilities will require renovation and replacement; (3) as a result of the completion of some irrigation facilities, DOE expects about $56 million of the federal investment to be repaid between 2042 and 2047; (4) some of the $454 million investment that is considered unrecoverable could be recovered through WAPA hydroelectric power revenues; (5) WAPA hydroelectric power revenues cannot be used to repay the federal investment without legislative changes; (6) the impact of recovering the investment through power revenues could vary significantly depending on the terms of repayment and the amount that WAPA passes on to its customers; and (7) if WAPA passed the entire investment amount on to its customers, power rates could increase by 14.6 percent.
In fiscal year 2008, the Army Reserve and Army National Guard had about 197,000 and 360,400 soldiers, respectively, comprising 51 percent of the total Army, which also includes the active component. The Army organizes, trains, and equips its reserve components to perform assigned missions. The Army Reserve is a federal force that is organized and trained primarily to supply specialized combat support and combat service support skills to combat forces. The Army National Guard is composed of both combat forces and units that supply support skills, but in contrast to the Army Reserve, the Army National Guard has dual federal and state missions. When not called to active duty for a federal mission, Army National Guard units remain under the command and control of the governors, typically training for their federal mission or conducting state missions. In addition, National Guard forces can be mobilized under Title 32 of the United States Code for certain federally funded, domestic missions conducted under the command of the governors such as providing security at the nation's airports in the immediate aftermath of the September 11 terrorist attacks and assisting the Gulf Coast in the aftermath of Hurricane Katrina. Both reserve components are composed primarily of citizen soldiers who balance the demands of civilian careers with part-time military service. Reserve forces may be involuntarily called to active duty under three mobilization authorities. As shown in table 1, the President may involuntarily mobilize forces under two authorities with size and time limitations. Full mobilization, which would authorize the mobilization of forces for as long as they are needed, requires a declaration by Congress. In September 2001, following President Bush's declaration of a national emergency resulting from the terrorist attacks of September 11, 2001, DOD issued mobilization guidance that, among other things, allowed the services to mobilize reservists for up to 24 cumulative months under the President's partial mobilization authority. In January 2007, the Secretary of Defense issued updated guidance on the utilization of the force that, among other things, limits involuntary reserve component mobilizations to no more than 1 year at a time. During the Cold War, the Army's reserve components principally operated as a force in reserve, or strategic reserve, that would supplement active forces in the event of extended conflict. Members of the reserves generally served 39 days a year—1 weekend a month and an additional 2 weeks of duty. In addition, the reserve components have a small number of full-time personnel (Active Guard and Reserve personnel and military technicians) who perform the necessary day-to-day tasks, such as maintaining unit equipment and planning training events, that reserve units need to accomplish in order to maintain readiness for their missions and be able to deploy. The Army's resourcing strategy for a strategic reserve provided reserve units with varying levels of resources according to the priority assigned to their federal warfighting missions. Most reserve component units were provided with between 65 and 74 percent of their required personnel and 65 to 79 percent of their required equipment. This approach assumed that most reserve component forces would have a lengthy mobilization period with enough time to fully man, equip, and train their units after they were mobilized to attain the high level of operational readiness necessary for deployment.
Since September 11, 2001, however, the demand for Army forces and capabilities has been high, especially to support ongoing operations in Iraq and Afghanistan. Recognizing that its forces were being stressed by the demands of lengthy and repeated deployments, the Army has adopted a new force-generation model intended to improve units’ readiness over time as they move through phased training to prepare for a potential deployment. This contrasts with the previous approach in which, as a strategic reserve, units’ personnel and equipment levels were maintained below warfighting readiness levels until they were mobilized. Under the Army’s new model, the early phases of the cycle will entail formation and staffing of the unit and beginning individual and collective training, while later phases will concentrate on larger unit training. Figure 1 illustrates the planned movement of units through the reset, train/ready, and available phases of the Army force-generation model. Under the Army’s force-generation model as designed, reserve component units would be available for deployment for 1 year with 5 years between deployments. After returning home from a deployment, units remain in the reset phase for a fixed 1-year period and focus on restoring personnel and equipment readiness so that they can resume training for future missions. Following the reset phase, units enter the train/ready phases in which they progressively increase their training proficiency by completing individual and collective training tasks. As designed in the force-generation model, reserve component units remain in the train/ready phases for 4 years, although the amount of time is not fixed and may be reduced to meet operational demands. Upon completion of the train/ready phases, units enter the available year in which they can be mobilized to meet specific mission requirements. Under current DOD policy, involuntary reserve component mobilizations are limited to no more than 1 year in length. The force-generation process requires increasing resources for units to use in training to gain higher levels of proficiency prior to mobilization. DOD’s 2008 directive on managing the reserve components as an operational force (DOD Directive 1200.17, discussed later in this report) describes the components’ dual roles: “The reserve components provide operational capabilities and strategic depth to meet U.S. defense requirements across the full spectrum of conflict. In their operational roles, reserve components participate in a full range of missions according to their Services’ force-generation plans. Units and individuals participate in missions in an established cyclic or periodic manner that provides predictability for the combatant commands, the Services, Service members, their families and employers. In their strategic roles, reserve component units and individuals train or are available for missions in accordance with the national defense strategy. As such, the reserve components provide strategic depth and are available to transition to operational roles as needed.” The Army has made a number of changes to its force structure, as well as to its manning and equipping strategies, to better position its reserve components for the operational role. However, given the current high pace of operations, the Army has faced challenges in achieving sustainable mobilization rates for its citizen soldiers and in readying personnel and units before they are mobilized in order to maximize their availability to operational commanders after deployment. The Army has made four force-structure changes to better position its reserve components for the operational role.
First, the Army is undertaking a major reorganization, called the modular force initiative, designed to make Army forces more flexible and responsive by reorganizing combat and combat support forces from a division-based force to smaller, more numerous, modular brigade formations with significant support elements. In contrast to the Army’s previous division-based force with many different types of unique forces, the modular forces were designed to be standardized and interoperable so forces could be more easily tailored to meet operational needs. Under the modular reorganization, National Guard and Army Reserve units are to have the same designs, organizational structures, and equipment as their active component counterparts so that they can be operationally employed in the same manner as active component units. The Army reported in its 2009 Campaign Plan that it has converted or begun converting 256 (84 percent) of the 303 planned brigade formations. However, the Army has been focused on equipping and staffing units to support ongoing operations in Iraq and Afghanistan, and the equipment and personnel levels in nondeployed units have been declining. Further, as previously reported, the Army does not have a plan with clear milestones in place to guide efforts to equip and staff units that have been converted to the modular design, and the Army now anticipates that the converted modular units will not be fully staffed and equipped until 2019, more than a decade away. Furthermore, without adequate planning, the Army risks cost growth and further timeline slippage in its efforts to transform to a more modular and capable force. Second, the Army is changing the missions of some Army organizations and retraining soldiers to produce more soldiers and units with high-demand skills. For example, the Army is decreasing its supply of air defense, armor, and field artillery capabilities in order to increase its supply of special operations, civil affairs, and military police capabilities. The Army began these rebalancing efforts in fiscal year 2003 after military operations in response to the September 11, 2001, terrorist attacks generated high demand for certain forces. Among those forces in high demand were certain combat support and combat service support forces such as military police and transportation units. These support forces, which are also called enablers, reside heavily in the reserve components. The goals of rebalancing included helping to ease stress on units and individuals with high-demand skills and meeting the Army’s goal of executing the first 30 days of an operation without augmentation from the reserve component. As part of the rebalancing plan, the Army National Guard is converting six brigade combat teams into four maneuver enhancement brigades and two battlefield surveillance brigades that will perform combat support roles. As of February 2009, the Army reported that it had completed rebalancing 70,400 positions, about 50 percent of the approximately 142,300 positions scheduled to be rebalanced by 2015 across the active and reserve components. Third, the Army is increasing personnel within the reserve components. In January 2007, the Secretary of Defense announced an initiative to expand the total Army by approximately 74,200 soldiers to better meet long-term operational requirements, sustain the all-volunteer force, and build towards a goal of 5 years between mobilizations for the reserve components.
This initiative is expected to add 8,200 soldiers to the Army National Guard by 2010; 65,000 soldiers to the active component by fiscal year 2010; and 1,000 soldiers to the Army Reserve by 2013. The Secretary of Defense expects that with a larger force, individuals and units will, over time, deploy less frequently and have longer times at home between deployments. However, we have previously reported that the Army has not developed a comprehensive funding plan for the expansion initiative and that, lacking a complete and accurate plan, Congress and other decision makers may not have the information they need to consider the long-term costs and benefits associated with increasing Army personnel levels or gauge the amount of funding that should be appropriated to implement the initiative. Fourth, the Army eliminated some reserve force-structure positions that previously had been intentionally unfilled, largely for budgetary reasons. Specifically, the Army’s force-structure rebalancing, which began in fiscal year 2003, and the modular transformation efforts that began in 2004 reduced the force-structure allowances for the Army National Guard by 7 percent, from 376,105 to 349,157, and for the Army Reserve by about 4 percent, from 213,324 to 205,028, between 2005 and 2009. Concurrently, the Army’s Grow the Force plan increased the Army National Guard’s size by almost 2 percent, from 352,700 soldiers in fiscal year 2007 to 358,200 by fiscal year 2010, and the Army Reserve’s size by 3 percent, from 200,000 soldiers in fiscal year 2007 to 206,000 by 2013. When the reserve components were solely a strategic reserve, the Army routinely authorized units to be assigned fewer personnel than would be required for their wartime mission under the assumption that units could receive additional personnel when mobilized. By reducing the number of units, the Army was able to authorize the remaining units to be more fully manned. DOD established a policy in 2008 to promote and support the management of the reserve components as an operational force. The policy directed the services to align reserve component force structures, to the extent practicable, with established DOD goals for frequency and duration of utilization for units and individuals. In addition, the policy instructs the service Secretaries to manage their reserve components such that they provide operational capabilities while also maintaining strategic depth to meet U.S. military requirements across the full spectrum of conflict. Further, the policy directs the Secretaries to ensure sufficient depth of reserve component unit and individual capabilities to meet DOD’s established force-utilization goals. Those goals include planning for involuntary mobilizations of guard and reserve units such that they receive 5 years at home for every 1 year they are mobilized. The Army has adapted the strategies that it uses to staff its reserve components for the operational role, which requires Army reserve component units to achieve higher levels of personnel readiness and maintain a more stable cadre of personnel than they did as part of a strategic reserve. The Army has increased the number of personnel in reserve component units, given units higher priority for personnel as they near availability for deployment in the rotational cycle, established some personnel readiness goals, and modified its recruiting and retention strategies. The operational role has several implications for how the Army staffs its reserve component units.
First, as an operational force, Army reserve component units are now expected to be available to deploy for 1 year with 5 years between deployments, and more frequently when the Army faces increased demand for forces by the combatant commanders. To prepare for regular deployments, the Army now expects its reserve component units to progressively increase their personnel readiness on a cyclical basis as they near availability for deployment. The Army determines a unit’s personnel readiness level by comparing the unit’s level of available strength to the number of personnel required by the unit. Available strength is the portion of the unit’s assigned strength that is available for deployment to accomplish the unit’s assigned wartime mission. To be available, these personnel must meet a number of administrative, medical, and dental requirements and must hold their individual qualifications. As an operational force, reserve component units need to make efficient use of training time before deployment and build the cohesive force needed to achieve collective training proficiency. DOD’s policy that the service Secretaries program and execute resources as required to support a “train-mobilize-deploy” model means that units need to achieve high levels of personnel readiness and complete most of their training requirements prior to mobilization. This approach to training and mobilization contrasts with the strategic reserve’s “mobilize-train-deploy” approach, in which units would be brought up to full personnel strength and soldiers’ medical and dental issues would be addressed after mobilization. To implement the train-mobilize-deploy model, the Army has found that it needs to stabilize unit personnel by the time the unit is alerted for deployment or as early as possible in the force-generation cycle so that the unit can attain as much collective training proficiency as possible prior to mobilization. This approach allows the unit to minimize postmobilization training time and provide as much availability as possible to theater commanders. To staff reserve component units more fully, the Army has increased the percentage of required personnel that are assigned to reserve component units above strategic reserve levels and has established a long-range goal of achieving full personnel strength throughout the force-generation cycle for reserve components. As discussed previously, the Army decreased the size of its reserve components’ force structures while also increasing their end strength, which allowed remaining units to be more fully manned. Also, the Army has modified its approach to assigning personnel to reserve component units by giving units nearing deployment priority over other units in the assignment of soldiers and establishing some personnel readiness requirements for deploying units. Despite these changes, the Army has not adopted any overarching, uniform personnel readiness levels that units must achieve as they progress through each phase of the force-generation cycle. The Army has established some interim personnel readiness goals for units participating in a “RESET pilot” program. However, the Army reported in its 2009 Campaign Plan that current high global demands for Army forces are preventing units from achieving specific readiness levels as they progress through the phases of the force-generation cycle.
The Army plans to evaluate units in the pilot program through 2010 and use this information to identify lessons learned and determine what levels of personnel readiness will be required of reserve component units as they progress through the force-generation cycle. The reserve components have established several new initiatives to meet the recruiting and retention goals of an operational force. Both components have established incentives for current soldiers to recruit others. The Army National Guard established the Guard Recruiting Assistance Program, in which every Army National Guard member can function as a recruiter. The program provides a $2,000 monetary incentive to Guard soldiers for every new person they recruit who begins basic combat training. The Army Reserve’s Recruiting Assistance Program also provides a $2,000 monetary incentive to soldiers for every new person they recruit. Both components are also implementing targeted bonus programs to increase retention for soldiers with high-demand occupational specialties and for certain officer grades. Other Army National Guard recruitment and retention efforts include the Recruit Sustainment Program, which is designed to keep recruits interested in the Army National Guard as well as increase their preparedness while awaiting training, and the Active First Program, a pilot initiative in which soldiers are recruited to serve for an 8-year period that includes serving 3 years in the active component and 5 years in the Army National Guard. Additional Army Reserve recruitment and retention initiatives include a conditional release policy designed to control the number of reservists who leave the Army Reserve to enter the active Army, Army National Guard, or other service components; an education stabilization program that synchronizes new soldiers with a unit in the appropriate phase of the force-generation cycle so that the soldier can complete his or her college degree without the interruption of mobilization; and an employer partnership initiative in which soldiers are recruited to train and serve in the Army Reserve in a particular occupational specialty and work in a related occupation for one of the civilian employers that participate in this initiative. Further, the Army and its reserve components have begun several other initiatives to improve personnel readiness and unit stability prior to mobilization and improve the execution of the “train-mobilize-deploy” model required by DOD for an operational force. Although these initiatives are in various stages of implementation, and it is too early to assess their effectiveness, some of the steps that the Army and its reserve components have taken include the following: The Army has established a goal of issuing alert orders to reserve component units at least 12 months prior to their mobilization in order to provide them enough time to attain required levels of ready personnel for deployment. Army data show that the Army has increased the amount of notice it provides to mobilizing Army National Guard units from an average of 113 days in 2005 to 236 days in 2008. The Army Reserve began implementing the Army Selected Reserves Dental Readiness System in 2008 to reduce the number of nondeployable soldiers across the force by providing annual dental examinations and dental treatment for all soldiers regardless of their alert or mobilization status.
To reduce personnel attrition and increase unit stability prior to unit mobilizations without the use of stop-loss, the Army National Guard’s Deployment Extension Stabilization Pay program, when implemented, will make some soldiers eligible to receive up to $6,000 if they remain with their unit through mobilization and 90 days following demobilization. The initiative is scheduled to begin in September 2009. The Army Reserve is considering a similar program. To improve medical readiness across the reserve components, the Army National Guard is pilot testing an initiative, the Select Medical Pre-Deployment Treatment Program, that will provide limited medical treatment at no cost to eligible medically nondeployable soldiers in Army National Guard and Army Reserve units alerted for deployment. If the Army determines that the pilot is successful, it will consider expanding the program across the reserve components. Although the shift to the “train-mobilize-deploy” model increases the importance of the premobilization readiness tasks performed by full-time support staff, the Army has not modified its full-time support staffing requirements to reflect the needs of the operational role, and the reserve component units face difficulties in performing key readiness tasks at current staff levels. As of May 2009, the Army had not reevaluated the reserve components’ requirement for the full-time staff that are needed to perform key readiness tasks on a day-to-day basis in light of their new operational role. With most members of the Army National Guard and Army Reserve serving 2 days a month and 2 weeks out of the year, the reserve components rely on a small number of full-time personnel to perform the day-to-day tasks, such as maintaining unit equipment and planning training events, that reserve units need to accomplish in order to maintain readiness for their mission and be able to deploy. The Army Reserve Forces Policy Committee, U.S. Army Forces Command, and the Commission on National Guard and Reserves have reported that insufficient full-time support levels place the operational force at risk. The Army’s reserve components are not authorized the number of full-time personnel needed to meet the requirements established for their strategic role, and requirements for the operational role have not been determined. For fiscal year 2010, the Army National Guard and Army Reserve required about 119,000 full-time support positions but were authorized only 87,000 positions, or about 73 percent of the requirement. The current full-time support requirement is based on a manpower study conducted in 1999, when the reserve components were still primarily a strategic reserve. In subsequent years, the Army reviewed and adjusted the manpower analysis, but it did not conduct an analysis that incorporated the needs of an operational reserve. The last review performed was completed in 2006, prior to the issuance of the Secretary of Defense policy that limited involuntary mobilizations to 1 year and before an increased emphasis was placed on premobilization readiness. In 2007, the Army directed a study designed, in part, to measure the readiness benefit to the Army of increasing its reserve components’ full-time support. However, because of data limitations, the Army could not quantify the effect of full-time support on unit readiness.
As a result, the Army initiated an additional study to determine the link between full-time support levels and unit readiness before including additional funding for full-time support in future budget requests. Specifically, the Army has commissioned a study to assist it with identifying the existing requirements for full-time support, determining how the Army National Guard and Army Reserve have met these requirements in the past, and developing analytical links between full-time support and unit readiness. The Army does not plan to make any decision on full-time support resource levels until after this study is completed in September 2009. Mobilization of certain full-time support staff who serve dual roles, as full-time support staff and as deployable members of reserve units performing key logistics and maintenance tasks, has also created maintenance and readiness challenges for the Army’s reserve components. In the National Guard and Reserve Equipment Report for 2009, DOD reported that the average staffing of Army Reserve maintenance activities is at approximately 60 percent of requirements, and currently about 25 percent of the assigned staff is deployed. According to the report, mobilization of Army National Guard full-time support staff has reduced maintenance technician staffing by 71 percent overall during mobilization. The Army National Guard often hires temporary technicians to replace maintenance technicians who are mobilized. However, state National Guards, on average, hire only one temporary technician for every five maintenance technicians mobilized, due to the cost involved. To mitigate the maintenance backlog, the Army Reserve continues to use contractors, contracted maintenance support, and commercially available services. The Army has adapted its strategy for equipping its reserve components for the operational role by establishing a long-term equipping goal and, until it reaches this goal, giving units priority for equipment as they near their availability for deployment. Over the long term, the Army has established a goal of equipping all reserve units with 100 percent of their requirements by the end of fiscal year 2019. However, because the Army’s need for equipment currently exceeds the available supply, and equipment shortages are expected to continue for a number of years, the Army prioritizes the distribution of equipment to units that are deployed and preparing to deploy, consistent with its force-generation model. In addition, under the new “train-mobilize-deploy” model, reserve component units are also expected to complete most of their training requirements prior to mobilization so that they can provide as much time as possible to theater commanders within the 12-month limit on involuntary mobilizations. To accomplish these goals, the Army has established interim policies and guidance for equipping reserve component units. First, the Army intends for a unit to have 80 percent of its required equipment 365 days after the unit returns from deployment. Second, the Army has directed commanders to ensure that units report to the mobilization station with 90 to 100 percent of their required equipment. The Army faces challenges in limiting the frequency of mobilizations and increasing both personnel and unit readiness given the high pace of current operations.
Despite changes to its force structure, manning, and equipping strategies, at the current pace of operations, the Army’s reserve component force structure does not allow the Army to reach the Secretary of Defense’s goal of providing reservists 5 years demobilized for each year mobilized. As figure 2 shows, the Army’s reserve components have experienced a continued high level of mobilizations since 2001 in support of Operations Noble Eagle, Enduring Freedom, and Iraqi Freedom. As of June 2009, more than 110,000 Army National Guard and Army Reserve soldiers were mobilized in support of these operations. Due to this high demand for forces, the Army has been able to provide its reserve component soldiers with less than 4 years at home between mobilizations, on average. For example, many capabilities such as civil affairs, psychological operations, military police, transportation, and adjutant general companies and detachments are in high demand, so units with these skills are being mobilized much more frequently, sometimes with less than 3 years between deployments. Although unit mobilization frequency differs on a case-by-case basis, nearly all types of units are being mobilized more frequently than the Secretary’s goal of no more than 1 year mobilized every 5 years. For reserve component forces to be provided 5 years at home between mobilizations given the current force structure, the total number of Army reserve component soldiers mobilized would have to decline by about 54 percent from the June 2009 level, to approximately 51,000 soldiers. As figure 3 below shows, the number of reserve component soldiers that could be available for deployment decreases as the required average amount of time between mobilizations increases (a simple steady-state model of this relationship is sketched at the end of this passage). The Army’s current plans for its reserve component force structure would provide soldiers about 4 years at home between mobilizations, which is more than the current pace allows but less than the 5-year goal. According to Army officials, the current high pace is not expected to be permanent, and the Army must balance mobilization frequency goals with the need to meet current operational demands, maintain capabilities to perform the full range of missions expected under the National Military Strategy, and remain within the constraints of mobilization policies and force-size limitations, as well as expected future budgets. The Army currently projects that the high pace of operations will continue through fiscal year 2013, but it does not project when the Army will be able to achieve the Secretary’s goal of 5 years between deployments. As a result, the Army accepted the risk that more frequent reserve mobilizations may pose to its personnel recruitment and retention in order to be better positioned to achieve its other goals. Although officials report that Army reserve component units are meeting the Army’s required levels of ready personnel by the time that they deploy, the reserve component units continue to have difficulty achieving goals for personnel readiness and unit stability prior to mobilization. As a result, the Army has had to continue to take steps to build readiness after mobilization. However, the Army has found that addressing issues such as medical and dental problems after mobilization may disrupt predeployment training and reduce the amount of time units are able to be provided to theater commanders under current limits on involuntary mobilizations.
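The relationship in figure 3 can be approximated with simple steady-state arithmetic. The following is a minimal sketch, assuming 1-year involuntary mobilizations and a fixed rotational pool; the pool size is inferred from the report’s 1:5 figures rather than stated in it, and actual availability would be lower because not every soldier in the pool is deployable at any given time.

    # Steady-state mobilization arithmetic for a rotational force.
    # With 1-year mobilizations and D years at home between them, each
    # soldier is mobilized 1 year in every (D + 1), so the number that
    # can be kept mobilized at any one time is pool / (D + 1).

    def sustainable_mobilized(pool: int, dwell_years: int) -> int:
        return pool // (dwell_years + 1)

    # Inferred rotational pool: the report says meeting the 1:5 goal would
    # require cutting the roughly 110,000 soldiers mobilized in June 2009
    # by about 54 percent, to about 51,000, implying a pool of roughly
    # 51,000 * 6 = 306,000 soldiers (an inference, not a reported figure).
    pool = 51_000 * 6

    for dwell in (2, 3, 4, 5):
        print(f"{dwell} years at home -> ~{sustainable_mobilized(pool, dwell):,} mobilized")
    # 2 years -> ~102,000; 3 years -> ~76,500; 4 years -> ~61,200; 5 years -> ~51,000

The inferred pool is smaller than the components’ combined end strength because force-structure and deployability constraints keep many soldiers out of the rotation at any one time.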
The Army has begun to implement additional initiatives to improve personnel readiness and unit stability, but it is too early to evaluate their effectiveness. Reserve component units continue to have difficulty in achieving personnel readiness and unit stability goals before they are mobilized because of the number of soldiers who do not meet medical, dental, or individual training qualification requirements, as well as personnel attrition. A 2008 Army study of the pre- and postmobilization preparation of five Army National Guard brigade combat teams that mobilized between October 2007 and January 2008 found that none of the five units met deployment standards for the levels of personnel with individual qualifications and medical readiness when they arrived at their mobilization stations. The study also found that these units had experienced significant attrition, with an average of 59 soldiers leaving their units per month between the time they were alerted for mobilization and 90 days before mobilization, when the Army’s stop-loss policy prevented them from leaving the Army. As a result of the challenges faced in achieving desired personnel readiness levels, the Army and its reserve components have had to continue taking steps to improve individual and unit readiness late in the force-generation cycle and after mobilization. Such steps include addressing medical and dental issues and transferring personnel from nondeployed to deploying units to fill shortages. For example, according to Army mobilization officials, one unit that mobilized in September 2008 required the transfer of more than 900 soldiers, or 22 percent of the 4,122 required personnel, from other units within 2 weeks of its mobilization date in order to fill shortages and man the unit to a deployable level. Further, our surveys of and interviews with 24 recently deployed reserve component units found that nearly all of those units had to receive personnel transfers from outside their units to achieve the required personnel levels for deployment. According to Army officials, such transfers disrupt unit stability and cause personnel turbulence at a time when the units are working to attain collective training proficiency in preparation for deployment. Additionally, Army officials stated that personnel transfers disrupt premobilization training plans when they occur within the last 6 months prior to a unit’s mobilization date because more training has to be done after mobilization, which reduces operational availability to theater commanders. For these reasons, one of the chief lessons learned reported in a 2008 Army study of pre- and postmobilization preparation is that early assignment of personnel and stabilization of deploying units are necessary to make efficient use of training time and build a cohesive force so that the units can efficiently achieve required levels of collective training proficiency and provide as much operational availability as possible to theater commanders. Although the Army has taken steps in recent years to improve reserve component equipment inventories, it faces challenges in equipping units for training while supporting current high operational demands and, over the long term, may face challenges in meeting its equipment goals amid competing demands for resources. From 2003 to 2010, the Army requested $22.7 billion in its annual appropriations to equip the Army National Guard and Army Reserve.
Despite this effort, the Army National Guard reported in October 2008 that it had 76 percent of its required equipment, with only 63 percent of the required items located within the United States and available for training use. Similarly, the Army Reserve reported that it had 74 percent of its required equipment, with only 67 percent of the required items located within the United States. The Army is finding it difficult to provide units access to the same equipment for training that they will use overseas so they can attain training proficiency before they deploy. The demand for some items, such as mine resistant ambush protected vehicles and night vision equipment, has increased across the Army as operations have continued, and equipment requirements to support ongoing operations continue to evolve. As previously reported, these evolving requirements have made it difficult for the Army to communicate to deploying units what equipment will be needed in-theater and have challenged the reserve components to identify and transfer the right items. Moreover, the Army has directed reserve component units returning from overseas deployments to leave in-theater certain essential equipment items that are in short supply for use by follow-on forces. While this equipping approach has helped meet operational needs, it continues the cycle of reducing the pool of equipment available to nondeployed forces for unplanned contingencies and for training. We have previously reported that the continuing strategy of transferring equipment to deploying forces hampers the ability of nondeployed forces to train for future missions. Furthermore, the transformation to the modular structure has also placed demands on the Army’s equipment inventories because it requires modular units to have modern equipment as well as increased quantities of some items. Similarly, the initiative to expand the Army, which added six brigade combat teams and additional modular support units to the overall Army force structure, required equipment and placed additional demands on the Army’s inventories. A 2008 Army study of lessons learned from the deployment of five Army National Guard brigade combat teams found that equipment shortages adversely affected the deployment training of these units and increased the amount of time required to obtain collective training proficiency. This study noted that training on the equipment a unit will use in-theater is essential to ensure tasks, conditions, and standards are met during premobilization training. However, the Army has not been able to provide some equipment to units to accomplish their training either prior to mobilization or deployment. During our interviews with reserve component units that had returned from deployment within the past year, we found several instances where units did not train before they deployed with the same equipment that they used in-theater. As a result, they had to accomplish this training in-theater, effectively reducing their operational availability to theater commanders. For example: A National Guard transportation company did not have the opportunity to train before mobilization with the armored trucks its soldiers drove in-theater. According to unit officials, these models maneuver differently, and drivers need to practice driving the armored version. To accomplish this training, soldiers trained with armored versions upon arrival in-theater.
Officials from a National Guard engineering battalion told us that the unit did not have access to the heavy equipment transporter or cranes used in-theater when it was training at the mobilization station. Instead, soldiers trained with similar equipment before they deployed and then trained on some of the equipment upon arrival in-theater. National Guard officials from an aviation battalion told us that they did not have an opportunity to train on some equipment they used in-theater, including global positioning systems, communications systems, and intelligence systems. Instead, they trained on the equipment with the unit they were relieving after they arrived in-theater. An Army Reserve transportation company had to wait until it was in-theater to train on a pallet loading system. Over the long term, the Army faces challenges in meeting its equipping goals amid competing demands for resources. The National Guard and Reserve Equipment Report for Fiscal Year 2009 included estimates of the resources required for the Army National Guard to achieve the 100 percent equipping goal by 2019. The report estimated that the Army National Guard will require an additional $6 billion each year from 2014 to 2019 to achieve the 100 percent goal, not including the $36.8 billion included in the Future Years Defense Program from 2005 to 2013 to purchase equipment. In addition, this report estimated that the Army Reserve will need $1.6 billion each year over its 2009 to 2015 projected spending plan to reach its equipping and modernization goals. Despite the magnitude of the Army’s projected investment in its reserve components, until operational demand eases, it seems unlikely that the Army will be able to achieve DOD’s goal of a sustainable mobilization cycle for its reserve forces or fully implement the train-mobilize-deploy model. It is also not clear how long reserve component forces can sustain the current high pace of operations without difficulties in recruiting and retaining reserve component soldiers or compromising the viability of the all-volunteer citizen soldier reserve components, which are an important national resource critical for both domestic and overseas missions. The Army has estimated and budgeted for some costs that relate to the transition of its reserve components to an operational force, but the full cost of the transition remains uncertain and could vary widely from the initial estimates depending on Army decisions. The Army has decided to include the majority of funding needed for this effort in its fiscal year 2012 to 2017 projected spending plans, after costs are clarified by ongoing studies. However, the Army has not yet completed an implementation plan and funding strategy that fully describe the key tasks necessary for the transition, establish timelines for implementation, and identify metrics to measure progress. The Army has developed and updated a preliminary estimate of the costs that are not already included in its budget and Future Years Defense Program for the operational transition, but actual costs could vary widely from the estimates depending on Army decisions, such as which cost categories are essential for an operational reserve and the level of resources that will be required. In response to initiatives established by the Chief of Staff of the Army in April 2007, the Army formed a working group to develop a concept plan to complete six critical transition tasks.
These tasks include (1) adapting pre- and postmobilization training; (2) adapting forces that perform key functions such as training, equipping, construction, and maintenance; (3) providing Army incentives to retain citizen soldiers and support their families; (4) modifying reserve component premobilization equipping strategies; (5) updating human resource management processes; and (6) revising statutes, policies, and processes. As a part of this effort, the Army developed a preliminary cost estimate for those transition tasks that were not already included in the Army’s budget or program. The intent of the preliminary cost estimate was to determine the magnitude of the additional costs required to complete the transition in order to assess the feasibility of the effort and provide estimates that Army leadership could use in developing its projected spending plans for fiscal years 2010-2015. The working group estimated an incremental cost of about $28 billion for fiscal years 2010-2015 for the transition. However, the Army continued to examine the estimates for pre- and postmobilization validation, training support, and installation support. As a result of ongoing studies, the Army decided to report a cost range of between $24.4 billion and $28.1 billion, depending on implementation decisions. Of that total, the primary cost driver was increasing full-time support, estimated at $12.8 billion over the period. In 2009, the Army revised its estimates to incorporate updated assumptions for some cost categories. Specifically, the estimates increased costs for medical readiness to reflect expanding medical treatment to reservists throughout the phases of the force-generation cycle; decreased costs for full-time support, which, according to Army officials, will provide 80 percent of the strategic reserve requirement rather than 100 percent; increased costs for the Army Reserve homeland defense pilot program to include the cost of incentives for high-priority units; and increased premobilization training costs to incorporate updated cost factors for items such as participation rates, pay and allowances, and inflation. At the time of this report, the Army had not completed updates for other cost categories such as recruiting and retention, information technology, predeployment training equipment, new equipment training, second-destination transportation, premobilization training, and community services. The most recent Army estimates show a cost range from $12.7 billion to $27 billion over a 6-year period. Table 2 shows the cost categories and the amounts the Army estimated in 2008, the categories updated in 2009, and a summary incorporating the most recent Army estimates. According to Army officials involved in cost estimating, the transition costs could vary widely from the initial estimates for four key reasons. First, the Army has not yet defined which cost categories are essential for an operational reserve component, so costs could be added to or removed from the estimate. For example, the Army has not decided whether activities recommended by the Commission on National Guard and Reserves, such as providing a housing allowance for activated reservists and reimbursing certain reservists for travel, are essential for an operational reserve and should be included as transition costs. Estimated costs for implementing these recommendations were not included in the preliminary estimate or the 2009 updates and, if included, could significantly increase costs.
The Army has estimated that providing a housing allowance for activated reservists could add from $170 million to $400 million annually and that reimbursing travel expenses for certain reservists participating in individual training would add about $580 million annually. The Army has not estimated costs to implement other commission recommendations, such as the costs to increase the capacity of training institutions and increase staff support to the Employer Support of Guard and Reserves program. Second, the Army has not decided on the level of resources that will be required in other cost categories. For example, the Army has not established the specific personnel, training, and equipment levels its reserve components will require in each phase of the force-generation cycle. Third, several studies are underway to examine the level of resources required for full-time support, medical and dental benefits, and incentives changes for the operational role. If readiness requirements, full-time support, medical and dental benefits, or incentives are increased above current levels, costs for the transition to the operational role could increase. Finally, neither estimate includes any recurring or sustainment costs beyond 6 years; costs for incentives, policy, or legislative changes required for the operational role; or costs for implementing the human resource initiatives designed to increase flexibility for reservists transitioning to and from active duty (referred to as the “continuum of service initiatives”) that the Army has identified as critical to the transition. Moreover, costs that the Army considered part of other Army initiatives, such as increasing reserve component equipping levels or expanding the Army, were not included. According to Army officials, the fiscal year 2010 President’s Budget Request includes some funding that supports the reserves’ operational role, but the Army plans to include the majority of funding for transition costs in its fiscal year 2012-2017 projected spending plans after it obtains more information on the resources needed to support the operational role. Army officials identified $2.2 billion in the fiscal year 2010 President’s Budget Request that the Army considers as supporting the transition to the reserves’ operational role. Specifically, the fiscal year 2010 budget includes $123 million for community services (family support); $34 million for dental care to facilitate timely mobilization; $176 million for information technology, secure internet, and bandwidth; and $1.9 billion for reserve component recruiting and retention. In addition, Army officials stated that $779 million of the funds requested in DOD’s fiscal year 2009 supplemental request for overseas contingency operations will also contribute to the transition to an operational force. For example, Army officials identified funding requested for items such as installing secure internet capability for reserve component units, temporary full-time support staff, additional training days, and other costs as contributing to the transition. However, it is not clear from Army documents how much of the transition costs identified in the preliminary cost estimates are included in the fiscal year 2009 supplemental or 2010 budget request.
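To illustrate how ranges like those above arise, the following is a minimal sketch that rolls per-category (low, high) estimates up into an overall band. Only the full-time support figure ($12.8 billion over the period) and the two commission items (a housing allowance at $170 million to $400 million per year and travel reimbursement at about $580 million per year, spread here over the 6-year window) come from the report; the “other categories” entry is a hypothetical placeholder backed out so the base totals match the reported $24.4 billion to $28.1 billion range.

    # Rolling per-category cost ranges (in billions, fiscal years 2010-2015)
    # up into an overall band. Only "full-time support" and the commission
    # items use figures from the report; "other categories" is hypothetical.

    YEARS = 6

    base = {
        "full-time support": (12.8, 12.8),               # reported estimate
        "other categories (hypothetical)": (11.6, 15.3), # backed-out placeholder
    }

    # Commission recommendations not yet counted as transition costs:
    optional = {
        "housing allowance": (0.170 * YEARS, 0.400 * YEARS),    # $170M-$400M/yr
        "travel reimbursement": (0.580 * YEARS, 0.580 * YEARS), # ~$580M/yr
    }

    def roll_up(categories):
        low = sum(lo for lo, _ in categories.values())
        high = sum(hi for _, hi in categories.values())
        return low, high

    low, high = roll_up(base)
    print(f"Base estimate: ${low:.1f}B to ${high:.1f}B")          # $24.4B to $28.1B

    low, high = roll_up({**base, **optional})
    print(f"With commission items: ${low:.1f}B to ${high:.1f}B")  # $28.9B to $34.0B

Whether items like these belong in the estimate is exactly the kind of definitional decision the report identifies as a source of uncertainty.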
Although the Army stated in an information paper provided to Congress in February 2008 that its fiscal years 2010 to 2015 projected spending plans would capture the required capabilities to begin the formal transformation of the reserve components to an operational force, it has decided to defer including the majority of those resources until the fiscal years 2012 to 2017 projected spending plans. According to Army officials involved in estimating transition costs, the Army needed more information on the resources required for the reserve components to meet operational readiness requirements, such as levels of medical support and full-time support. Army officials noted that accurately estimating costs for the transition is challenging because specific information about the levels of personnel, equipment, training, and full-time support required of an operational reserve component in each phase of the Army’s force-generation cycle has not been developed. Army officials have stated that more specific metrics, such as the level of training proficiency required in each phase of the cycle, would help them to develop a more refined cost estimate for the transition. In February 2008, the Army formed a temporary task force to develop a comprehensive, coordinated implementation plan to transition the Army’s reserve components from a strategic reserve to an operational force. At the time of this report, the task force had developed a draft implementation plan that identifies some of the key tasks, an approximate 10-year timeline to complete transition tasks and incorporate associated costs into the base budget, and some measures of success. According to Army officials, the Army is awaiting agreement on this plan, as well as the results of several ongoing studies, before it incorporates any additional transition costs into the fiscal year 2012 budget and program. In the meantime, the Army continues to utilize its reserve components as an operational force without a complete and approved implementation plan that clearly defines what tasks and costs are essential to the transition or a comprehensive funding strategy that identifies a timeline and funding sources for key transition tasks. According to DOD’s directive that governs managing the reserve components as an operational force, it is DOD policy that the reserve components shall be resourced to meet readiness requirements of federal missions and that resourcing plans shall ensure visibility to track resources from budget formulation, appropriation, and allocation through execution. Additionally, best practices for strategic planning have shown that effective and efficient operations require detailed plans outlining major implementation tasks, defined metrics and timelines to measure progress, a comprehensive and realistic funding strategy, and communication of key information to decision makers. However, at the time of this report, the task force had not yet identified specifics for key tasks such as adapting the training base and institutional support functions, identifying measures of success for all transition tasks (such as synchronizing training cycles, sustaining volunteerism, and implementing human resource initiatives), and developing a resourcing strategy. In addition, the draft implementation plan does not explain how other Army initiatives, such as increasing the Army’s end strength or transforming to the modular force, contribute to the overall goal of transitioning the reserve components to an operational force.
According to Army officials, the task force is scheduled to disband in September 2009, and it is not clear who will have responsibility for managing the implementation of the transition to the operational role and tracking progress over the long term. Without an approved implementation plan that fully describes the key tasks necessary for the transition, establishes timelines for implementation, and identifies metrics to measure progress, it will be difficult for the Army to gauge whether it is moving toward its goal of fully supporting the transition of the Army National Guard and Army Reserve to operational roles. Furthermore, Congress will continue to have only a partial view of the potentially substantial cost and time required to complete the transition. The deployment of National Guard units as a federal operational force has reduced their availability for domestic missions, but the effect on the states remains unclear because states have mitigated shortfalls through mutual support agreements and because requirements for some domestic missions, such as responding to large multistate events, remain undefined. In general, National Guard members may serve in only one duty status at a time. Unless they are activated under Title 10, Guard members remain under command and control of the state governors in either state active duty or Title 32 status. When National Guard members are activated for federally controlled Title 10 duty, their Title 32 status generally stops and then begins again when they are released from Title 10 active duty. Under the Army’s force-generation model as designed, there is the potential for units to be unavailable to state governors for 1 year, with 5 years between federal mobilizations. However, according to Army and state National Guard officials, the reality of the current operational environment is that National Guard units deploy more frequently and are unavailable to state governors for about 1 year every 3 years. For example, Washington’s brigade combat team deployed in 2008 after 3-1/2 years at home. The effect of the operational role on the National Guard’s domestic readiness remains unclear because states have taken steps to mitigate any known shortfalls and, as we have previously reported, DOD, the Department of Homeland Security, and the states have not defined requirements, readiness standards, and measures for the National Guard’s domestic missions that are likely to be conducted in Title 32 status. Since National Guard units have begun deploying for their federal missions, states have made plans to compensate for any shortfalls in availability of their Guard forces either by relying on other capabilities and resources within the state or by relying on assistance from other states obtained through mutual support arrangements. National Guard officials from all four states we visited reported that they routinely coordinate with other states and utilize mutual assistance agreements to ensure they can respond effectively to domestic requirements when state forces are deployed. For example, officials in Florida voiced a particular concern because a brigade combat team of more than 3,400 people would be deployed during the 2010 hurricane season. However, they noted that they routinely coordinate with other southeastern states to ensure that forces and capabilities that could be needed to respond to hurricanes are available within the region, and they anticipated being able to respond effectively.
In addition, according to Washington National Guard officials, while they have typically been able to assign domestic response missions to units that are outside their deployment window, this becomes increasingly difficult when a large percentage of the state’s forces are mobilized. At the time of our visit, the state had almost 50 percent of its forces mobilized. Similarly, Guard officials in Virginia told us that its brigade combat team, comprising about 54 percent of the state’s National Guard forces, will be deployed at the same time as the state’s aviation battalion, resulting in a large loss of forces and essential capabilities for domestic response missions. To mitigate this loss, Virginia National Guard officials stated they rely on mutual support arrangements with other states and cross-training of the state’s soldiers. In addition, state National Guard officials told us that they would have to rely on other states to provide support in the event of a catastrophic disaster regardless of the number of soldiers the state had mobilized for federal missions. The Army’s reserve components are likely to be used as an operational force supporting regular overseas rotations for the foreseeable future, and several studies and commissions have determined there is no viable alternative to the Army’s continued reliance on reservists. Although the Army has taken steps to modify its force structure and has adapted its personnel and equipping strategies for the operational role, heavy operational demands have hampered the Army’s efforts to implement the force-generation model as intended. For example, the Army has not established firm readiness requirements for an operational reserve component or fully incorporated the resources needed to support the operational role into its budget and projected spending plan. Although the Army continues to study key costs, incorporating the necessary resources into its budget and projected spending plan is needed to effectively implement the force-generation model and support the reserve components in their new role. Adapting the Army’s institutions and incorporating the resources needed to support the cyclical readiness of an operational reserve component into its base budget will be a long-term effort, estimated to take more than 10 years to complete. The implementation of these changes will span multiple administrations and Congresses and require many billions of dollars and, therefore, needs sound management controls to guide the effort and ensure success. The Army currently plans to request the majority of funding to complete the transition to an operational force in its fiscal year 2012-2017 budget; however, it has not finalized a cost estimate or a detailed implementation plan that identifies what specific requirements have been and remain to be filled. The lack of outcome-related metrics also hampers the Army’s ability to measure its progress towards fully operationalizing its reserve components and to justify the large expenditure of funds required to implement the transition. Until the Army adopts an implementation plan outlining its requirements for transitioning its reserve components to an operational force, identifying progress made to date, and detailing additional personnel and other resources required, DOD decision makers and Congress will not be in a sound position to determine the total costs to complete the transition and decide how best to allocate future funding.
Moreover, without effective management controls over these initiatives to help measure progress and to accomplish effective and efficient operations, the Army risks continued challenges in preparing ready units and providing reservists a sustainable balance between military and civilian careers, which, over time, could threaten the viability of the all-volunteer citizen soldier force. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: Finalize an implementation plan for transitioning its reserve components to the operational role that describes the key tasks necessary for the transition, assigns responsibility for these tasks, defines metrics for measuring success, and establishes timelines for full implementation. Complete a cost estimate for the transition that, at a minimum, should include a clear definition of what costs the Army does and does not consider to be related to the transition to an operational force; estimates for key cost drivers; identification of any uncertainties in the estimates due to pending changes to the reserve components’ force structure, personnel, training, and equipping strategies or other decisions that may affect costs; and updates to the plan as these decisions are made. Include the costs of the transition in the Army’s budget and Future Years Defense Program. The Assistant Secretary of Defense for Reserve Affairs provided written comments on a draft of this report. The department agreed with each of our recommendations. DOD’s comments are reprinted in their entirety in appendix II. DOD agreed with our recommendation that the Secretary of Defense direct the Secretary of the Army to finalize an implementation plan for transitioning its reserve components to the operational role. In its comments, it cited DOD Directive 1200.17, which directs the Secretaries of the military departments to manage their respective reserve components as an operational force such that they provide operational capabilities while maintaining strategic depth. However, this directive does not provide detailed direction on how the services should transition the reserve forces, and we believe that a detailed plan is necessary to ensure key tasks in managing the reserves as an operational force are completed. DOD also drew a distinction between managing the reserve components as an operational force and transitioning reserves to an operational force. In this report, we defined transitioning reserves to an operational force as implementing those steps necessary to adapt the Army’s institutions and resources to support the cyclical readiness requirements and implement the “train-mobilize-deploy” model. We believe that completing a detailed implementation plan that describes key tasks necessary for the transition, assigns responsibility for these tasks, defines metrics for measuring success, and establishes timelines for full implementation is an essential part of transitioning the reserve components to an operational force. DOD agreed with our recommendation that the Secretary of Defense direct the Secretary of the Army to complete a cost estimate for the transition that includes a definition of costs, estimates for key cost drivers, and areas of uncertainty, such as pending policy decisions, that may affect costs. However, the department did not describe the steps it will take to complete the estimate. We therefore believe the Secretary of Defense should provide specific direction and guidance as outlined in our recommendation.
DOD agreed with our recommendation that the Secretary of Defense direct the Secretary of the Army to include the costs of the transition in the Army's budget and Future Years Defense Program. In its comments, DOD noted that its published guidance, DOD Directive 1200.17, states that resourcing plans should ensure visibility to track resources from formulation, appropriation, and allocation through execution. However, as discussed in the report, the Army does not plan to include the majority of the estimated costs for transitioning its reserve components to an operational role in its budget until fiscal year 2012. Until the Army includes the resources required in its future spending plans, it will be hampered in its ability to transition its reserve components to the operational role. We are sending copies of this report to other appropriate congressional committees and the Secretary of Defense. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To conduct our work for this engagement, we analyzed data, reviewed documentation, and interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Under Secretary of Defense (Comptroller), the Office of the Assistant Secretary of Defense for Reserve Affairs, the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, Headquarters Department of the Army, U.S. Army Forces Command, First Army, the National Guard Bureau, the Army National Guard, the Office of the Chief of the Army Reserve, the U.S. Army Reserve Command, the RAND Corporation, and the Institute for Defense Analyses. We also reviewed documentation and interviewed officials from offices of National Guard Adjutants General in four case-study states: Florida, Missouri, Virginia, and Washington. These states were selected because they had a history of major disaster declarations; are geographically dispersed across the United States; have a brigade combat team presence or a Chemical, Biological, Radiological, Nuclear, and high-yield Explosive (CBRNE) Enhanced Response Force Package (CERFP) team (which are units that are dual-tasked with domestic responsibilities) or both; face a range of homeland security risks; and present a range of population sizes. 
To identify the extent to which the Army has made progress but faces challenges in modifying the force structure, manning, and equipping strategies of its reserve components to meet the requirements of the operational role, we reviewed prior GAO work, reports of the Commission on the National Guard and Reserves, reports to Congress on related initiatives and issues, current Army plans and policy documents, including the Army Campaign Plan, Army Structure Memorandums, Army Forces Command's concept plan for Army Initiative 4 (transition the reserve components to an operational force), Army Forces Command's 4 + 1 Army National Guard Brigade Combat Team Comprehensive Review, the National Guard and Reserve Equipment Report, DOD Directive 1200.17, Managing the Reserve Components as an Operational Force, and Headquarters Department of the Army Execution Order 150-18, Reserve Component Deployment Expeditionary Force Pre- and Post-Mobilization Training Strategy. We also reviewed Army data on actual and planned modular unit restructuring, total force structure changes, and the expected number of reserve component soldiers available each year at varying mobilization rates under the currently planned rotational force structures in order to assess changes made to the reserve components' force structure for the operational role. In addition, we reviewed Army National Guard and Army Reserve force-structure allowances, personnel end strength, and equipment on hand to assess the extent to which the Army and reserve components have made changes to more fully man and equip units for the operational role. Further, we incorporated information from surveys of a nonprobability sample of 24 Army National Guard or Army Reserve units, as well as follow-up interviews with officials from 15 of these units. We selected units of different types and sizes that had returned from deployments in the last 12 months. In addition, we chose the proportion of Army National Guard and Reserve units for our sample based on the proportion of mobilized forces from each of the components. The surveys and interviews addressed a range of training, equipment, and personnel issues. We supplemented this information by reviewing documents and interviewing officials from DOD, Army, National Guard Bureau, Army National Guard, Army Reserve, U.S. Army Forces Command, and First Army to discuss planned and ongoing policy and strategy changes for transitioning the reserve components to an operational force. Further, we incorporated information from interviews with officials from offices of National Guard Adjutants General in case-study states. To determine the extent to which the Army has estimated costs for the transition of the reserve components to an operational force and included them in its current budget and Future Years Defense Program, we reviewed DOD's fiscal year 2009 supplemental appropriations request and DOD's fiscal year 2009 and 2010 budget requests. We also examined the Army's cost estimates for operationalizing the reserve components, including Army Forces Command's concept plan for Army Initiative 4 (AI4)—transitioning the reserve components to an operational force—and a Center for Army Analysis cost-benefit analysis of the AI4 concept plan. 
In addition, we interviewed officials from DOD, the Army, Army Forces Command, the National Guard Bureau, the Army National Guard, and the Army Reserve in order to understand assumptions made in estimating the cost for transforming the reserve components to an operational force, to assess the extent to which those costs have been included in DOD's budget and Future Years Defense Program, and to identify whether the Army has an implementation plan that includes the full cost of the transition. To determine the effect of the National Guard's federal operational role on its availability to state governors for domestic missions, we reviewed relevant sections of Titles 10 and 32 of the U.S. Code, and DOD directives regarding management of the reserve components as an operational force and National Guard homeland defense activities. We also conducted interviews with the National Guard Bureau and offices of National Guard Adjutants General in the four states chosen for our case study concerning the possibility of conflicts between the states' National Guard requirements and Title 32 requirements related to the operational role of the National Guard. Further, our review of prior GAO work, along with the interviews with officials from the National Guard Bureau and case-study states, allowed us to assess whether the requirements of the National Guard's operational role may affect the availability or readiness of National Guard forces for domestic missions. We conducted this performance audit from July 2008 through July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, Margaret Morgan, Assistant Director; Melissa Blanco; Susan Ditto; Nicole Harms; Kevin Keith; Susan Mason; Charles Perdue; John Smale, Jr.; Suzanne Wren; and Matthew Young made key contributions to this report. Reserve Forces: Army Needs to Reevaluate Its Approach to Training and Mobilizing Reserve Component Forces. GAO-09-720. Washington, D.C.: July 17, 2009. Military Personnel: Reserve Compensation Has Increased Significantly and Is Likely to Rise Further as DOD and VA Prepare for the Implementation of Enhanced Educational Benefits. GAO-09-726R. Washington, D.C.: July 6, 2009. Military Personnel: Army Needs to Focus on Cost-Effective Use of Financial Incentives and Quality Standards in Managing Force Growth. GAO-09-256. Washington, D.C.: May 4, 2009. Global War on Terrorism: Reported Obligations for the Department of Defense. GAO-09-449R. Washington, D.C.: March 30, 2009. Military Training: Improvement Continues in DOD's Reporting on Sustainable Ranges, but Opportunities Exist to Improve Its Range Assessments and Comprehensive Plan. GAO-09-128R. Washington, D.C.: December 15, 2008. Force Structure: The Army Needs a Results-Oriented Plan to Equip and Staff Modular Forces and a Thorough Assessment of Their Capabilities. GAO-09-131. Washington, D.C.: November 14, 2008. Homeland Security: Enhanced National Guard Readiness for Civil Support Missions May Depend on DOD's Implementation of the 2008 National Defense Authorization Act. GAO-08-311. Washington, D.C.: April 16, 2008. 
Force Structure: Restructuring and Rebuilding the Army Will Cost Billions of Dollars for Equipment but the Total Cost Is Uncertain. GAO-08-669T. Washington, D.C.: April 10, 2008. Military Readiness: Impact of Current Operations and Actions Needed to Rebuild Readiness of U.S. Ground Forces. GAO-08-497T. Washington, D.C.: February 14, 2008. Force Structure: Need for Greater Transparency for the Army's Grow the Force Initiative Funding Plan. GAO-08-354R. Washington, D.C.: January 18, 2008. Force Structure: Better Management Controls Are Needed to Oversee the Army's Modular Force and Expansion Initiatives and Improve Accountability for Results. GAO-08-145. Washington, D.C.: December 2007. Defense Logistics: Army and Marine Corps Cannot Be Assured That Equipment Reset Strategies Will Sustain Equipment Availability While Meeting Ongoing Operational Requirements. GAO-07-814. Washington, D.C.: September 19, 2007. Guard and Reserve Personnel: Fiscal, Security, and Human Capital Challenges Should Be Considered in Developing a Revised Business Model for the Reserve Component. GAO-07-984. Washington, D.C.: June 20, 2007. Military Training: Actions Needed to More Fully Develop the Army's Strategy for Training Modular Brigades and Address Implementation Challenges. GAO-07-936. Washington, D.C.: August 6, 2007. Military Personnel: DOD Needs to Establish a Strategy and Improve Transparency over Reserve and National Guard Compensation to Manage Significant Growth in Cost. GAO-07-828. Washington, D.C.: June 20, 2007. Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007. Reserve Forces: Army National Guard and Army Reserve Readiness for 21st Century Challenges. GAO-06-1109T. Washington, D.C.: September 21, 2006. Military Personnel: DOD Needs Action Plan to Address Enlisted Personnel Recruitment and Retention Challenges. GAO-06-134. Washington, D.C.: November 17, 2005. Military Personnel: Top Management Attention Is Needed to Address Long-standing Problems with Determining Medical and Physical Fitness of the Reserve Force. GAO-06-105. Washington, D.C.: October 27, 2005. Reserve Forces: Army National Guard's Role, Organization, and Equipment Need to Be Reexamined. GAO-06-170T. Washington, D.C.: October 20, 2005. Reserve Forces: Plans Needed to Improve Army National Guard Equipment Readiness and Better Integrate Guard into Army Force Transformation Initiatives. GAO-06-111. Washington, D.C.: October 4, 2005. Reserve Forces: An Integrated Plan Is Needed to Address Army Reserve Personnel and Equipment Shortages. GAO-05-660. Washington, D.C.: July 12, 2005. Reserve Forces: Actions Needed to Better Prepare the National Guard for Future Overseas and Domestic Missions. GAO-05-21. Washington, D.C.: November 10, 2004. Reserve Forces: Observations on Recent National Guard Use in Overseas and Homeland Missions and Future Challenges. GAO-04-670T. Washington, D.C.: April 29, 2004.
Since September 11, 2001, the Army has heavily used its reserve components—the Army National Guard and Army Reserve—for ongoing operations even though they were envisioned and resourced to be strategic reserves. A congressional commission, the Department of Defense (DOD), and the Army have concluded the Army will need to continue to use its reserve components as an operational force. The transition will require changes to force structure as well as manning and equipping strategies that could cost billions of dollars. The 2009 Defense Authorization Act directed GAO to study this transition. This report provides additional information on (1) progress and challenges the Army faces, (2) the extent to which the Army has estimated costs for the transition and included them in its projected spending plans, and (3) the effect of the operational role on the Guard's availability to state governors for domestic missions. GAO examined planning, policy, and budget documents and relevant sections of Titles 10 and 32 of the U.S. Code and met with DOD, Army, reserve component, and state officials. The Army is changing the organization and missions of some of its reserve units to provide more operational forces and is increasing their personnel and equipment, but it faces challenges in achieving the predictable and sustainable mobilization cycle envisioned for an operational force, primarily due to the high pace of operations. The Army is reorganizing its reserve units to match their active counterparts, is changing the missions of some units, has made plans to add over 9,000 personnel by 2013, and has requested almost $23 billion for reserve equipment since 2003. To guide the transition, DOD has established principles and policies, such as a 1-year limit on reserve mobilizations, and set a goal of providing reservists 5 years between mobilizations. However, heavy operational demands have meant that many reservists have had significantly less than 5 years between mobilizations. To make the most of the limited mobilization time available, DOD directed the services to provide sufficient resources so that reserve forces are nearly ready to deploy before mobilization. In the past, reserve component forces often required significant time after mobilization to prepare individuals and units for deployment. However, the Army continues to need to improve readiness after mobilization, for example, by addressing medical and dental issues or by transferring personnel and equipment from nondeployed units to fill shortfalls. Until demand eases, it seems unlikely that the Army will be able to achieve the mobilization cycle it initially envisioned for the reserves. The Army developed initial cost estimates for transitioning its reserve components to an operational role but has not budgeted for most of the costs it identified. A 2008 estimate identified costs of about $24 billion over a 6-year period from 2010 to 2015 to increase full-time support personnel, training days, recruiting and retention incentives, and installation support, among others. However, because the Army has not yet established the specific equipping, manning, and training levels required of an operational reserve, it is difficult to assess the estimate's validity. The Army established a task force to develop an implementation plan for the transition, and Army leadership is currently reviewing a draft plan and awaiting the results of other studies, such as a review of full-time support needs. 
However, pending the results of these studies and agreement on an implementation plan, the Army does not expect to budget for such costs until 2012. Best practices have shown that effective and efficient operations require detailed plans outlining major implementation tasks, metrics and timelines to measure success, and a comprehensive and realistic funding strategy. Until the Army finalizes an implementation plan, fully estimates the transition costs, and includes these costs in its projected spending plans, it will be difficult to assess the Army's progress in transitioning its reserve components to a sustainable operational force. The operational role has reduced the Guard's availability for domestic missions, but the effect on the states remains unclear because states mitigate shortfalls with mutual support agreements, and requirements for some domestic missions remain undefined.
Salmonella and Campylobacter are bacteria that can cause disease in humans and animals. More than 2,500 different types of Salmonella—known as serotypes—exist, and 17 different types of Campylobacter—known as species—exist. Salmonella live in the intestinal tracts of humans and animals, while Campylobacter live in the intestinal tracts of animals. Some serotypes cause illness in humans or in animals. According to CDC officials, Salmonella Enteritidis is a common serotype frequently associated with poultry. Salmonella Enteritidis causes the most human illnesses among Salmonella serotypes, according to CDC's 2011 Salmonella annual report. For Campylobacter, most human illness is caused by one species of the pathogen, called Campylobacter jejuni. Foodborne illness occurs when bacteria or other harmful substances are ingested. Poultry is an important source of human Salmonella and Campylobacter infections, but Salmonella and Campylobacter transmission is not limited to poultry products. Contact with infected animals and consumption of contaminated water and foods, including milk, eggs, and produce, can also transmit the bacteria to humans. According to CDC's website, typical symptoms of illness from Salmonella or Campylobacter are abdominal cramps, fever, and diarrhea. Salmonella infections are more likely than Campylobacter infections to lead to bloodstream infections, particularly for infants, the elderly, and people with weak immune systems. Salmonella bloodstream infections can lead to life-threatening conditions including meningitis. In rare cases, both Salmonella and Campylobacter infections can result in long-term secondary complications such as reactive arthritis, according to CDC's website. To improve its approach to food safety, FSIS has moved to an increasingly science-based, data-driven, risk-based approach. In 1996, FSIS adopted the risk-based Pathogen Reduction: Hazard Analysis and Critical Control Point (HACCP) regulations (61 Fed. Reg. 38806 (July 25, 1996)). Under this approach, each slaughter plant—rather than federal inspectors—is responsible for (1) identifying food safety hazards, such as fecal material, that are reasonably likely to occur and (2) establishing controls that prevent or reduce these hazards. As part of this approach, slaughter plants must develop plans that identify the point (known as the critical control point) where they will take steps to prevent, eliminate, or reduce each hazard identified. Under FSIS regulations, all plants must also have site-specific standard operating procedures for sanitation. FSIS inspectors at slaughter plants routinely check records to verify a plant's compliance with those procedures. FSIS also has a verification testing program in which FSIS inspectors at slaughter plants collect samples of poultry products to determine whether a pathogen is present; the proportion of samples testing positive is known as the positive rate. Test results help FSIS inspectors to verify that plant sanitation procedures are working and to identify and assist plants whose process controls may be underperforming. FSIS coordinates with numerous federal agencies, state agencies, and local entities to help ensure a safe poultry product from the farm to the consumer (known as the farm-to-table continuum—see fig. 1). For example, on the farm, USDA's Animal and Plant Health Inspection Service (APHIS) administers voluntary programs to evaluate and certify that poultry are free of certain diseases. FSIS coordinates with APHIS to share information when investigating foodborne illnesses. 
FSIS also works with the Department of Health and Human Services' Food and Drug Administration (FDA) to, for example, approve chemical interventions that slaughter plants use to reduce or eliminate Salmonella and Campylobacter. In addition, FSIS works with state and local health departments to respond to foodborne illness outbreaks, among other things. USDA's actions in recent years to reduce Salmonella and Campylobacter contamination in poultry products have largely focused on reducing Salmonella, with the agency addressing Campylobacter more recently and to a lesser degree. Since 2006, USDA's FSIS has taken a number of actions intended to reduce Salmonella contamination in poultry products. These actions include revising existing Salmonella standards; taking steps to develop standards for products that lack them; promoting enhanced information sharing with industry; publicizing noncompliance for chicken slaughter plants not meeting the agency's Salmonella standards; developing a Salmonella Action Plan; and finalizing a rule to modernize the poultry slaughter inspection process. In March 2011, FSIS finalized revisions to the agency's Salmonella standards for young chicken and turkey carcasses to further limit the amount of allowable contamination. Specifically, the revised standards set the expectation that no more than 7.5 percent of a plant's young chicken carcasses (reduced from 20 percent) and 1.7 percent of a plant's young turkey carcasses (reduced from 19.6 percent) will be contaminated with Salmonella. When FSIS revised the standards, it also made corresponding changes to the maximum number of samples allowed to test positive for contamination during agency testing; the results from the agency's testing are used to determine whether a plant is in compliance with the agency's standards. Specifically, for young chicken carcasses, FSIS now allows a maximum of 5 out of 51 samples collected by the agency to test positive for Salmonella, compared with the previous allowed positive rate of 12 out of 51. Similarly, for young turkey carcasses, FSIS now allows a maximum positive rate of 4 out of 56 samples tested by the agency, compared with the previous allowed positive rate of 13 out of 56. To help industry meet these revised standards, FSIS issued an update to its compliance guidelines for controlling Salmonella and Campylobacter. Specifically, the guidelines articulate how industry can meet FSIS expectations regarding control of food safety hazards, including control points for Salmonella and Campylobacter. To verify a chicken slaughter plant's compliance with the agency's Salmonella standard, FSIS inspectors collect and test one young chicken carcass per day for 51 consecutive days and determine whether the number of positive results in that sample set of 51 is above the maximum allowed. For turkey slaughter plants, FSIS inspectors collect and test one young turkey carcass per day for 56 consecutive days and likewise determine whether the number of positive results is above the maximum allowed. In addition, the agency has recently begun taking steps to strengthen its Salmonella standards for ground poultry. More specifically, in December 2012, FSIS announced that it planned to perform additional Salmonella sampling and testing of ground chicken and ground turkey products as part of an agency effort to revise the existing standards for those products—which currently set the expectation that no more than 44.6 percent of a plant's ground chicken and 49.9 percent of a plant's ground turkey will be contaminated with Salmonella. 
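The carcass standards above pair a percentage (7.5 percent for young chickens, 1.7 percent for young turkeys) with a maximum allowable number of positives per sample set (5 of 51 and 4 of 56, respectively). To make the arithmetic behind these sample sets concrete, the short sketch below computes the probability that a plant stays within the maximum allowable positives, assuming independent samples and a binomial model. The binomial model is our assumption for illustration only; the report does not describe the statistical method FSIS used to derive the maximums.

```python
from math import comb

def prob_pass(n_samples: int, max_positives: int, true_rate: float) -> float:
    """Probability that a plant with the given true contamination rate
    has at most max_positives positive results in a set of n_samples,
    assuming independent samples (binomial model)."""
    return sum(
        comb(n_samples, k) * true_rate**k * (1 - true_rate)**(n_samples - k)
        for k in range(max_positives + 1)
    )

# Sample-set parameters from the revised standards described above
print(prob_pass(51, 5, 0.075))   # young chicken: 7.5% standard, max 5 of 51
print(prob_pass(56, 4, 0.017))   # young turkey: 1.7% standard, max 4 of 56
```

Under these assumptions, a plant whose true contamination rate exactly equals the standard passes most, but not all, sample sets, while plants with higher contamination rates fail with increasing frequency.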
FSIS officials told us the agency has begun performing additional Salmonella sampling and testing of ground chicken and ground turkey products. According to an April 2014 Federal Register notice, FSIS intends to announce and request comment in the Federal Register on the proposed revisions to the ground poultry standards before the end of fiscal year 2014. (See table 1 for details regarding FSIS Salmonella standards for poultry products.) Since 2012, FSIS has taken steps to develop standards for poultry parts and mechanically separated poultry, for which there currently are no standards. For example, FSIS intends to develop a Salmonella standard for raw chicken parts. According to FSIS officials, the agency is determining which chicken parts, such as thighs, breasts, and wings, will be subject to the new standard and intends to announce that information in a future Federal Register notice. Moreover, in June 2013, FSIS began additional Salmonella sampling to determine the estimated prevalence of Salmonella in mechanically separated poultry products. FSIS announced in April 2014 that it would continue to test samples of mechanically separated poultry products for Salmonella and analyze the results as part of the agency's efforts to develop Salmonella standards for those products. According to the agency's December 2013 Salmonella Action Plan, FSIS plans to complete a risk assessment, which will be used to estimate the impact of this action, including any potential change in human health, and develop new standards for poultry parts and mechanically separated poultry by the end of fiscal year 2014. To promote enhanced information sharing, FSIS established the Salmonella Initiative Program, under which participating plants agreed to collect their own product samples every day during each shift, test the samples for common foodborne pathogens including Salmonella and Campylobacter, and then share these data with FSIS. In July 2011, FSIS announced that plants operating under regulatory waivers previously granted by the agency would have to join the Salmonella Initiative Program in order to continue operating under those waivers. In September 2013, FSIS completed initial analyses of program data showing that participating plants have maintained consistent process controls to keep levels of Salmonella within the standards while operating under waivers from the time and temperature chilling requirements. FSIS reported that, as of June 2014, 158 of 281 (56 percent) poultry slaughter plants were participating in the program. In February 2006, to encourage industry to produce safer poultry products, FSIS announced in a Federal Register notice (71 Fed. Reg. 9772 (Feb. 27, 2006)) that completed sample set results would be recorded in one of three categories based on the plants' ability to meet existing Salmonella standards. According to an FSIS document on inspection methods, plants in Category 1 are said to be demonstrating consistent process controls to meet the existing Salmonella standards, while plants in Categories 2 and 3 are not. 
Specifically, FSIS defines the three categories as follows: Category 1 plants have results from their two most recently completed sample sets that are at or below half of the existing standard, meaning, for example, that for young chicken carcasses, this would be no more than 2 positive samples out of a set of 51; Category 2 plants have results from their most recently completed sample set that are higher than half of the existing standard but do not exceed the standard, meaning, for example, that for young chicken carcasses, this would be 3 to 5 positive samples out of a set of 51; and Category 3 plants have results from their most recently completed sample set that exceed the existing standard for Salmonella, meaning, for example, that for young chicken carcasses, this would be 6 or more positive samples out of 51. In the February 2006 Federal Register notice, FSIS also announced that it would use the categories to determine the frequency of the agency's Salmonella verification testing. According to FSIS's sampling methodology, Category 1 plants are tested at least once every 2 years; Category 2 plants are scheduled for testing at least once a year until their category changes (e.g., a plant improves to Category 1); and Category 3 plants are scheduled for testing as close to continuously as possible until they produce better results and their category changes. In addition, in a January 2008 Federal Register notice, FSIS announced that it planned to begin publishing on the agency's website the results for young chicken slaughter plants that were inconsistent in complying with existing Salmonella standards, stating that it believed making such information available to the public would provide an incentive to industry to attain "consistent, good control for Salmonella." The agency began publishing this information on its website in March 2008. At present, FSIS publishes the names of young chicken slaughter plants that are in Category 3. For example, in August 2014, FSIS published on its website the names of seven young chicken slaughter plants found to be in Category 3. Under FSIS policy, the names of young turkey slaughter plants not in compliance with the agency's young turkey standard can also be published on FSIS's website. As of August 2014, no turkey plants were posted on the website because no turkey plants were in Category 3 based on recent testing. 
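As a rough illustration of how these category definitions operate for young chicken carcasses, the function below maps recent sample-set results to a category. This is a hypothetical sketch based only on the definitions described above, not FSIS code, and FSIS's actual procedure may handle edge cases differently.

```python
def salmonella_category(recent_sets: list[int], max_allowed: int = 5) -> int:
    """Classify a plant under the three-category scheme described above,
    using positive counts from its most recently completed sample sets
    (newest first). Defaults reflect young chicken carcasses (max 5 of 51)."""
    latest = recent_sets[0]
    if latest > max_allowed:
        return 3  # most recent set exceeds the standard
    if len(recent_sets) >= 2 and all(s <= max_allowed / 2 for s in recent_sets[:2]):
        return 1  # two most recent sets at or below half the standard
    return 2      # within the standard but not qualifying for Category 1

# Examples mirroring the young chicken thresholds in the text
print(salmonella_category([2, 1]))  # Category 1: both sets <= 2 of 51
print(salmonella_category([4, 2]))  # Category 2: latest set between 3 and 5
print(salmonella_category([6, 3]))  # Category 3: latest set exceeds 5
```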
FSIS's Salmonella Action Plan, issued in December 2013, details a priority list of actions the agency plans to undertake as part of its continued efforts to address Salmonella in poultry products. According to the plan, FSIS intends to, among other things, (1) conduct food safety assessments at plants that produce ground and mechanically separated poultry products by the end of fiscal year 2014; (2) consider modifying the way it publishes the category status of poultry plants, such as by publishing on its website the names of plants in Category 1 and Category 2, in addition to those in Category 3, by the end of fiscal year 2014; (3) develop new enforcement strategies that take into account plants' compliance history and Salmonella category under the standards, among other things, which, according to the plan, will take over a year to accomplish; and (4) host a meeting with APHIS and other stakeholders to focus on poultry farm practices that could help decrease Salmonella contamination on FSIS-regulated poultry products and use the information gathered to inform best practice guidelines, an effort that requires the completion of additional actions prior to the meeting. On August 21, 2014, FSIS published its final rule to amend the agency's poultry slaughter inspection process. Young chicken and turkey slaughter plants may choose to operate under the new poultry inspection system included in the rule or may continue to operate under the current inspection system. According to FSIS's final rule, modernizing poultry slaughter inspections will play a role in reducing Salmonella and other poultry pathogen contamination. Currently, FSIS inspectors perform a variety of duties at positions on and off the slaughter line. For example, FSIS inspectors positioned on the line conduct inspections of every poultry carcass and its parts for defects, and inspectors working off the line move freely about the plant and collect samples of carcasses to test for pathogens (e.g., Salmonella); perform food safety checks, such as verifying that carcasses are free of fecal material; and ensure that carcasses comply with the agency's food quality standards for defects such as bruises on chickens, which do not affect food safety. For those poultry slaughter plants that choose to operate under the new poultry inspection system, plant employees would assume more responsibility for conducting the types of activities currently performed by FSIS inspectors on the slaughter line. For example, plant personnel would be responsible for identifying defects in carcasses, taking corrective actions if the defects can be corrected through trimming, and condemning unacceptable carcasses as part of on-line inspections. According to FSIS's final rule, these changes will allow FSIS to assign fewer inspectors for on-line inspections and allow inspectors to conduct more off-line inspections in plants operating under the new poultry inspection system. Moreover, according to the rule, the new poultry inspection system may facilitate reduction of pathogen levels in poultry by permitting FSIS to conduct more food-safety-related off-line inspection activities and allowing better use of FSIS inspection resources, among other things. In particular, in July 2014, FSIS issued an updated risk assessment that estimated that there would be a reduction of 3,980 Salmonella illnesses attributable to young chicken and turkey plants combined. FSIS has also taken other actions that are intended to help reduce Salmonella contamination in poultry products. 
For example, in light of several Salmonella outbreaks associated with the consumption of ground turkey products, FSIS announced in a December 2012 Federal Register notice that it was requiring all poultry plants producing ground or mechanically separated chicken and turkey products to reassess their HACCP plans. In May 2013, FSIS instructed its inspectors at plants producing these types of products to verify that the plants had reassessed their HACCP plans. In addition, FSIS announced that it would expand its Salmonella verification testing program—which previously was limited to ground chicken and ground turkey—to include plants producing all forms of ground and mechanically separated chicken and turkey products. FSIS began taking actions specifically aimed at reducing Campylobacter contamination in poultry products in 2011. In July 2011, FSIS implemented the first standards that define the amount of allowable contamination from Campylobacter for young chicken and turkey carcasses. Under the new standards, FSIS set the expectation that no more than 10.4 percent of a plant's young chicken carcasses and 0.79 percent of a plant's young turkey carcasses will be contaminated with Campylobacter. In addition, FSIS established maximum numbers of positive samples allowed during agency testing: 8 out of 51 samples for young chicken carcasses and 3 out of 56 samples for young turkey carcasses (see table 2). FSIS is also considering other actions to address Campylobacter contamination in poultry products. According to FSIS officials, the agency plans to announce and request comment on a proposed standard for Campylobacter for raw chicken parts in the Federal Register by the end of fiscal year 2014. Furthermore, according to a May 2014 letter from the Secretary of USDA, FSIS is investigating the appropriateness of Campylobacter standards for ground and mechanically separated poultry products. FSIS did not take actions specifically aimed at reducing Campylobacter contamination before 2011, in part, because the agency believed its actions to reduce Salmonella would also reduce other pathogens such as Campylobacter. Moreover, according to documents from FSIS and the National Advisory Committee on Microbiological Criteria for Foods, it was not until 2005 that the agency began using a less time-consuming and more reliable sampling method for determining the presence of Campylobacter on poultry products. In addition, the agency was concerned about the observed increases in Salmonella rates, such as an increase in the percentage of positive Salmonella results during agency testing from 11.5 percent in 2002 to 16.3 percent in 2005. To help assess the effects of the agency's actions on the incidence of human illnesses from Salmonella and Campylobacter contamination in poultry products, FSIS has developed performance measures and conducted research, but these efforts fall short in two ways: (1) the agency did not establish performance measures for certain commonly consumed poultry products or Campylobacter and (2) the agency has relied on data with limitations that affect their usefulness. First, consistent with requirements of the Government Performance and Results Act of 1993 (GPRA) to measure performance toward the achievement of agency strategic goals, FSIS established a performance measure for Salmonella contamination in young chicken carcasses. 
We previously concluded that performance measures, which typically have numerical targets, are important management tools that help an agency monitor and report progress toward its goals. FSIS's Salmonella measure indicates whether agency actions to ensure compliance with the applicable standard are helping the agency meet its goal. The agency's goal is to maximize domestic and international compliance with food safety policies, which aligns with USDA's objective of protecting public health by ensuring food is safe. Specifically, the measure tracks the percentage of poultry slaughter plants complying with the Salmonella standard for young chicken carcasses; the agency then compares this percentage to a target level of compliance to monitor progress in meeting its goal. For example, FSIS set the fiscal year 2013 target at 91 percent compliance and reported that 90 percent of plants complied with the standard. According to FSIS's 2011-2016 strategic plan and a National Academy of Sciences report, verification testing data available on the exposure of the public to raw poultry products contaminated with Salmonella, including young chicken carcasses, provide a reasonable proxy for the relative risk associated with those products. However, the agency has not established similar performance measures and targets for other types of commonly consumed poultry products for which it has established Salmonella standards—that is, young turkey carcasses, ground chicken, and ground turkey. The agency has not established performance measures even though the standards for young turkey carcasses have been in place since 2005, with a revision in 2011, and the standards for ground chicken and ground turkey have been in place for more than a decade. The majority of poultry that industry markets and Americans consume is ready-to-eat and further-processed products, such as ground poultry and poultry parts, according to a 2014 United States International Trade Commission report. Salmonella-contaminated ground poultry and poultry parts put consumers at greater risk of becoming ill than whole poultry carcasses because these products generally are more likely to be contaminated. As previously noted, FSIS is still developing a Salmonella standard for raw chicken parts and therefore has not developed corresponding performance measures and targets. Moreover, FSIS has not established performance measures for Campylobacter contamination, even though it implemented standards for this pathogen in 2011. According to FSIS officials, at the time that the 2011 to 2016 strategic plan was written, FSIS had only recently implemented the first-ever Campylobacter standards for poultry, and so the agency did not have a basis to create a performance measure for that pathogen. FSIS officials also stated that, in the absence of performance measures, the agency routinely collects and reviews data on individual plants' compliance with Salmonella and Campylobacter standards and reports quarterly the results of pathogen testing on its website. According to FSIS officials, these data are sent to FSIS leadership, and trends are highlighted, as appropriate. FSIS officials agreed that performance measures and targets should be developed for additional poultry products and stated that the agency will review its current strategic plan to determine what further updates are needed, namely new performance measures for ground chicken and ground turkey. 
As previously mentioned, FSIS established Salmonella standards for ground chicken and ground turkey in 1996, and the agency has yet to establish corresponding performance measures and targets. According to FSIS officials, it is not appropriate for the agency to set a performance measure or target for Salmonella contamination in ground chicken or ground turkey until it finishes revising the standards for those products, which it expects to do by the end of fiscal year 2014. In addition, for Campylobacter contamination in young chicken and turkey carcasses, FSIS officials told us that the agency has not developed performance measures in part because it is still in the process of developing plant categories, similar to those for Salmonella, to determine levels of plant compliance with Campylobacter standards. The agency will publish Campylobacter categories before December 31, 2014, according to FSIS's fiscal year 2014 annual performance plan. FSIS officials told us that the agency has not developed a performance measure for determining the agency's success in controlling Salmonella in young turkey carcasses because testing data have shown that young turkey plants where the agency has routinely collected and reviewed data are meeting the standard. However, FSIS stated in its fourth quarter calendar year 2013 report on pathogen testing that 2 out of 35 (about 6 percent) turkey slaughter plants were in Category 3, meaning these plants did not meet the standard. In July 2014, FSIS officials told us these plants are no longer in Category 3 based on recent testing. However, without performance measures and targets for compliance with standards for these pathogens in commonly consumed poultry products, FSIS cannot quantitatively gauge its progress in assessing the effects of its actions related to these standards toward meeting the agency's goal of maximizing domestic compliance with food safety policies and, ultimately, protecting public health. Performance measures and targets are reported in the agency's strategic and annual plans but, in the absence of such measures and targets, performance information, such as the trends supplied to FSIS leadership, is not being publicly reported. Without publicly reporting such information, FSIS loses the opportunity to enhance transparency by providing this information to the public and Congress about its progress in meeting this important goal, potentially limiting oversight and accountability. In addition to the performance measure for young chicken carcasses, FSIS established, in 2009, an "all-illness" performance measure to evaluate its efforts to reduce foodborne human illness resulting from consumption of FSIS-regulated products contaminated with Salmonella and two other pathogens. According to FSIS officials, the all-illness measure is an estimate of the number of illnesses from these pathogens resulting from the consumption of all FSIS-regulated products—including meat, poultry, and processed egg products. The officials said FSIS develops this estimate using CDC laboratory-confirmed illness data, CDC outbreak data, U.S. census data, and foodborne illness estimation data from a CDC peer-reviewed publication. The all-illness measure has an associated target level of performance, set by the agency, for the maximum number of human illnesses from these pathogens attributed to FSIS-regulated products. For example, FSIS set the fiscal year 2013 target at 394,770 human illnesses from the three pathogens, including Salmonella. 
USDA reported in 2013 that FSIS did not meet its target; there were 427,171 human illnesses. As part of the all-illness measure, FSIS also set individual targets for the maximum number of human illnesses from each of the three pathogens attributed to FSIS-regulated products, including a Salmonella target. The annual report did not break out the results for each of the three pathogens, but the agency's fiscal year 2013 year-in-review report showed that FSIS did not meet the Salmonella target specifically. According to FSIS officials, historically the agency has not met the Salmonella target for the all-illness measure. However, recent actions FSIS has taken and plans to take are intended to address human illnesses from Salmonella attributed to FSIS-regulated products, according to the agency's Salmonella Action Plan. According to USDA's National Advisory Committee on Microbiological Criteria for Foods, which provides impartial, scientific advice to federal food safety agencies, the impact of FSIS's regulatory activities on the incidence of human illnesses from pathogens cannot be measured directly because of limitations in the foodborne illness attribution data the agency uses. For example, the outbreak data available from CDC depend on voluntary reporting of illnesses and do not always identify the food product that caused an outbreak; we have previously found limitations in these data, including delayed reporting and incompleteness. According to FSIS officials, as with most performance measures that seek to evaluate human health outcomes, the all-illness measure is subject to limitations based on the availability of data and the challenges in capturing accurate foodborne illness attribution. In commenting on this report, USDA agreed that there are limitations in the data but stated the agency uses the best available data. FSIS is considering including Campylobacter in the all-illness measure as well, but there is no broadly accepted estimate for the proportion of illnesses attributed to Campylobacter, according to FSIS officials. In 2011, FSIS, CDC, and FDA formed an interagency group to improve food safety data and coordinate analyses. Some of the group's efforts involve identifying links between contamination of poultry products and human illness, among other things. For example, FSIS, CDC, and FDA are working collaboratively to perform a detailed analysis of data on Salmonella outbreaks to better estimate the proportion of human illnesses caused by different food sources, including poultry products. According to FSIS officials, this effort will assist in better estimating the proportion of Salmonella illnesses associated with poultry products, and the agencies plan to present results from ongoing projects in 2015. In addition to performance measures, FSIS conducted research to assess the effects of its actions on the incidence of human illnesses from the consumption of poultry products, but this research also has data limitations. For example, in February 2012, after revising its Salmonella standards for young chicken carcasses, FSIS completed research evaluating whether reductions of Salmonella contamination from young chicken carcasses across the industry would offer public benefits in the form of reduced human illness rates. 
However, in attempting to evaluate the effect of reductions of Salmonella contamination across the industry, FSIS relied on its verification testing data from individual plants, which the agency later concluded in April 2012 cannot be used to estimate prevalence across the industry because the agency does not randomly select plants for verification testing, among other things. FSIS also used CDC outbreak and individual illness case data in its research to identify the number of human illnesses from Salmonella contamination but, as we mentioned above, these data do not always distinguish illnesses derived from poultry products specifically. While FSIS's research efforts are a positive step, data limitations make it difficult to directly correlate agency actions to reductions in the rates of human illness from poultry products contaminated with Salmonella. To help address these limitations, FSIS has taken steps, such as developing a statistical model in 2012, to estimate the reduction in human illnesses from revised Salmonella and newly created Campylobacter standards. More importantly, in 2013, FSIS created a new testing approach for ground poultry to estimate prevalence of Salmonella, among other things. According to FSIS officials, the new verification testing program includes continuous weekly sampling and testing at all poultry slaughter plants producing raw ground poultry and increases the sensitivity of analysis so that lower levels of contamination can be detected. FSIS officials told us that the agency plans to expand this approach to other poultry products. According to FSIS's working group on prevalence estimates, testing for prevalence is necessary in order for the agency to effectively measure or understand how contamination rates change over time; set standards; develop targeted interventions; and measure the agency's performance toward meeting FSIS's long-term strategic goals. The new testing approach using continuous sampling affords a more direct measure of prevalence across the industry, according to FSIS officials. We identified several challenges that FSIS faces in reducing Salmonella and Campylobacter contamination in poultry products and one potential challenge. These include limited control outside of slaughter plants, pathogens not designated as hazards, limited enforcement authority, absence of mandatory recall authority, outdated or nonexistent standards, insufficient prevalence estimates, the complex nature of Salmonella, and limited Campylobacter research and testing. We identified these challenges and the potential challenge based on our analyses and the views of representatives of 11 stakeholder groups, a number of academic researchers, and FSIS officials; the stakeholder groups representing consumers and those representing industry generally had differing views. FSIS faces a challenge in reducing Salmonella and Campylobacter contamination in poultry products outside of slaughter plants because the agency does not have regulatory jurisdiction over (1) farm practices to reduce contamination in live poultry before they reach a plant or (2) some factors that may affect contamination of poultry products once they leave a plant. According to FSIS officials, they would like to address on-the-farm problems, but the agency is limited in its activities related to farms because its jurisdiction starts when products enter slaughter plants. 
In 2010, FSIS published updated compliance guidelines that detail, among other things, several on-farm practices to reduce Salmonella and Campylobacter in live poultry. For example, the guidelines recommend that farms test water to make sure it is free of pathogens and ensure water stations are free of leaks. As of July 2014, FSIS's compliance guidelines do not discuss the effectiveness of each recommended practice to reduce pathogens in live poultry. Thus, FSIS has not provided complete information to the poultry industry about the potential benefits of adopting certain practices. In contrast, when the agency developed guidelines for on-farm practices for reducing E. coli O157:H7 in beef cattle, it described several practices and their effectiveness. In 2011, USDA's National Advisory Committee on Meat and Poultry Inspection recommended that (1) FSIS coordinate with APHIS, among other agencies, to develop best practices on the farm and (2) FSIS develop compliance guidelines for livestock and poultry producers, including information on the effectiveness of the practices in controlling pathogens. Representatives from two consumer groups told us that because the poultry industry is vertically integrated—meaning that individual poultry companies own or contract for all phases of production and processing—it is well suited to implement on-farm best practices to help ensure healthier birds prior to slaughter. As previously mentioned, according to FSIS's December 2013 Salmonella Action Plan, the agency will continue to work with industry to identify on-farm best practices, host a meeting with APHIS and other stakeholders to focus on on-farm practices that could help decrease Salmonella contamination on FSIS-regulated poultry products, and use the information gathered from these actions to inform future policies and compliance guidelines. FSIS officials told us that the agency is currently working with stakeholders such as FDA and APHIS to gather information about on-farm practices. However, even with the planned actions identified in the agency's Salmonella Action Plan, it remains unclear whether FSIS intends to incorporate information on the effectiveness of all practices in the guidelines as the National Advisory Committee on Meat and Poultry Inspection recommended. FSIS officials told us that the agency will publish another revision of the compliance guidelines by the end of calendar year 2014 but did not respond when asked directly about whether they would incorporate such information. Without providing information on the effectiveness of these practices in future guidelines, FSIS is not fully informing industry of the potential benefits of adopting them to encourage implementation of recommended practices. However, after poultry products leave a plant, FSIS has authority to ensure that the products are correctly labeled and packaged, but it does not have jurisdiction over some other factors that may affect contamination of poultry products. For example, cross-contamination from poultry products can occur at retail establishments, in restaurants, and in consumers' homes, according to a food safety researcher we interviewed. According to FSIS officials, the agency has been aggressive in educating consumers on the importance of safe handling of raw poultry products, such as through an advertising campaign and changes to the safe handling label. In 2014, FSIS proposed enhancing the safe food handling label for poultry products packaged for consumers to include updated information on proper handling. 
Federal regulations require plants to conduct a hazard analysis to determine food safety hazards "reasonably likely to occur" in the production process and identify the preventive measures the plant can apply to control those hazards. According to FSIS Directive 5000.6, Rev. 1, all plants must conduct a hazard analysis, that is, an evaluation by a plant of its operations to determine the food safety hazards specific to the plant's operations that, if not controlled, are reasonably likely to occur and to cause injury or illness. Some plants may not have a HACCP plan because they can support that there is not a food safety hazard that is reasonably likely to occur; instead, the plants would maintain a record of their hazard analysis for inspection purposes. In one of the outbreaks we reviewed, FSIS issued a "Notice of Intended Enforcement" action, which warns the plant before the initiation of a specific enforcement action, based on the company's inability to support why it had not designated Salmonella as a hazard reasonably likely to occur in its HACCP plan. FSIS officials we interviewed believe the agency should require plants to identify Salmonella and Campylobacter in their HACCP plans as hazards reasonably likely to occur. FSIS's final rule for modernizing poultry slaughter inspection requires plants to develop, implement, and maintain written procedures to prevent contamination by enteric pathogens, such as Salmonella and Campylobacter. FSIS faces a potential challenge in reducing Salmonella contamination in poultry products because, according to the agency and some stakeholder groups, its authority to enforce its Salmonella standards is limited for two reasons: (1) a federal court ruling and (2) the agency's decision not to classify Salmonella as an adulterant in raw poultry products. First, in 2000, a federal court ruled that FSIS could not withdraw inspectors, which would effectively shut down the plant, based solely on a plant's failure to meet Salmonella standards. A federal appeals court upheld the decision in 2001. Subsequently, the agency adopted the position that the court ruling did not affect its ability to use the standards as part of verifying a plant's sanitation and HACCP plans. For example, after a plant fails Salmonella testing for its first sample set, FSIS can require a reassessment of the plant's HACCP plan and then conduct a food safety assessment (evaluation of a plant's food safety system); conduct additional sampling; or issue a "Notice of Intended Enforcement" action, according to FSIS officials. FSIS can also condemn products that are contaminated with filth (or otherwise adulterated) or mislabeled, or it can condemn parts of products, and detain them so they cannot progress down the marketing chain. Even with these tools, representatives from four out of six consumer groups we interviewed told us that the agency does not have sufficient authority to ensure plants comply with FSIS's standards because FSIS cannot shut down plants when they fail Salmonella standards alone. Representatives of all four industry groups we interviewed disagreed and stated that FSIS has sufficient authority to ensure plants comply with standards because the agency has broad statutory authority and oversight. Second, because FSIS has not classified Salmonella as an adulterant in raw poultry products, products contaminated with this pathogen generally are permitted to enter commerce. 
However, according to FSIS officials, the agency can consider raw poultry products contaminated with Salmonella as adulterated on a case-by-case basis—for example, during a voluntary recall, as discussed later in this report. Representatives from five out of six of the consumer groups we interviewed said they believe that some serotypes of Salmonella should be declared an adulterant, such as those with specific antibiotic-resistance patterns. According to CDC officials, antibiotic resistance can be associated with a higher risk of hospitalization in infected individuals. For example, in two of the four Salmonella outbreaks we reviewed, ill persons were hospitalized twice as frequently as is normally seen in Salmonella outbreaks, according to CDC officials. Since 2013, more than a dozen consumer groups have supported a petition for FSIS to declare specific antibiotic-resistant serotypes of Salmonella as adulterants when found in poultry. In July 2014, FSIS denied one consumer group's petition to have antibiotic-resistant Salmonella declared an adulterant. FSIS officials told us that they have found no conclusive scientific evidence that antibiotic-resistant strains of Salmonella or Campylobacter have a greater resistance to interventions currently used in FSIS-inspected poultry plants, but the agency continues to review the relevant scientific evidence to identify any potential challenges that these serotypes may present to public health. Representatives from a government employee stakeholder group we interviewed said that rather than classifying all Salmonella as an adulterant in raw poultry products, FSIS should consider the top Salmonella serotypes causing most human illnesses. For example, the representatives said that FSIS should consider declaring a narrow range of Salmonella serotypes in select raw poultry products as adulterants, similar to the classification of E. coli O157:H7 as an adulterant in beef. CDC has identified three Salmonella serotypes in poultry associated with causing the highest number of human illnesses: Salmonella Enteritidis, Salmonella Typhimurium, and Salmonella Heidelberg. According to FSIS's risk-based inspection protocols, FSIS considers the top Salmonella serotypes identified through CDC data when ranking plants to determine the frequency of verification testing and inspections. Moreover, according to FSIS documents, the agency provides plants with data on which Salmonella serotypes were identified through verification testing. FSIS officials told us prior court cases have set a precedent that the presence of Salmonella in raw poultry products is not sufficient to declare the pathogen an adulterant because Salmonella can be killed through proper cooking. Representatives from all four industry groups we interviewed disagreed that any serotypes of Salmonella should be classified as adulterants for several reasons. Representatives from two industry groups told us rapid identification of the serotype is not available and that it can take several weeks for FSIS to identify specific serotypes from positive Salmonella verification test results. Representatives from another industry group we spoke with said Salmonella is already classified as an adulterant in fully cooked poultry products and that, for raw poultry products, FSIS includes instructions on proper handling and thorough cooking to prevent cross-contamination and eliminate the pathogen. 
In contrast with Salmonella, Campylobacter has received less attention from FSIS and stakeholder groups, in part because the pathogen is not frequently associated with outbreaks, making it difficult to attribute illnesses to it. FSIS faces a challenge in reducing Salmonella and Campylobacter contamination in poultry products because it does not have mandatory food recall authority similar to that of FDA. In 2011, Congress passed the FDA Food Safety Modernization Act, giving FDA mandatory recall authority. In October 2004, we recommended that Congress consider legislation to give FSIS mandatory recall authority, but the agency continues to lack such enforcement authority. Instead, FSIS can issue public health alerts or request voluntary recalls, among other actions, to protect human health from potentially contaminated meat and poultry products. Before requesting a voluntary recall, FSIS must gather sufficient evidence through its investigation and determine that a product is adulterated or mislabeled, among other things. According to FSIS officials, the agency requests a voluntary recall when it links a product to an ill person and obtains specific information on the source of the product (the plant), product type, production date, and product distribution. In the five outbreaks we reviewed, three of the companies voluntarily recalled or stopped distributing poultry products implicated in outbreaks before FSIS requested a recall. For the fourth outbreak, in July 2014, FSIS requested that Foster Farms conduct a voluntary recall of select chicken parts involved in an outbreak, which had been ongoing since March 2013, after definitively linking the product to an ill person; the company recalled the products. By contrast, in the fifth outbreak, CDC reported that collaborative investigative efforts by local, state, and federal officials indicated that Foster Farms chicken products were the most likely source of an outbreak of Salmonella Heidelberg that took place from June 2012 until May 2013. However, FSIS did not request, and Foster Farms did not conduct, a voluntary recall because the agency was unable to definitively link the company's product to an ill person (see app. II). FSIS officials told us that, rather than focusing on the lack of mandatory recall authority, it is more productive to work aggressively with the tools they have. For example, FSIS officials told us that withdrawing inspectors or withholding the agency's mark of inspection, thus preventing poultry products from entering commerce, can be as effective as, and faster than, FDA's recall authority for keeping unsafe food from the marketplace.

Poultry Production and Consumption

U.S. consumption of poultry products (chicken and turkey) is considerably higher than consumption of pork or beef, but less than total red meat consumption. According to USDA's Economic Research Service, the U.S. poultry industry is the world's largest producer and the second-largest exporter of poultry. Before the 1970s, poultry was largely retailed on a "whole bird" basis; chicken meat sold as parts, such as wings and breasts, was a small component of the domestic U.S. market. Chicken meat retailed as parts came about largely as a consequence of the inspection process at slaughter plants; that is, the carcass of a whole raw chicken that failed inspection would be cut to remove the part of the bird that caused the inspection failure.
The remainder of the bird was then further broken down and marketed as chicken parts. Market changes indicated to processors that consumers preferred particular chicken parts to whole birds. To satisfy consumers, processors began to break whole chickens into parts for retail sale. Trays of whole birds broken into constituent parts evolved into packages or bags of drumsticks, wings, and breasts, among other products. FSIS faces a challenge in reducing Salmonella and Campylobacter contamination in poultry products because of outdated or nonexistent standards. FSIS does not revise and develop standards frequently enough to reflect changes in industry practices and poultry consumption patterns. For example, it has taken FSIS nearly 2 decades to begin revising its Salmonella standard for ground chicken. The FSIS standard established in 1996 for ground chicken set an expectation that up to 44.6 percent of a plant's production could be contaminated with Salmonella without the plant being required to take corrective action. According to industry groups we spoke with, since the implementation of the standard, industry has developed newer technologies to reduce contamination below the levels in the standard. FSIS officials agreed and stated that the majority of poultry slaughter plants perform at levels better than the standard. In addition, as noted earlier, FSIS has not completed development of a Salmonella standard for chicken parts, even though chicken parts are now more frequently consumed than whole chickens. As a result, plants can meet the standard for young chicken carcasses and still have an outbreak associated with chicken parts, as occurred in the 2013 Foster Farms Salmonella outbreak. According to FSIS officials, revising standards takes time and resources, in part because the agency must first collect data to estimate the prevalence of pathogens in FSIS-regulated products, notify the public of proposed standards, and open a comment period, all of which can take years. As previously mentioned, FSIS expects to announce and request comment on the proposed Salmonella standard for chicken parts by the end of fiscal year 2014. FSIS also faces a challenge in reducing Salmonella and Campylobacter contamination in poultry products as a result of not having sufficient prevalence estimates. FSIS collects and analyzes data to estimate the prevalence of pathogens when the agency revises or creates standards for its regulated products. However, as discussed above, standards are not created or revised often, and agency officials we interviewed agreed that this type of data collection and analysis is done infrequently. Moreover, the data from FSIS's verification testing program make it difficult for the agency to assess contamination levels of poultry pathogens across the entire industry: as FSIS concluded in April 2012, the verification testing program was not designed to estimate the prevalence of pathogens industry-wide, because the agency does not randomly select plants for verification testing.
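To illustrate why random selection matters for prevalence estimation, the sketch below computes a simple prevalence point estimate with an approximate 95 percent confidence interval from a hypothetical random sample. The sample figures are assumptions for illustration; the report does not describe FSIS's actual estimation methodology, which may differ.

```python
import math

def prevalence_estimate(positives, sample_size, z=1.96):
    """Point estimate and approximate 95 percent confidence interval for
    prevalence, valid only if samples are drawn randomly (normal
    approximation to the binomial)."""
    p = positives / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 120 of 800 randomly selected samples test positive.
p, low, high = prevalence_estimate(120, 800)
print(f"estimated prevalence {p:.1%} (95% CI {low:.1%}-{high:.1%})")
```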
According to USDA’s National Advisory Committee on Microbiological Criteria for Foods, estimating the prevalence of pathogens in food is critical to understanding and addressing the public health risk of foodborne illness, and these estimates provide a mechanism for measuring performance against public health goals, among other things. Similarly, the majority of representatives of consumer groups and some industry groups we spoke with stated that FSIS needs estimates of prevalence of Salmonella in poultry products to set its Salmonella public health goal. As previously mentioned, FSIS recently created a new testing According to FSIS approach for ground poultry to estimate prevalence. officials, the agency plans to propose adopting this new testing approach for all of its poultry products, which would allow for more frequent data collection and improve prevalence estimates, among other things. The agency is using the results from the new testing approach to revise its performance standards for raw ground poultry products and create new standards for mechanically separated poultry. FSIS faces a challenge in reducing Salmonella contamination in poultry products because of the complex nature of Salmonella. The majority of the representatives from industry and consumer groups we interviewed, as well as FSIS officials, agreed that Salmonella is difficult to control in poultry products because it is widespread in the natural environment. For example, according to CDC officials, there are over 2,500 serotypes of Salmonella that have been identified—not all of which are equally harmful to humans. In many cases, most serotypes are rarely involved in human illness cases and outbreaks. In addition, a food safety researcher we interviewed mentioned that some serotypes of Salmonella are more likely to cause human illness; therefore, it is important to understand the genetic makeup of each serotype to determine which ones are more or less likely to cause human illness. Furthermore, the serotypes that are important in human disease and food contamination can differ considerably in different parts of the world, and different serotypes may also be associated with different animal hosts, habitats, and paths of transmission, according to CDC officials. Reducing levels of Campylobacter in poultry products poses a challenge for FSIS in part because less is known about Campylobacter. In addition, CDC officials told us that Campylobacter is less likely to be associated with outbreaks. Furthermore, technologies to detect Campylobacter might underdiagnose cases and the methods used by many diagnostic laboratories to isolate Campylobacter from samples are not standardized. Therefore, the efficacy of these tests varies considerably. Some countries such as New Zealand experienced greater success in reducing Campylobacter levels from poultry products, where Campylobacter cases were reduced by approximately 59 percent from 2006-2008 after its government and industry implemented several proactive measures and alterations to critical control points. New Zealand is leading the international risk-based framework for Campylobacter control in poultry. Representatives from an industry group cautioned, however, that although the decline in illnesses in New Zealand is impressive, it is difficult to extrapolate this success to other parts of the world. 
FSIS officials also cautioned that the agency's ability to measure a reduction in Campylobacter illnesses will depend on its ability to attribute Campylobacter illnesses to poultry and other food types and said that an interagency analysis with CDC is under way to improve such attribution. Ensuring the safety of poultry products is critical because Americans consume considerably more poultry products than beef or pork. To help ensure the safety of poultry products, USDA's FSIS has transitioned to an increasingly science-based, data-driven, risk-based approach. As a part of this approach, FSIS has taken several actions to reduce Salmonella and Campylobacter contamination in poultry products to protect human health, including tightening existing standards for Salmonella contamination in young chicken and turkey carcasses and developing a Salmonella Action Plan. FSIS has also finalized a rule to modernize the poultry slaughter inspection process, which, according to the agency, will play a role in reducing Salmonella and other poultry pathogen contamination. To assess the effects of its actions on the incidence of human illness from Salmonella and Campylobacter, FSIS has developed performance measures and associated targets for young chicken carcasses to monitor whether activities to bring plants into compliance with the standards are meeting the agency's goals, consistent with the Government Performance and Results Act (GPRA) requirement to measure performance toward achievement of agency strategic goals. However, the agency has not developed performance measures and targets for certain commonly consumed poultry products, in particular ground chicken and ground turkey, even though the standards for Salmonella contamination in these products have been in place since 1996. Similarly, it has not developed performance measures for Campylobacter contamination in young chicken and turkey carcasses, or for Salmonella contamination in turkey carcasses. FSIS believes it is not appropriate to establish measures for ground poultry until it has revised the Salmonella standards for those products; similarly, FSIS believes it is not appropriate to establish measures for Campylobacter until it has established plant compliance categories for Campylobacter in young chicken and turkey carcasses. According to agency officials, revised standards will be proposed and plant compliance categories for Campylobacter established by the end of 2014. FSIS officials told us that a performance measure is not necessary for Salmonella in young turkey carcasses because young turkey plants are meeting the standard, but an agency report on pathogen testing results from the fourth quarter of calendar year 2013 indicated that not all turkey slaughter plants met the standard. FSIS officials told us that all turkey slaughter plants are meeting the standard as of July 2014. As previously mentioned, FSIS is developing a Salmonella standard for raw chicken parts; therefore, the agency has not developed corresponding performance measures and targets. In the absence of performance measures and associated targets for these pathogens in commonly consumed poultry products, FSIS cannot quantitatively assess the effects of its actions related to these standards in meeting the agency's goal of maximizing domestic compliance with food safety policies and, ultimately, protecting public health.
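In mechanical terms, a performance measure of the kind discussed above is simply an observed value tracked against an associated target. The following sketch is a hypothetical illustration only; the compliance figures and the 90 percent target are assumptions, not FSIS data.

```python
def measure_progress(plants_meeting_standard, total_plants, target):
    """Compute a compliance-rate performance measure and compare it with an
    associated target, GPRA-style. All values here are hypothetical."""
    rate = plants_meeting_standard / total_plants
    return rate, rate >= target

rate, target_met = measure_progress(plants_meeting_standard=92,
                                    total_plants=100, target=0.90)
print(f"compliance rate {rate:.0%}; target met: {target_met}")
```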
Performance measures and targets are reported in the agency's strategic and annual plans; in their absence, performance information for these products, such as the data and trends supplied to FSIS leadership, is not being publicly reported. Without publicly reporting such information, FSIS loses the opportunity to enhance transparency by informing the public and Congress about its progress in meeting this important goal, potentially limiting oversight and accountability. Expeditious development of these measures and targets is particularly important for ground poultry, given that these products are at higher risk for contamination than whole carcasses and that their popularity has grown over the years. In addition, USDA faces several challenges that could hinder its ability to reduce contamination in poultry products. For example, practices outside the slaughter plant, such as conditions on poultry farms, can affect contamination of poultry products. To help overcome this challenge, the agency has developed guidelines on practices for controlling Salmonella and Campylobacter on farms, but the guidelines do not include information on the effectiveness of each practice, as recommended by an internal agency committee. FSIS is working with industry, USDA's Animal and Plant Health Inspection Service (APHIS), and other stakeholders to collect information on on-farm practices to inform future guidelines, but the agency has not confirmed that it would include information on the effectiveness of each on-farm practice. Without providing this information in future revisions of the guidelines, USDA is not fully informing industry of the potential benefits of adopting these practices and encouraging their implementation.

We recommend that the Secretary of Agriculture direct the Administrator of the Food Safety and Inspection Service (FSIS) to take the following four actions to help ensure that FSIS efforts protect human health by reducing Salmonella and Campylobacter contamination in FSIS-regulated poultry products:

Once FSIS revises its Salmonella standards for ground chicken and ground turkey, the agency should expeditiously develop Salmonella performance measures with associated targets for these products to monitor whether activities to bring plants into compliance with the standards are meeting the agency's goals.

Once FSIS establishes plant compliance categories for Campylobacter in young chicken and turkey carcasses, the agency should expeditiously develop Campylobacter performance measures with associated targets for these products to monitor whether activities to bring plants into compliance with the standards are meeting the agency's goals.

FSIS should expeditiously develop Salmonella performance measures with associated targets for young turkey carcasses to monitor whether activities to bring plants into compliance with the standards are meeting the agency's goals.

In future revisions of the compliance guidelines on controlling Salmonella and Campylobacter, FSIS should ensure the inclusion of information on the effectiveness of each recommended farm practice to reduce these pathogens in live poultry.

We provided a draft of this report for review and comment to the Department of Agriculture and the Department of Health and Human Services. In written comments, USDA concurred with our four recommendations; USDA's written comments and our detailed response are presented in appendix III.
According to USDA, the agency will establish appropriate measures and targets after collecting adequate data to determine whether establishments are meeting the standards for Salmonella in ground chicken and turkey and in young turkey carcasses, and for Campylobacter in young chicken and turkey carcasses. USDA also stated that it is committed to decreasing the number of Salmonella and Campylobacter illnesses associated with its regulated products, including poultry products, and to using performance standards and performance measures to achieve that reduction. Concerning the agency's compliance guidelines on controlling Salmonella and Campylobacter, USDA said that it is currently revising the guidelines to address the reduction of Salmonella and Campylobacter in live poultry and will include all available scientific information on the effectiveness of each recommended farm practice to reduce Salmonella in live poultry. USDA also provided technical comments, as did the Department of Health and Human Services, and we incorporated those comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report responds to your request that we review the U.S. Department of Agriculture's (USDA) approach to protecting human health by reducing Salmonella and Campylobacter contamination in poultry products. Our objectives were to (1) describe actions USDA has taken since 2006 to reduce Salmonella and Campylobacter contamination in poultry products that it regulates, (2) evaluate USDA's efforts to assess the effects of these actions on the incidence of human illnesses from Salmonella and Campylobacter in poultry products, and (3) determine what challenges, if any, USDA faces in reducing these pathogens in poultry products. To describe actions USDA has taken since 2006 to reduce Salmonella and Campylobacter contamination in poultry products that it regulates, we reviewed USDA regulations and documentation on actions taken from February 2006 to June 2014, including Federal Register notices; USDA's Food Safety and Inspection Service (FSIS) notices and directives; and the December 2013 Salmonella Action Plan. We chose this time frame because FSIS introduced several initiatives in 2006 to reduce Salmonella. We also reviewed FSIS budget documents from fiscal year 2006 to fiscal year 2015, including the agency's budget explanatory notes. It is not clear how much FSIS has spent or will continue to spend on these actions: these budget documents do not provide cost data on specific poultry pathogen reduction actions, and the agency was unable to provide such data beyond $2.5 million spent on poultry-related sampling programs.
We interviewed officials at FSIS headquarters and district offices about recent actions taken to directly address Salmonella and Campylobacter contamination of poultry products, including actions identified in Federal Register notices. To evaluate USDA's efforts to assess the effects of actions taken on the incidence of human illnesses from Salmonella and Campylobacter in poultry products, we reviewed USDA and FSIS strategic plans, as well as USDA annual performance reports, to identify relevant performance measures, targets, and goals. We also reviewed FSIS quarterly progress reports on Salmonella and Campylobacter testing of poultry products. We evaluated the source data, statistical methodology, and results of FSIS research articles and a study to determine whether the conclusions drawn on the effects of agency actions were adequately supported by the evidence. We interviewed FSIS headquarters officials about the effects of actions taken on the incidence of human illnesses from these pathogens. We reviewed documentation of the interagency group formed by FSIS, the Centers for Disease Control and Prevention (CDC), and the Food and Drug Administration (FDA)—the Interagency Food Safety Analytics Collaboration—to describe the purpose of the group, as well as its completed and planned analytic projects. We also interviewed FSIS and CDC officials about the group's projects to improve food attribution data. To determine any challenges USDA faces in reducing Salmonella and Campylobacter in the poultry products it regulates, we reviewed FSIS, USDA Office of Inspector General, and National Academy of Sciences reports on FSIS's inspections and management challenges. We also interviewed officials from USDA's Animal and Plant Health Inspection Service (APHIS), FSIS, and CDC to identify and describe any challenges facing USDA. Additionally, we conducted interviews with stakeholder groups, using a two-stage interview process with industry, consumer, and government employee stakeholder groups. We selected an initial set of 12 stakeholder groups based on our previous experience, from our August 2013 report on poultry and hog inspections, with large national groups with food safety and slaughter inspection knowledge. These stakeholder groups were the American Meat Institute, the Center for Foodborne Illness Research and Prevention, the Center for Science in the Public Interest, the Consumer Federation of America, Food and Water Watch, the Government Accountability Project, the National Association of Federal Veterinarians, the National Chicken Council, the National Turkey Federation, the North American Meat Association, and the Pew Charitable Trusts. We conducted an initial interview with the American Federation of Government Employees/National Joint Council of Food Inspection Locals to identify potential challenges, but we were unsuccessful in obtaining a subsequent structured interview. The sample captures a broad range of major stakeholder groups but, because it is a nongeneralizable sample, it may not include all opinions that experts on the topic hold. During the first round, we conducted exploratory interviews and analyzed the results by identifying common challenges. During the second round, we conducted structured interviews using questions that covered the most common potential challenges cited in the exploratory interviews.
Below are the five key structured interview questions we report on; they cover the challenges and potential challenges stakeholders identified during the exploratory interviews. For each question, respondents had the following response choices: definitely yes, probably yes, definitely no, probably no, or don't know. Several other questions would have required respondents to have in-depth information about FSIS, such as details about the technology the agency uses for identifying specific serotypes or specifics about how the agency would implement actions from its Salmonella Action Plan. Lacking this information, many respondents were only able to give us qualified answers with caveats. We therefore concluded that these responses were not standardized enough to be reported, and we excluded them from our report.

FSIS sets performance standards for Salmonella and Campylobacter. Do you think FSIS has sufficient authority to ensure that poultry plants comply with FSIS performance standards?

FSIS has declared some pathogen strains adulterants, such as E. coli O157:H7. Do you think it is necessary for FSIS to declare some strains of Salmonella, particularly strains more likely to cause severe illness, as adulterants in order to meet its Salmonella public health goal?

Currently, poultry plants decide whether Salmonella and Campylobacter are hazards reasonably likely to occur in their HACCP plans. Do you think this approach is adequate to keep contamination at FSIS-regulated plants within FSIS performance standards?

Verification Testing and Sampling: FSIS recently evaluated its pathogen sampling program and assessed that it could not measure the prevalence of Salmonella in poultry products over time, in part because it does not conduct random sampling. Do you think FSIS needs estimates of the prevalence of Salmonella in poultry products to set its Salmonella public health goal?

Salmonella is known to be ubiquitous and persistent in the natural environment. Do you think these traits make it difficult to control Salmonella in the poultry products FSIS regulates?

To make sure that our results were presented in an accurate and balanced manner, we evaluated responses in terms of the extent of agreement among the three stakeholder groups. Where there was stark disagreement among the groups, we presented the results separately; where there was general agreement, we reported overall results. We conducted 11 structured phone interviews from March 2014 to May 2014. Before conducting these interviews, we selected 4 of the 11 stakeholder groups and pretested the initial structured interview questions to ensure that the questions were relevant and clearly stated; based on those results, we made adjustments to the structured interview as necessary. Apart from the stakeholder interviews, we also spoke with two academic food safety researchers, identified based on their academic work and participation in a 2014 food safety conference, to discuss challenges related to controlling Salmonella and Campylobacter in poultry products. Additionally, we reviewed the five Salmonella and Campylobacter outbreaks attributed to poultry products with the highest numbers of illnesses to better understand any challenges that may have contributed to the outbreaks; these outbreaks started during the period from fiscal year 2011 through fiscal year 2013. The case studies included four Salmonella outbreaks and one Campylobacter outbreak, all attributed to poultry products.
These outbreaks are not generalizable to all outbreaks from Salmonella- and Campylobacter-contaminated poultry products; rather, the selected outbreaks provided illustrative examples of challenges USDA faces. To describe the selected outbreaks, we interviewed state officials from departments of health or agriculture in California, New York, Texas, Vermont, and Washington, the states with the highest number of illnesses for each outbreak in our review. We also interviewed CDC officials with knowledge of the outbreaks and FSIS officials familiar with each outbreak investigation to learn about any challenges USDA faced and other challenges related to investigating outbreaks, and we reviewed state, CDC, and FSIS documentation on each outbreak. For all three objectives, we visited poultry plants in California to gain a better understanding of poultry plant operations and FSIS inspection activities. We chose California because the state has a number of small and large chicken and turkey plants and produces a high volume of the chicken and turkey raised in the United States. We also reviewed prior GAO reports on food safety, surveillance systems, and performance management. We conducted this performance audit from July 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. During the course of our review, we examined a nongeneralizable sample of recent Salmonella and Campylobacter outbreaks linked to poultry products since 2011. We limited the scope of our review to the four Salmonella outbreaks with the highest numbers of confirmed illnesses among the six most recent outbreaks that the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) have investigated since 2009. In addition, we identified one Campylobacter outbreak that occurred within the time period established for the Salmonella outbreaks included in our review. As this was a review of a nongeneralizable sample of outbreaks, the information is not generalizable to all outbreaks but provides important illustrative information. In addition to the information provided for each of the five outbreaks in the summary tables below (see tables 3-7), other findings from our review included the following:

Each plant linked to the four Salmonella outbreaks included in our review was a Category 1 plant at the time the outbreaks occurred, meaning that FSIS considered the plants to be demonstrating consistent process controls to meet the agency's existing Salmonella standards.

During the five outbreaks, FSIS requested that one company conduct a voluntary recall. FSIS made the request after the agency and CDC linked a person sickened by an outbreak strain of Salmonella Heidelberg to chicken produced by that company and collected from the ill person's home. According to FSIS officials, FSIS did not request voluntary recalls during the other outbreaks for several reasons. First, for two of the Salmonella outbreaks, the companies initiated voluntary recalls before FSIS could make a formal request.
Second, for the remaining Salmonella outbreak, FSIS did not identify the level of information necessary to request a voluntary recall. Finally, for the Campylobacter outbreak, the company voluntarily elected to cease harvesting and selling raw chicken livers.

During the five outbreaks, FSIS issued two public health alerts. Specifically, FSIS issued an alert for the outbreak of Salmonella linked to ground turkey products and another for the outbreak of Salmonella linked to specific brands of chicken products.

FSIS found that each of the plants linked to three of the four Salmonella outbreaks had inadequate Hazard Analysis and Critical Control Point (HACCP) plans.

FSIS found that each of the plants linked to three of the four Salmonella outbreaks failed to maintain sanitary conditions and comply with the agency's regulatory requirements for sanitation.

FSIS determined that each of the plants linked to three of the four Salmonella outbreaks had not adequately supported parts of their hazard analyses because they did not identify Salmonella as a food safety hazard reasonably likely to occur during certain production processes.

In response to the Campylobacter outbreak, FSIS conducted a food safety assessment and found that the plant failed to correctly identify Campylobacter as a pathogen of concern in its hazard analysis, a noncompliance with FSIS regulations.

The following are GAO's comments on the U.S. Department of Agriculture's (USDA) letter dated September 15, 2014.

1. USDA commented that, throughout our report, we use the terms "mechanically separated" and "ground" when referring to poultry products and that the agency prefers the term "comminuted" and its standard definition. USDA defines comminuted poultry products as "poultry (chicken or turkey) products that have been ground, mechanically separated, or hand- or mechanically-deboned and further chopped, flaked, minced, or otherwise processed to reduce particle size." For our report, it is important to distinguish among these products because there are standards for ground poultry products but not for mechanically separated or other products that are included under the umbrella of "comminuted" poultry.

2. USDA commented that our use of data from the Centers for Disease Control and Prevention (CDC) for describing the incidence of Salmonella and Campylobacter is misleading and provides an inaccurate assessment of the burden of Salmonella illnesses specifically attributed to products regulated by USDA's Food Safety and Inspection Service (FSIS). The purpose of these data in the introduction is to provide general context for our review, not to provide information on illnesses specifically attributed to FSIS-regulated products, which are discussed later in the report. We modified our report to note that CDC's data for describing the incidence of Salmonella and Campylobacter include illnesses attributed to other sources in addition to FSIS-regulated products.

3. USDA commented that it agrees that there are limitations in the data that it relies upon for measuring illness but stated that the agency uses the best data that are available. We modified our report to state that, according to USDA, the agency is using the best data available.

4. We provided a draft of this report to the Department of Health and Human Services, which includes CDC. We incorporated the agency's technical comments as appropriate.
5. USDA commented that our statement about an example of limited enforcement authority did not include a new requirement under the agency's final rule to modernize poultry slaughter inspection. We modified our report to note this new requirement.

In addition to the individual named above, Mary Denigan-Macauley (Assistant Director), Carl Barden, Kevin Bray, Mark Braza, Nkenge Gibson, Cynthia Norris, Josephine Ostrander, and Kevin Remondini made key contributions to this report.
USDA is responsible for ensuring the safety of poultry products. The Centers for Disease Control and Prevention (CDC) report that the U.S. food supply is one of the safest in the world, yet estimate that Salmonella and Campylobacter contamination in food causes more than 2 million human illnesses per year. Poultry products contaminated with pathogens cause more deaths than any other commodity. GAO was asked to examine USDA's approach to reducing these pathogens in poultry products. GAO's objectives were to (1) describe actions USDA has taken since 2006 to reduce Salmonella and Campylobacter contamination in poultry products, (2) evaluate USDA's efforts to assess the effects of these actions on the incidence of human illnesses from Salmonella and Campylobacter in poultry products, and (3) determine challenges USDA faces in reducing these pathogens in poultry products. GAO reviewed relevant regulations and documents and interviewed officials from USDA and CDC, as well as 11 industry, consumer, and government employee stakeholder groups selected based on their knowledge of USDA's poultry slaughter inspections and food safety. Since 2006, the U.S. Department of Agriculture (USDA) has taken a number of actions to reduce contamination from Salmonella and Campylobacter (disease-causing organisms, i.e., pathogens) in poultry (chicken and turkey) products. USDA's actions to reduce these pathogens include, for example, tightening existing standards limiting the allowable amount of Salmonella contamination in young poultry carcasses, implementing the first standards limiting Campylobacter contamination in young poultry carcasses in 2011, and developing an action plan detailing a priority list of actions, such as developing new enforcement strategies, to reduce Salmonella. More recently, in August 2014, USDA published its final rule to modernize poultry slaughter inspections, which, according to the agency, will play a role in reducing Salmonella and other poultry pathogen contamination by allowing better use of agency resources, among other things. To help assess the effects of these actions on the incidence of human illness from Salmonella and Campylobacter, USDA conducted research on the effects of agency actions to reduce these pathogens and developed performance measures for certain poultry products to help monitor progress toward agency goals. For example, USDA developed a measure to indicate whether agency actions to ensure compliance with the standard for Salmonella contamination in young chicken carcasses are helping the agency achieve its goal of maximizing domestic compliance with food safety policies. However, USDA has not developed measures for Salmonella contamination in ground poultry or young turkey carcasses, even though standards for such contamination have been in place since 1996 and 2005, respectively, or for Campylobacter contamination in young poultry carcasses. USDA believes it is not appropriate to establish measures for ground poultry until the agency has revised its standards, or for Campylobacter contamination until the agency has obtained more information on compliance levels—both of which the agency expects to do by the end of 2014. USDA officials stated that they will review the agency's strategic plan to determine what performance measures, if any, are needed.
USDA does not believe a measure for young turkey carcasses is needed because historical data have shown that plants are meeting the standard; however, in calendar year 2013, two plants did not meet it, and USDA officials told GAO that these plants are no longer noncompliant. Without performance measures for these standards, USDA is not publicly reporting performance information and cannot assess the effects of its actions related to these standards in meeting the goal of maximizing domestic compliance with food safety policies and, ultimately, protecting public health. GAO identified several challenges—based, in part, on the views of 11 stakeholder groups—that could hinder USDA's ability to reduce contamination in poultry products. For example, contamination of poultry products can be affected by practices on poultry farms. To help overcome this challenge, the agency developed guidelines in 2010 on practices for controlling Salmonella and Campylobacter on farms, but the guidelines did not include information on the effectiveness of each of these practices, consistent with a recommendation from an agency advisory committee. USDA did not confirm that it plans to include this information in future guidelines. Without providing this information in future guidelines, USDA is not fully informing the poultry industry of the potential benefits of adopting these practices and encouraging their implementation. GAO recommends, among other things, that USDA develop performance measures and associated targets for certain poultry products and ensure that future revisions of its guidelines on controlling Salmonella and Campylobacter on farms include information on the effectiveness of each practice. USDA agreed with GAO's recommendations.
The DI program was established in 1956 to provide monthly cash benefits to individuals who are unable to work because of severe long-term disability. In fiscal year 2003, SSA paid about $70 billion to 7.5 million disabled workers, their spouses, and dependents, with average monthly cash benefits of about $723 per beneficiary. To be eligible for benefits, individuals with disabilities must have a specified number of recent work credits under Social Security when they first became disabled. Individuals may also be able to qualify based on the work record of a deceased, retired, or disabled parent, or a deceased spouse. Benefits are financed by payroll taxes paid into the Federal Disability Insurance Trust Fund by covered workers and their employers, based on the worker's earnings history. To meet the definition of disability under the DI program, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Individuals are engaged in SGA if they have earnings above $810 per month in calendar year 2004. Program guidelines require DI beneficiaries to report their earnings to SSA in a timely manner to ensure that they remain eligible for benefits. SSA conducts work issue continuing disability reviews (work CDRs) to determine if beneficiaries are working above the SGA level. SSA initiates a work CDR only after the beneficiary has completed a 9-month trial work period, during which the beneficiary is allowed to earn more than the SGA level without affecting his or her eligibility for benefits. The trial work period, which begins with the first month a beneficiary is eligible for DI benefits, is one of several provisions in the DI program intended to encourage beneficiaries to return to work. Once the trial work period is completed, beneficiaries are generally ineligible for future DI benefits unless their earnings fall below the SGA level. Work CDRs are triggered by several types of events, although most are generated by SSA's Continuing Disability Review Enforcement Operation (enforcement operation). This process involves periodic computer matches between SSA's administrative data and IRS wage data. The enforcement operation generates notices for cases that exceed specified earnings thresholds, which are forwarded to 1 of 8 program service centers for additional examination. The cases at each program service center are then temporarily housed in a central repository (called the computer output section) and released to "earnings reviewers" for development of work activities. Cases are generally released for development on a first-in-first-out basis, based on how long they have been in the central repository and on staff workloads. After initial review, cases for which individuals may require cessation of benefits are generally forwarded to a "disability processing specialist" for additional development. Work CDRs can also be triggered by other events. For example, SSA requires beneficiaries to undergo periodic medical examinations to assess whether they continue to be physically disabled. During such reviews, Disability Determination Service staff sometimes discover evidence that indicates the beneficiary may be working and usually forward the case to an SSA field office or program service center for earnings/work development.
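The interaction between the trial work period and the SGA test described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration using the figures cited above ($810 monthly SGA in 2004; a 9-month trial work period); it deliberately ignores the separate trial-work "services" threshold and the complex work incentive provisions the report notes staff must apply.

```python
SGA_MONTHLY = 810        # 2004 monthly SGA earnings level cited above
TRIAL_WORK_MONTHS = 9    # length of the trial work period

def months_at_risk(monthly_earnings):
    """Return the months (1-indexed) with earnings above SGA after the trial
    work period is used up. Simplified sketch: it ignores the separate
    trial-work services threshold and all work incentive provisions."""
    trial_used, flagged = 0, []
    for month, earnings in enumerate(monthly_earnings, start=1):
        if earnings <= SGA_MONTHLY:
            continue
        if trial_used < TRIAL_WORK_MONTHS:
            trial_used += 1          # trial work month: benefits unaffected
        else:
            flagged.append(month)    # a work CDR should assess these months
    return flagged

# Hypothetical beneficiary earning $1,200 for 14 consecutive months.
print(months_at_risk([1200] * 14))   # -> [10, 11, 12, 13, 14]
```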
Additional events that may trigger a work CDR include reports from state vocational rehabilitation agencies, other federal agencies, and anonymous tips. Finally, DI beneficiaries may voluntarily report their earnings to SSA by visiting an SSA field office or calling the agency's toll-free "800" number. Several SSA components are involved in processing work CDRs. While most are initially sent to SSA's program service centers as a result of the enforcement operation, some cases are referred to any one of SSA's more than 1,300 field offices for more in-depth development. Field offices also tend to be the focal points for work CDRs generated by events other than the enforcement operation. Work CDRs can entail labor-intensive, time-consuming procedures such as reviewing folders, performing in-person interviews, and contacting beneficiaries and their employers to verify their monthly earnings. Staff are also required to take into consideration several complex work incentive provisions when calculating whether earnings exceed SGA. In addition, staff—particularly in SSA field offices—are required to balance numerous competing workloads, including processing initial claims, serving individuals who walk into the field office without an appointment, meeting with beneficiaries who have requested an appointment, and processing a "special disability workload." DI overpayment detections increased from about $772 million to about $990 million between fiscal years 1999 and 2003. These overpayments included a substantial amount due to beneficiaries who worked and earned more than SGA. Our analysis of available overpayment data shows that, on average, overpayments to beneficiaries with earnings over program guidelines constitute about 31 percent of all DI overpayments. These overpayments also contributed to mounting financial losses in the DI program. Total overpayment debt increased from about $1.9 billion to nearly $3 billion from fiscal years 1999 to 2003. SSA overpayment collections increased from about $269 million to about $431 million during the same period. However, our analysis shows that waivers and write-offs also increased during this period. Total DI overpayment detections increased from about $772 million to about $990 million between fiscal years 1999 and 2003 (see fig. 1), including a substantial proportion due to beneficiary earnings. On the basis of data in a recent study from SSA, we calculated that overpayments attributable to work and earnings averaged about 31 percent of all DI overpayments annually between 1999 and 2002. We consulted SSA officials about our calculations to determine if they were accurate. These officials agreed that the estimate is generally accurate based on limited available data, but likely understates the true extent of the problem. In particular, SSA officials acknowledged that their study only examined beneficiaries who had their benefits suspended or terminated following a work CDR; it did not consider individuals who may have been overpaid but continued to receive benefits. A beneficiary may be overpaid but not placed in suspended or terminated status because (1) SSA waived the overpayment, (2) the case was still being processed, or (3) the individual became unemployed and returned to the DI rolls. Our review identified several such cases in numerous field offices. For example, one case we examined involved a beneficiary who was selected for review by the enforcement operation every year from 1998 to 2001.
Other than notations on the individual's account that the case was selected for review, there was no evidence that a work CDR was ever conducted. In February 2003, program service center staff transferred the case to a field office to have the recipient's earnings reviewed. However, field office staff were unable to contact the recipient, and the case was transferred back to the program service center in August 2003. As of March 2004, the case was still being reviewed and awaiting final SSA action. SSA officials told us that this individual should have had an overpayment listed for the period between December 1999 and September 2001. However, at the time of our review, no overpayment had yet been established, and therefore none appeared in SSA's overpayment detection data for those years. Ultimately, we estimate that this case will likely result in a $64,000 overpayment once it is fully developed and completed. The increase in DI overpayments from 1999 to 2003 has contributed to mounting financial losses in the program. Total DI overpayment debt increased from about $1.9 billion in 1999 to nearly $3 billion in 2003. During this same period, SSA's overpayment collections increased from about $269 million to about $431 million. Agency officials attributed the increase in collections in part to new collection initiatives. For example, SSA has conducted debt management workshops to (1) develop new ideas on collecting the agency's mounting outstanding debt and (2) identify and prioritize the debt that the agency should concentrate on collecting. In addition, SSA is developing new collection tools, such as wage garnishment to recoup overpayments, and has published final regulations to implement this tool. Notwithstanding these improvements, however, total overpayment debt is increasing (see fig. 2). Increases in waivers and write-offs during this period have also contributed, in part, to the DI program's growing overpayment debt. SSA must waive collection of an overpayment if it determines that the beneficiary was not at fault in causing the overpayment and either the beneficiary would be financially unable to repay the overpayment or recovery would be against equity and good conscience. The agency may also write off overpayments for various reasons, including when it is unable to locate an individual for a prolonged period of time. Waivers and write-offs increased from about $222 million in 1999 to about $325 million in 2003. The increase in waivers and write-offs is attributable, in part, to increases in total program outlays during this period. Ultimately, our review suggests that overpayments not only contribute to increasing overpayment debt but also may be a disincentive for individuals with disabilities to return to work. In particular, the potential of having to repay a large overpayment may discourage some beneficiaries from continuing to work, running contrary to SSA's goal of helping such individuals become self-sufficient. SSA's ability to detect and prevent earnings-related overpayments is hindered by a lack of timely wage data, inefficient processes for conducting work CDRs, and potentially inaccurate management information. First, the earnings data produced by the enforcement operation are typically 12-18 months old when SSA first receives them, making some overpayments inevitable. Second, SSA lacks the means to systematically screen and identify beneficiaries most likely to incur large overpayments.
Moreover, even if such a screen existed, SSA currently lacks an automated alert mechanism for notifying its field office and program service center staff about such cases. Third, SSA relies on management information data that may not accurately reflect the age of work CDR cases—the time it actually takes to review and complete them. Inaccurate management data can impede the agency's ability to effectively monitor program activities and make corrections when necessary. These weaknesses may contribute to some cases becoming old and resulting in large overpayments. We identified several cases in which as much as 7 years had passed between the point at which the case was initially selected for development and the time it was completed. SSA currently relies on outdated information to verify DI beneficiaries' eligibility for benefits. The agency conducts periodic matches between its earnings records and IRS wage data to determine if beneficiaries have earnings above the SGA level. The Continuing Disability Review Enforcement Operation is generally conducted three times annually: a principal match in May and two supplemental matches, in August and in February of the following year. According to some SSA officials, earnings data from the enforcement operation are generally about 12-18 months old by the time the cases are selected for review and arrive in the program service center. SSA officials told us that the age of the earnings data impedes the agency's ability to detect potential overpayments in a timely manner. Moreover, because a substantial proportion of all work CDRs in any given year are generated by these enforcement matches, a large proportion of this workload depends on outdated earnings information. Thus, some cases with potentially large overpayments may not be detected for extended periods of time. SSA lacks access to more timely sources of wage data for verifying DI beneficiaries' earnings, such as the Office of Child Support Enforcement's National Directory of New Hires (NDNH). This database contains quarterly state wage and new hires data that could be used to help evaluate beneficiaries' continuing eligibility for benefits more quickly than the enforcement operation. While SSA currently uses this database to periodically monitor the earnings of Supplemental Security Income (SSI) recipients, it lacks similar authority for the DI program. In particular, SSA currently lacks the authority to conduct "batch file" computer matches with the NDNH, similar to the types of matches it routinely uses to verify SSI recipients' continuing eligibility for benefits. Although the agency recently obtained "online access" to the NDNH for the DI program, this type of access only allows SSA to obtain wage data on a case-by-case basis; it does not permit the agency to systematically match all DI beneficiaries against the NDNH to identify those with high levels of earnings—a potentially valuable, cost-effective means of identifying beneficiaries who may be at risk for large overpayments. The agency also lacks the means to identify the beneficiaries who are most likely to incur large overpayments. SSA currently uses the enforcement operation to select individuals with more than $4,860 in annual earnings for a work CDR. While periodic computer matches with the NDNH would help provide more timely, comprehensive earnings data to SSA, some SSA officials told us that the agency would still need the ability to systematically screen the cases to identify those at high risk for large overpayments.
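The batch matching and screening described above can be illustrated with a small sketch. The record layouts, the sample data, and the use of the $4,860 enforcement threshold as a screening cutoff are illustrative assumptions; this is not SSA's or the NDNH's actual system or matching logic.

```python
ANNUAL_SCREEN_THRESHOLD = 4_860  # enforcement operation threshold cited above

# Hypothetical record layouts: DI beneficiaries keyed by SSN and quarterly
# NDNH-style wage records. Real SSA and NDNH systems differ.
beneficiaries = {"111-11-1111": {"monthly_benefit": 723},
                 "222-22-2222": {"monthly_benefit": 650}}
wage_records = [("111-11-1111", "2004Q1", 6_500),
                ("111-11-1111", "2004Q2", 7_000),
                ("222-22-2222", "2004Q1", 900)]

def batch_match():
    """Sum quarterly wages per beneficiary, flag totals above the screening
    threshold, and rank flagged cases by reported earnings (a crude proxy
    for potential overpayment exposure)."""
    totals = {}
    for ssn, _quarter, wages in wage_records:
        if ssn in beneficiaries:
            totals[ssn] = totals.get(ssn, 0) + wages
    flagged = [(ssn, total) for ssn, total in totals.items()
               if total > ANNUAL_SCREEN_THRESHOLD]
    return sorted(flagged, key=lambda item: -item[1])

print(batch_match())  # -> [('111-11-1111', 13500)]
```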
The agency currently uses a screen for its medical CDR reviews, which helps it identify beneficiaries who are most (or least) likely to have medically improved. This screen helps SSA prioritize the use of limited staff resources by scheduling beneficiaries identified as least likely to improve for less frequent medical CDRs and by using forms that are periodically mailed to them requesting information on their medical condition. While our prior work has identified some problems with this screening mechanism, SSA believes that, in many instances, it helps the agency avoid costly, time-consuming medical examinations that may not be necessary. However, the agency does not currently have a similar tool for its work CDRs to identify beneficiaries with high levels of earnings or other characteristics that may contribute to large overpayments. One program service center we visited is considering the use of a screen that would give higher priority to developing cases for beneficiaries with higher earnings, and thus the potential for larger overpayments. Some SSA officials we interviewed told us that such a screen would help the agency prioritize this workload and make better use of limited resources, particularly in field offices where staff are often constrained by several competing workloads, such as processing initial claims. Further, one official in this program service center told us some of the other program service centers were considering implementing this screen. Even if a screen existed that would allow SSA to identify cases with the greatest potential for large overpayments, the agency still lacks a timely alert mechanism to notify field offices and program service centers about such cases. According to some SSA officials, such a mechanism, if created, could allow the agency to quickly notify field offices and program service centers about cases that have been identified as high priority for work CDRs. SSA currently uses an alert mechanism in its SSI program to rapidly notify field offices about recipients with high levels of earnings or other factors that may affect their eligibility for benefits. These alerts are generated centrally from SSA's match with the NDNH and sent electronically to field office staff, telling them which recipients should have their cases reviewed. However, a similar alert system does not currently exist in the DI program. Instead, SSA field offices rely on daily workload management listings of potential work CDR cases that are relayed via existing agency systems. These lists summarize the cases that are awaiting review, including the "age" of each case. On the basis of such lists obtained from several field offices, we found that half of the cases were at least 117 days old. Moreover, cases that were transferred from program service centers were generally older; some were listed as being 999 days old. In addition, because the data field for measuring the age of cases on the workload management lists holds a maximum of 3 characters, SSA officials told us that these cases were likely even older than indicated on the lists. We also found that these lists do not allow managers to identify cases with the greatest potential for overpayments. As a result, staff generally review cases as they are released by managers to be developed.
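The 3-character limitation just described means that any case older than 999 days displays as 999, masking its true age. A minimal sketch of that behavior follows; it is an assumed reconstruction of the legacy field based on officials' statements, not the actual system code.

```python
def display_age(age_days):
    """Render a case age in a fixed 3-character field: ages above 999 days
    are silently capped, so a 1,400-day-old case also displays as '999'.
    (An assumed reconstruction of the legacy behavior described above.)"""
    return str(min(age_days, 999)).rjust(3)

for age in (117, 999, 1_400):
    print(age, "->", display_age(age))
```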
While some managers and staff we interviewed told us that they make a concerted effort to review the oldest cases first, others told us that they generally process cases on a "first-in-first-out" basis, which is not necessarily related to the age of the case. Our review suggests that SSA relies on potentially inaccurate data to manage its work CDR workload. In particular, our work shows that high-level management information on the age of work CDR cases may not accurately reflect the true age of cases (i.e., the actual time it took to complete them) and may result in cases being counted more than once, thus distorting the information that SSA relies on to measure the number of cases that are reviewed and completed. To test the accuracy of high-level management data for this workload, we conducted an in-depth examination of 71 randomly selected cases that were "cleared" from SSA's Processing Center Action Control System (the system) during a 1-week period in April 2004. On the basis of our sample we estimate that, overall, 49 percent of these cases were improperly cleared from the system. This means that the cases were listed as having been fully reviewed and completed when in fact they still required additional development. Improperly cleared cases can have several negative effects, according to SSA officials, including the potential to contribute to large overpayments. For example, on the basis of our sample, we estimate that 13 percent of the cases were improperly cleared from the system because they were not fully developed and did not have a "diary" attached to them—an automated notice that reminds staff to review the case after a specified period of time. Such cases might not be selected for review again until the next enforcement match—which could be as much as 1 year away—and could result in overpayments if the beneficiary had earnings that exceeded the SGA level. Most importantly, because these cases did not have a diary, SSA had no way of monitoring them or ensuring that they were properly completed. This weakness may partially explain the type of old cases with large overpayments we identified in several SSA field offices. In addition to the cases without any diary, an estimated 37 percent of the cases were incorrectly shown as cleared while still being developed in various locations, such as field offices. Although these cases had a diary—giving SSA some level of internal control over them—our analysis shows that they would likely result in management information that understates the true age of work CDR cases and distorts the overall measurement of progress in handling this workload. Such cases could also result in the double counting of work CDRs. For example, if a single case was cleared in the program service center and subsequently developed and cleared in a field office, it could be incorrectly listed as two separate work CDRs. SSA officials also acknowledged that some of the cases we reviewed that showed indications of being cleared multiple times could be counted as numerous separate work CDRs. Thus, existing high-level management data may not accurately capture how many work CDRs have actually been completed. We also found that SSA does not currently have the capability to track the disposition of work CDR cases.
For example, the agency is unable to systematically track how many work CDR cases involve overpayments. Because it lacks sufficient management information data on this workload, the agency also does not have performance goals for work CDRs similar to the measures it maintains for its medical CDR workloads, such as the number of work CDRs that should be completed each year. Moreover, given the problems we identified with potential double or multiple counting of work CDR cases, it is unclear whether SSA could establish meaningful performance goals at this time.

The vulnerabilities we identified have likely contributed to old work CDR cases and large earnings-related overpayments in the DI program. We identified several examples of cases that took years to develop and complete. Some of these cases were as much as 7 years old and involved large overpayments. The following are examples of some of these cases:

One case we observed was initially selected for review in 1997 by the enforcement operation. Although, according to SSA officials, the beneficiary’s benefits should have been discontinued in 1997, payments continued until March 2000. Agency officials could not explain why no action was taken on this case between 1997 and 2000. As a result, this beneficiary incurred a $28,000 overpayment.

Another case was selected for review each year from 1997 to 2001 by the enforcement operation. However, there was no evidence in the file that a work CDR was conducted until February 2004, and SSA officials were unable to explain why no action was taken after several consecutive enforcement matches. This beneficiary incurred an estimated $105,000 overpayment between April 1997 and December 2003.

Another case involved a beneficiary who had earnings well above SGA in 1998, when the individual first became eligible for DI benefits. However, the case only arrived in the field office for action in March 2003. SSA discontinued the beneficiary’s benefits in 2003, but SSA officials could not explain why the case took 5 years to arrive at the field office for action. As a result, the beneficiary incurred a $32,000 overpayment.

An additional case we identified involved a beneficiary who had earnings well above SGA for several years and who had incurred a prior earnings-related overpayment. SSA subsequently waived the overpayment. However, the beneficiary continued to work without reporting the earnings to SSA. The agency eventually discontinued the individual’s benefits in September 2003. At the time of our review, SSA officials estimated that the beneficiary had incurred a $102,000 overpayment.

Further compounding the vulnerabilities that contribute to aged cases and large overpayments, our review suggests that SSA has difficulty balancing competing workloads. In particular, SSA field office staff are required to perform numerous duties, including processing initial claims, serving individuals who walk into the field office without an appointment, meeting with beneficiaries who have requested an appointment, and processing the “special disability workload.” Many managers and staff we interviewed told us that work CDRs generally receive lower priority than some of these other activities, such as processing initial claims. In several offices we visited, we observed lists of pending work CDRs, sometimes stored in file cabinets for extended periods of time.

SSA is currently implementing a new automated system that may address some of the vulnerabilities we identified.
This system, called “eWork,” is intended to simplify how SSA manages and processes its disability cases. In particular, according to documentation provided by SSA, the system will establish program controls for all work CDR cases and help the agency identify higher priority cases. Once fully implemented, eWork will combine data from several different SSA databases and will automate the processing of numerous forms commonly used in developing and documenting disability cases, according to SSA. One field office we visited was piloting this system. Management and staff in this office generally reported that the system was an improvement over existing systems. In particular, officials reported that the system was useful in helping them track the age of work CDR cases, especially older cases that should potentially receive higher priority. Overall, SSA management and line staff expressed confidence that this new system will improve the agency’s ability to manage its disability cases, including work CDRs. However, because the system is new and is not yet fully implemented nationwide, we were unable to evaluate how effective it may be for addressing some of the weaknesses we identified.

We recognize that ensuring program integrity while focusing on the important goal of returning individuals with disabilities to work presents challenges for SSA. However, the weaknesses we identified in SSA’s existing work CDR processes continue to expose the program to overpayments and abuse. In particular, SSA’s reliance on outdated earnings information has contributed to overpayments and forced staff to investigate cases that are old and thus difficult and time-consuming to process. Without the ability to conduct batch-file computer matches with the National Directory of New Hires, the agency will remain vulnerable to large earnings-related overpayments. Similarly, the lack of a screen to systematically identify beneficiaries more likely to incur overpayments means that SSA cannot target cases that should receive higher priority. Even if such a screen existed, SSA would not be able to make the best use of it given the lack of an automated alert system to notify field offices and program service centers about which cases should be reviewed. Moreover, without accurate, reliable management data on the age and status of work CDR cases, SSA will find it difficult to effectively monitor this workload, identify areas that require continued improvement, and develop meaningful work performance measures.

In an environment of limited budgetary and staff resources, federal agencies such as SSA will be required to take a more strategic approach to managing ever-increasing workloads. The magnitude of earnings-related overpayments indicates that SSA should take additional steps to strengthen DI program integrity. Moreover, the prospect of having to repay a large overpayment may discourage some beneficiaries from continuing to work, running counter to SSA’s goal of helping individuals become self-sufficient. Ultimately, without a concerted effort to increase management focus on this key workload and to reengineer existing processes, SSA’s ability to ensure that trust fund dollars are protected and reserved for those who are truly eligible will continue to be compromised. The new automated system that SSA is developing may help the agency address some of the weaknesses we identified, but it is too early to determine how effective it will be.
A conscious management decision to use this system to improve DI program integrity, in conjunction with more accurate management information, will be required to help detect and prevent large overpayments.

To enhance SSA’s ability to detect and prevent overpayments in the DI program, we recommend that the Commissioner of Social Security take the following actions to improve the agency’s work CDR processes:

1. Initiate action to develop a data-sharing agreement with the Office of Child Support Enforcement to conduct batch-file periodic computer matches with the National Directory of New Hires (NDNH). Such matches would provide SSA with more timely data to help the agency systematically identify DI beneficiaries who are most likely to incur overpayments. Such a tool could also allow SSA to perform a one-time, comprehensive match against all DI beneficiary records to identify individuals who may be overpaid but have not yet been detected.

2. Consider developing an enhanced screening mechanism that would enable the agency to more effectively identify DI beneficiaries who are most likely to incur earnings-related overpayments. This would help the agency make more efficient use of limited staff and budgetary resources.

3. Study the potential for creating an alert system, similar to that used in the SSI program, for alerting field offices about recipients at high risk for earnings-related overpayments. Such a system would allow SSA to notify field offices and program service centers about beneficiaries the agency identifies as most likely to incur large overpayments.

4. Consider ways to improve the accuracy and usefulness of existing management information data. Improvements may include modifying how the agency measures the age of work CDR cases to more accurately reflect how long they are in process.

5. Once the eWork system is fully implemented, consider how it could be used to help the agency create performance goals for its work CDR workload.

We provided a draft of this report to SSA for review and comment. SSA agreed with our recommendations and, in some instances, outlined initial plans for their implementation. SSA agreed with our first recommendation to develop a data-sharing agreement with the Office of Child Support Enforcement to conduct batch-file computer matches with the NDNH. The agency noted that it pursued online access to the NDNH first because it was more cost-effective and expeditious. The agency also indicated that it is developing a new computer matching agreement that supports SSA’s use of the NDNH in the DI program for purposes of identifying potential overpayments. We encourage SSA to ensure that any new agreement will provide for periodic, batch-file matches to verify beneficiaries’ earnings at regular, specified intervals.

SSA also agreed with our second recommendation to consider developing an enhanced screening mechanism to help the agency more effectively identify DI beneficiaries most likely to incur earnings-related overpayments. In particular, SSA agreed that it should pursue a screening system similar to that currently used for medical CDRs to determine if there is an increased likelihood of earnings-related overpayments based on particular diagnosis codes. It also noted that it should study ways to improve the effectiveness of existing systems (such as the Continuing Disability Review Enforcement Operation and the Disability Control File) to help the agency focus on beneficiaries with the greatest potential for overpayments.
We agree that these are positive steps and that the agency should consider how improvements to such systems might be incorporated into emerging systems such as eWork.

With respect to our third recommendation that SSA develop an alert system similar to that currently used in the SSI program for alerting staff to cases at risk for earnings-related overpayments, SSA agreed and noted that an alert system such as the “S2” alert used for SSI wage discrepancies could provide a useful model. The agency noted that such an alert could reduce the amount of time in which a claimant would continue to receive payments while work development is initiated. Moreover, since these reports include the employer’s name and address and a quarterly breakdown of the beneficiary’s earnings, they would provide SSA staff with more specific information than is currently available. SSA also said that such an alert system could be generated from the NDNH match proposed in our first recommendation. We agree that an alert system would help identify potential overpayments more quickly, particularly if it were generated from data produced by periodic computer matches with the NDNH.

SSA agreed with our fourth recommendation to improve the accuracy and usefulness of existing management information data. SSA said that it is working on a plan to unify the manner in which it identifies and counts work and medical CDRs. The agency believes that it will be able to capture workload counts and employee time more accurately and consistently, regardless of where the work is performed. While we agree that efforts to improve existing processes and systems are necessary, it is too early to determine if the proposed modifications will address the problems we identified with high-level management information, such as the potential double counting of work CDRs.

SSA also agreed with our fifth recommendation to consider how it could use the eWork system to create performance goals for work CDRs once it is fully implemented. The agency commented that such a measure would give field offices and program service centers a better indication of what is expected of them in processing this workload and would help them balance the time needed to process competing workloads. SSA’s formal comments appear in appendix II. SSA also provided additional technical comments that we have incorporated in the report as appropriate.

Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the House and Senate Committees with oversight responsibility for the Social Security Administration. We will also make copies available to other parties upon request. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215.

This appendix provides additional details about our analysis of the Disability Insurance (DI) program’s work continuing disability review (work CDR) process, including potential weaknesses in the Social Security Administration’s (SSA) existing procedures and policies. To meet the objectives of the review, we examined DI performance data, prior reports by SSA and its Office of Inspector General (OIG), external research studies, and our prior reviews of the program.
We analyzed DI payment data over a 5-year period from 1999 to 2003 and examined cases from 14 of the 18 SSA field offices we visited and 3 program service centers. In addition, we randomly selected and reviewed 71 cases with earnings to determine if they were reviewed and processed in accordance with program guidelines. Finally, we conducted in-depth interviews with 230 management and line staff from SSA’s headquarters; its regional offices in New York and San Francisco; 18 field offices in 6 states; and 3 of the 8 regional program service centers. During our meetings, we (1) examined existing work CDR procedures; (2) documented management and staff views on the effectiveness of SSA’s work CDR processes for detecting and preventing earnings-related overpayments; and (3) discussed potential improvements to existing program processes, systems, and policies.

We conducted independent audit work in six states (California, Florida, Maryland, Massachusetts, New York, and Virginia) to examine SSA’s policies and procedures for conducting work CDRs and to identify any common weaknesses in SSA’s work CDR processes. We selected locations for field visits based on several criteria, including geographic dispersion, states with an SSA program service center, states with large numbers of DI beneficiaries, and states with large DI expenditures. In total, we visited 18 field offices and interviewed 161 SSA field office managers and line staff responsible for the DI program. We visited a mix of large offices in metropolitan areas as well as smaller offices located in the suburbs. In addition, we visited three program service centers in Richmond, California; Queens, New York; and Baltimore, Maryland. These program centers were responsible for the majority of all work CDRs identified by the enforcement operation. Where appropriate, we also visited field offices or program centers that were conducting special initiatives or piloting emerging computer systems that could affect how SSA conducts work CDRs (such as the eWork system).

During our meetings with SSA and OIG officials, we documented management and staff views on the effectiveness of work CDR policies and procedures and on potential improvements to existing processes, policies, and systems. In particular, we documented views on (1) the timeliness of existing data sources to verify beneficiary earnings, (2) the effectiveness of existing processes for identifying individuals at high risk for large overpayments, (3) the effectiveness of existing computer systems for notifying staff responsible for conducting work CDRs about cases that should be reviewed, and (4) the accuracy of management information data used to monitor work CDRs in one large program service center.

To further assess existing program processes and systems, at 10 offices and 3 service centers we judgmentally selected between 5 and 7 pending or completed work CDR cases at each location. We generally looked at older cases in order to understand where existing procedures may have weaknesses. We then conducted in-depth reviews of these case files to identify potential vulnerabilities in existing work CDR processes, policies, and systems.

As part of our study, we worked with SSA to draw a 1 percent sample of all work CDR cases that were “cleared” from the agency’s Processing Center Action Control System over a 1-week period in April 2004 (the “study population”).
Our objective was to determine whether work CDR cases were cleared in accordance with agency guidelines and to assess the accuracy of high-level management data produced by this system. This sample resulted in a total of 151 cleared cases. We then randomly selected 71 of these 151 cases for review. As part of our review, we discovered that there was a potential for cleared work CDR cases to appear multiple times in SSA’s Processing Center Action Control System. On the basis of our discussions with knowledgeable SSA officials, we determined that it would be highly unlikely for cases to be listed as “cleared” multiple times in a 1-week period. Therefore, we assumed that cases did not appear more than once in the 1-week period from which we drew our sample.

Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results using 95 percent confidence intervals. A confidence interval is an interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. For this file review, the margin of error for each percentage estimate does not exceed plus or minus 10 percentage points, unless otherwise noted. The margin of error is the distance from each estimate to the upper or lower boundary of its 95 percent confidence interval. (An illustrative sketch of this computation follows the list of related GAO products below.)

To assess the reliability of the databases we used, we reviewed reports provided by SSA and its Office of Inspector General, which contained recent assessments of these databases. We also interviewed knowledgeable agency officials to further document the reliability of these systems. In addition, we checked the data for internal logic, consistency, and reasonableness. We determined that all the databases were sufficiently reliable for the purposes of our review.

In addition to those named above, Jeff Bernstein, Sue Bernstein, Dan Schwimer, Salvatore F. Sorbello, Sidney Schwartz, and Shana Wallace made important contributions to this report.

Social Security Disability: Reviews of Beneficiaries’ Disability Status Require Continued Attention to Achieve Timeliness and Cost-Effectiveness. GAO-03-662. Washington, D.C.: July 24, 2003.

High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003.

SSA Disability: Enhanced Procedures and Guidance Could Improve Service and Reduce Overpayments to Concurrent Beneficiaries. GAO-02-802. Washington, D.C.: September 5, 2002.

Social Security Administration: Agency Must Position Itself Now to Meet Profound Challenges. GAO-02-289T. Washington, D.C.: May 2, 2002.

Social Security Disability: Disappointing Results from SSA’s Efforts to Improve the Disability Claims Process Warrant Immediate Attention. GAO-02-322. Washington, D.C.: February 27, 2002.

Social Security Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-778. Washington, D.C.: June 15, 2001.

High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001.

Major Management Challenges and Program Risks: Social Security Administration. GAO-01-261. Washington, D.C.: January 2001.

Social Security: Review of Disability Representatives. GAO/HEHS-99-50R. Washington, D.C.: March 4, 1999.
Major Management Challenges and Program Risks: Social Security Administration. GAO/OCG-99-20. Washington, D.C.: January 1999.

High-Risk Program: Information on Selected High-Risk Areas. GAO/HR-97-30. Washington, D.C.: May 16, 1997.

High-Risk Series: An Overview. GAO/HR-97-1. Washington, D.C.: February 1997.
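To illustrate the margin-of-error statement in the methodology above, the following is a minimal sketch of a 95 percent confidence interval for a proportion estimated from a simple random sample drawn without replacement, using the normal approximation with a finite population correction to reflect the 71-of-151 design. The report does not specify the exact estimator GAO used, so this is an approximation; the 35-of-71 figure is inferred from the reported 49 percent estimate.

```python
import math

def proportion_ci(successes, n, N, z=1.96):
    """95 percent confidence interval for a proportion, from a simple
    random sample of n cases drawn without replacement from a population
    of N cases, using the finite population correction."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n * (N - n) / (N - 1))
    return p, p - z * se, p + z * se

# Figures from the report: 71 cases sampled from 151 cleared cases,
# with about 35 (49 percent) estimated to be improperly cleared.
p, lo, hi = proportion_ci(35, 71, 151)
print(f"estimate {p:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
# Prints roughly: estimate 49%, 95% CI 41% to 58% -- a margin of error of
# about 8.5 percentage points, within the reported 10-point bound.
```

The finite population correction matters here: because nearly half the population was sampled, it shrinks the margin of error from roughly 11.6 points to roughly 8.5, which is what keeps the estimates within the stated plus-or-minus 10-point bound.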
The Social Security Administration's (SSA) Disability Insurance (DI) program is one of the nation's largest cash assistance programs for disabled workers. In fiscal year 2003, the DI program provided about $70 billion in financial assistance to approximately 7.5 million disabled workers, their spouses, and dependent children. This program has grown in recent years and is poised to grow further as the baby boom generation ages. The Senate Committee on Finance asked GAO to (1) determine the amount of overpayments in the DI program, particularly those attributable to earnings or work activity, and (2) identify any vulnerabilities in SSA's processes and policies for verifying earnings that may contribute to work-related overpayments.

Overpayment detections in the DI program increased from $772 million in fiscal year 1999 to about $990 million in 2003. The true extent of overpayments resulting from earnings that exceed agency guidelines is currently unknown. Based on available data from SSA, GAO found that about 31 percent of all DI overpayments are attributable to DI beneficiaries who worked and earned more than allowed. Moreover, GAO found that these overpayments contributed to mounting financial losses in the program. From 1999 to 2003, total overpayment debt increased from about $1.9 billion to nearly $3 billion.

Three basic weaknesses impede SSA's ability to prevent and detect earnings-related overpayments. First, the agency lacks timely data on beneficiaries' earnings and work activity. Second, SSA uses inefficient processes to perform work continuing disability reviews (work CDRs). Third, the agency relies on potentially inaccurate management information to monitor and oversee some parts of this workload. These weaknesses contributed to some work CDR cases GAO identified that were as much as 7 years old, resulting in potential and established overpayments as large as $105,000 per beneficiary. SSA is developing new automated systems that may address some of these problems and could help the agency balance the important goals of encouraging individuals with disabilities to return to work while also ensuring program integrity. However, it is too early to determine how effective such systems will be.
Since the early 1990s, GSA and the federal judiciary have been carrying out a multibillion-dollar courthouse construction initiative to address the judiciary’s growing needs. In 1993, the judiciary identified 160 court facilities that required either the construction of a new building or a major annex to an existing building. From fiscal year 1993 through fiscal year 2005, Congress appropriated approximately $4.5 billion for 78 courthouse construction projects. Since fiscal year 1996, the judiciary has used a 5-year plan to prioritize new courthouse construction projects, taking into account a court’s need for space, security concerns, growth in judicial appointments, and any existing operational inefficiencies. The judiciary’s most recent 5-year plan (covering fiscal years 2005 through 2009) identifies 57 needed projects that are expected to cost $3.8 billion.

GSA and the judiciary are responsible for managing the multibillion-dollar federal courthouse construction program, which is designed to address the judiciary’s long-term facility needs. The Administrative Office of the United States Courts (AOUSC), the judiciary’s administrative agency, works with the nation’s 94 judicial districts to identify and prioritize needs for new and expanded courthouses. The U.S. Courts Design Guide (Design Guide) specifies the judiciary’s criteria for designing new court facilities and sets the space and design standards that GSA uses for courthouse construction. First published in 1991, the Design Guide has been revised several times to address budgetary considerations, technological advancements, and other issues, and the guide is currently undergoing another revision.

GSA provides a range of real property services, including maintenance, repairs, alterations, and leasing, to numerous federal agencies and the federal judiciary. The Public Buildings Amendments of 1972 made several important revisions to the Federal Property and Administrative Services Act. First, the 1972 law created a new revolving fund, later named FBF. Next, it required agencies that occupy GSA-controlled buildings to pay rent to GSA, which is to be deposited in the revolving fund and used for GSA real property services. GSA charges rent based on appraisals for facilities it owns and on the actual lease amount for facilities it leases on the tenants’ behalf. The legislation also authorized executive agencies other than GSA that provide space and services to charge for them. The rent requirement is intended to reduce costs and encourage more efficient space utilization by making agencies accountable for the space they use.

GSA proposes spending from FBF for courthouses as part of the President’s annual budget request to Congress. GSA has been using the judiciary’s 5-year plan for new courthouse projects since fiscal year 1996 to develop requests for both new courthouses and expanded court facilities. GSA also prepares feasibility studies to assess various courthouse construction alternatives and serves as the central point of contact with the judiciary and other stakeholders throughout the construction process. For courthouses that are selected for construction, GSA prepares detailed project descriptions, called prospectuses, that include the justification, location, size, and estimated cost of the new or annexed facility. GSA typically submits two prospectuses to Congress.
The first prospectus generally requests authorization and funding to purchase the site and design the building, and the second prospectus generally requests authorization and funding for construction, as well as any additional funding needed for site and design work. Once Congress authorizes and appropriates funds for a project, GSA refines the project budget and selects private-sector firms for the design and construction work. Figure 1 illustrates the process for planning, approving, and constructing a courthouse project.

Courthouse projects continue to be costly, and increasing rents and budgetary constraints have given the judiciary further incentive to control its costs. The judiciary pays rent to GSA for the use of the courthouses, which GSA owns, and the proportion of the judiciary’s budget that goes to rent has increased as the judiciary’s space requirements have grown. According to the judiciary, rent currently accounts for just over 20 percent of its operating budget and is expected to increase to over 25 percent of its operating budget in fiscal year 2009, when the rental costs of new court buildings are included. Additionally, in fiscal year 2004, the judiciary faced a budgetary shortfall and, according to the judiciary, reduced its staff by 6 percent.

In September 2004, the judiciary announced a 2-year moratorium on new courthouse construction projects as part of an effort to address its increasing operating costs and budgetary constraints. During this moratorium, AOUSC officials said that they plan to reevaluate the courthouse construction program, including reassessing the size and scope of projects in the current 5-year plan, reviewing the Design Guide’s standards, and reviewing the criteria and methodology used to prioritize projects. Judiciary officials also said that they plan to reevaluate their space standards in light of technological advancements and opportunities to share space and administrative services.

Our work in the 1990s showed that decision makers within GSA and the judiciary had wide latitude in making choices that significantly affected costs. The judiciary’s 5-year plan did not reflect all of the judiciary’s most urgently needed projects. However, the judiciary has since made some of our recommended changes. We also found that the judiciary did not compile data that would allow it to determine how many and what types of courtrooms it needs. The judiciary concluded that additional data and analysis were not necessary.

In 1995, we testified that a primary reason for differences in the construction costs of courthouses was that GSA and the judiciary had wide latitude in making choices about the location, design, construction, and finishes of courthouse projects. These choices were made under circumstances in which budgets or designs were often committed to before requirements were established. In addition, design guidance was flexible, and systematic oversight was limited. As a result, some courthouses had more expensive features than others. While recognizing that some flexibility was needed and that some costly features may be justifiable, we found that the flexibility in the process should have been better managed.
We recommended that GSA and AOUSC clearly define the scope of construction projects and refine construction cost estimates before requesting project approval and final funding levels; establish and implement a systematic and ongoing project oversight and evaluation process to compare courthouse projects, identify opportunities for reducing costs, and apply lessons learned to future projects; and establish a mechanism to monitor and assess the use of flexibility within design guidance to better balance choices made about courthouse design, features, and finishes. GSA and the judiciary said that since 1996, they have also taken several actions to improve the courthouse construction program, including developing priority lists of locations needing additional space (the 5-year plan), revising the Design Guide, and placing greater emphasis on cost consciousness in the courthouse construction guidance the judiciary provides to GSA.

In a 2004 congressional briefing, we reported that GSA had attributed some cost growth in courthouse construction projects to a number of factors, including changes in the scope of the projects. In Buffalo, New York, for example, GSA had to change the scope of the courthouse project and acquire an entirely new site in order to achieve the necessary security-based setbacks from the street. The judiciary said that funding delays have slowed the progress of the program by creating a backlog of projects and have increased costs by 3 to 4 percent per year because of inflation. The judiciary also indicated that limiting the size of courthouses to stay within budget has resulted in space shortages sooner than expected at some courthouses.

In a 2004 report related specifically to a new federal courthouse proposed for Los Angeles, we found that the government will likely incur additional construction and operational costs beyond the $400 million estimated as needed for the new courthouse. Some of these additional costs are attributable to operational inefficiencies. Specifically, the court is split between a new building and an existing courthouse in Los Angeles, both of which will, according to the judiciary, require additional courtrooms to meet the district court’s projected space requirements in 2031.

In 1993, we reviewed the long-term planning process used by the judiciary to estimate its space requirements. We found that AOUSC’s process for projecting long-term space requirements did not produce results that were sufficiently reliable to form the basis for congressional authorization and funding approval of new construction and renovation projects for court space. Specifically, three key problems impaired the accuracy and reliability of the judiciary’s projections. First, AOUSC did not treat all districts consistently. For example, the procedure used to convert caseload estimates to staffing requirements did not reflect differences among districts that affect space requirements. Second, according to AOUSC’s assumptions about the relationship between caseloads and staff needs, many district baseline estimates did not reflect the districts’ current space requirements. For example, when a district occupied more space than its caseload warranted, future estimates of needs were overstated. Third, AOUSC’s process did not provide reliable estimates of future space requirements because the methodology used to project caseloads did not use standard, accepted statistical methods.
We recommended that AOUSC revise the long-term planning process to increase consistency across regions, establish accurate caseload baselines for each district, and increase the reliability of the projected caseloads by applying an accepted statistical methodology and reducing subjectivity in the process. In May 1994, we testified that the judiciary had implemented some of these recommendations. For example, on the basis of our recommendation, whenever a decision was made to proceed on a particular building project, AOUSC provided GSA with detailed 10-year space requirements for prospectus development and an overall summary of its projected 30-year space requirements for purposes of site planning.

In 2001, we reported that since 1994, AOUSC had continued its efforts to improve its long-term planning process in implementing our previous recommendations. Specifically, the judiciary began (1) using an automated computer program that applied Design Guide standards to estimate space requirements, (2) employing a standard statistical forecasting technique to improve caseload projections, and (3) providing GSA with data on its 10-year projected space requirements to support the judiciary’s request for congressional approval of funds to build new facilities.

In 1996, we reported that the judiciary had developed a methodology for assessing project urgency and a short-term (5-year) construction plan to communicate its urgent courthouse construction needs. Our analysis suggested that its 5-year plan did not reflect all of the judiciary’s most urgent construction needs. We found that the judiciary, in preparing the 5-year plan, developed urgency scores for 45 projects but did not develop urgency scores for other locations that, according to AOUSC, also needed new courthouses. Our analysis of available data on conditions at the 80 other locations showed that 30 of them likely would have had an urgency score higher than some projects in the plan. We recommended that the Director of AOUSC work with the Judicial Conference Committee on Security, Space, and Facilities to make improvements to the 5-year plan, including fully disclosing the relative urgency of all competing projects and articulating the rationale or justification for project priorities, including information on the conditions that are driving urgency—such as specific security concerns or operational inefficiencies. In commenting on the report, AOUSC generally agreed with our recommendations and indicated that many of the improvements we recommended were already under consideration. It also recognized that some courthouse projects that were already underway may have had lower priority scores because the funding had already been provided by the time the priority scores were developed.

In 1997, we reported that the judiciary maintains a general practice of, whenever possible, assigning a trial courtroom to each district judge. However, we also noted that the judiciary did not compile data on how often and for what purposes courtrooms are actually used, and it did not have analytically based criteria for determining how many and what types of courtrooms are needed. We concluded that the judiciary did not have sufficient data to support its practice of providing a trial courtroom for every district judge.
We recommended that the judiciary establish criteria for determining effective courtroom utilization and a mechanism for collecting and analyzing data at a representative number of locations so that trends can be identified over time and better insights obtained on court activity and courtroom usage; design and implement a methodology for capturing and analyzing data on usage, courtroom scheduling, and other factors that may substantially affect the relationship between the availability of courtrooms and judges’ ability to effectively administer justice; use the data and criteria to explore whether the one-judge, one-courtroom practice is needed to promote efficient courtroom management or whether other courtroom assignment alternatives exist; and establish an action plan with time frames for implementing and overseeing these efforts.

In 1999, AOUSC contracted for a study of the judiciary’s facilities program to address, among other things, the courtroom-sharing issue and identify ways to improve its space and facility efforts. As part of this study, the contractor analyzed how courtrooms are used, assigned, and shared by judges. We reviewed the courtroom use and sharing portion of this study and concluded, along with others, that the study was not sufficient to resolve the courtroom-sharing issue. We recommended that the Director, AOUSC, in conjunction with the Judicial Conference’s Committee on Court Administration and Case Management and Committee on Security and Facilities, design and implement cost-effective research more in line with the recommendations in our 1997 report. We also recommended that AOUSC establish an advisory group made up of interested stakeholders and experts to assist in identifying study objectives, potential methodologies, and reasonable approaches for doing this work. In responding to the report, AOUSC disagreed with our recommendations because it believed the contractor study was sufficient and additional statistical studies would not be productive.

In a 2002 report, we found that the judiciary’s policies recognized that senior district judges with reduced caseloads were the most likely candidates to share courtrooms and that some active and senior judges were sharing courtrooms in some locations, primarily when there were not enough courtrooms for all judges to have their own. However, because of the judiciary’s belief in the strong relationship between ensured courtroom availability and the administration of justice, and the wide discretion given to circuits and districts in determining how and when courtroom sharing may be implemented, we concluded that there would not be a significant amount of courtroom sharing in the foreseeable future, even among senior judges.

We have reported over the years that GSA has struggled to address the repair and alteration needs identified in its inventory of owned buildings. In 1989, we found that FBF’s inability to generate sufficient revenue was due, in large part, to restrictions imposed on the amount of rent GSA could charge federal agencies, and we recommended that Congress remove all rent restrictions and not mandate any further restrictions. It is also important to note that not all federal property is subject to FBF rent payments because GSA does not control all federal properties. We are currently conducting a review for this committee regarding the issues associated with the judiciary’s request for a $483 million permanent, annual exemption from rent payments to GSA.
As part of our series on high-risk issues facing the federal government, we have reported that GSA has struggled over the years to meet the requirements for repairs and alterations identified in its inventory of owned buildings. By 2002, its estimated backlog of repairs had reached $5.7 billion. We have reported that adverse consequences of the backlog included poor health and safety conditions, higher operating costs associated with inefficient building heating and cooling systems, restricted capacity to modernize information technology, and continued structural deterioration resulting from such things as water leaks.

We reported that FBF has not historically generated sufficient revenue to address the backlog. On the basis of the work we did in the late 1980s and early 1990s, we concluded that federal agencies’ rent payments provided a relatively stable, predictable source of revenue for FBF but that this revenue was not sufficient to finance both growing capital investment needs and the cost of leased space. We found that FBF’s inability to generate sufficient revenue during that time was compounded by restrictions imposed on the amount of rent GSA could charge federal agencies. Congress and OMB had instituted across-the-board rent restrictions that reduced FBF by billions of dollars over several years, and they later continued to restrict what GSA could charge some agencies, such as the Departments of Agriculture and Transportation. Because these rent restrictions were a principal reason why FBF had accumulated insufficient money for capital investment, we recommended that Congress remove all rent restrictions and not mandate any further restrictions. According to GSA, most of the restrictions initiated by Congress and OMB have been lifted.

However, the GSA Administrator has the authority to grant rent exemptions to agencies, and GSA data show that several rent exemptions are currently in place. In general, these exemptions are narrowly focused on a single building, or even part of a single building, or are granted for a limited duration. Table 2 summarizes the current rent exemptions that exist in GSA buildings, according to data GSA provided.

In fiscal year 2006, according to data from GSA, $7.7 billion in expected FBF revenue is projected to come from rent paid by over 60 different federal tenant agencies, such as the Departments of Justice and Homeland Security. Congress sets annual limits on how much FBF revenue can be spent for various activities through the appropriations process. In addition, Congress may appropriate additional amounts for FBF; between fiscal year 1990 and fiscal year 2005, Congress made direct appropriations into FBF in all but 3 fiscal years. This additional funding was not tied directly to any specific projects or types of projects. The statutory language relating to the direct appropriations states that additional amounts are being deposited into FBF for the purposes of the fund.

It is also important to note that not all federal property is subject to FBF rent payments. While GSA owns and leases property and provides real estate services for numerous federal agencies, we reported in 2003 that GSA owns only about 6 percent of federal facility space in terms of building floor area. Other agencies, including the Department of Defense (DOD), the U.S. Postal Service, and the Department of Energy, have significant amounts of space that they own and control without GSA involvement. In all, over 30 agencies control real property assets.
Property-owning agencies do not pay rent into FBF or receive services from GSA for the space they occupy in the buildings that they own. For example, the Pentagon and military bases are owned by DOD, and national park facilities are owned by the Department of the Interior; as a result, these facilities are maintained by DOD and Interior, respectively.

In December 2004, the judiciary requested that the GSA Administrator grant a $483 million permanent, annual exemption from rent payments—an amount equal to about 3 times the amount of all other rent exemptions combined. This exemption would equal about half of the judiciary’s $900 million annual rent payment to GSA for occupying space in federal courthouses. The judiciary has expressed concern that the growing proportion of its budget allocated to GSA rent payments is having a negative effect on court operations. According to GSA data, the judiciary increased the owned space it occupies by 15 percent from 2000 to 2004. In February 2005, the GSA Administrator declined the request because GSA considered it unlikely that the agency could replace the lost income with direct appropriations to FBF.

In April 2005, this subcommittee requested that we look into issues associated with the judiciary’s request for a permanent, annual exemption from rent payments to GSA. Our objectives for this work are to determine the following:

1. How are rent payments calculated by GSA and planned and accounted for by the judiciary?

2. What changes, if any, has the judiciary experienced in rent payments in recent years?

3. What impact would a permanent rent exemption have on FBF?

Our work is still underway, but our past work on related issues shows that rent exemptions have been a principal reason why FBF has accumulated insufficient money for capital investment.

We conducted our work for this testimony in June 2005 in accordance with generally accepted government auditing standards. During our work, we reviewed past GAO work on federal real property and courthouse construction issues, analyzed AOUSC and GSA documents, and interviewed AOUSC and GSA officials.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or the other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-2834 or [email protected]. Keith Cunningham, Randy De Leon, Maria Edelstein, Bess Eisenstadt, Joe Fradella, Susan Michal-Smith, David Sausville, and Gary Stofko also made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the last 20 years, GAO has compiled a large body of work on courthouse construction and federal real property. The General Services Administration (GSA) owns federal courthouses and funds related expenses from its Federal Buildings Fund (FBF)--a revolving fund used to finance GSA real property services, including the construction and maintenance of federal facilities under GSA control. The judiciary pays rent to GSA for the use of these courthouses, and the proportion of the judiciary's budget that goes to rent has increased as its space requirements have grown. In December 2004, the judiciary requested a $483 million permanent, annual exemption from rent payments to GSA to address budget shortfalls. In this testimony, GAO (1) summarizes its previous work on courthouse construction and (2) provides information on FBF and GAO's ongoing work on the federal judiciary's request for a permanent, annual $483 million exemption from rent payments to GSA.

GAO's courthouse construction work to date has focused primarily on courthouse costs, planning, and courtroom sharing. In the 1990s, GAO reported that wide latitude among judiciary and GSA decision makers in choices about location, design, construction, and finishes often resulted in expensive features in some courthouse projects. The judiciary has since placed greater emphasis on cost consciousness in the guidelines for courthouse construction that it provides to GSA. Related to planning, GAO also found in the 1990s that long-range space projections by the judiciary were not sufficiently reliable and that the judiciary's 5-year plan did not reflect all of its most urgently needed projects. The judiciary has made changes to improve its planning and data reliability. During previous work, GAO also found that the judiciary did not track sufficient courtroom use data to gauge the feasibility of courtroom sharing.

GSA has been unable to generate sufficient revenue through FBF over the years and thus has struggled to meet the requirements for repairs and alterations identified in its inventory of owned buildings. By 2002, the estimated backlog of repairs had reached $5.7 billion, and consequences included poor health and safety conditions, higher operating costs, restricted capacity for modern information technology, and continued structural deterioration. GSA's inability to generate sufficient revenue in the past was compounded by restrictions imposed on the rent GSA could charge federal agencies. Consequently, GAO recommended in 1989 that Congress remove all rent restrictions and not mandate any further restrictions, and most restrictions have since been lifted. Some narrowly focused rent exemptions, many of limited duration, still exist today, but together they represent roughly a third of the $483 million permanent exemption the judiciary is currently requesting from GSA. The judiciary has requested the exemption, equaling about half of its annual rent payment, because of budget problems it believes its growing rent payments have caused. GSA data show that the GSA-owned space occupied by the judiciary has increased significantly. GAO is currently studying the potential impact of such an exemption on FBF, but past GAO work shows rent exemptions have been a principal reason why FBF has accumulated insufficient money for capital investment.
Various organizations must be able to operate for the U.S. securities markets to function. Individual investors and institutions such as mutual funds send their orders to buy and sell stocks and options to broker-dealers, which route them to be executed at one of the many exchanges or electronic trading venues in the United States. After a securities trade is executed, a process known as clearance and settlement ensures the accuracy of the trade, transfers ownership of the securities from the seller to the buyer, and exchanges the necessary payment between these two parties. Separate organizations perform this process for stocks and for options, while a single depository maintains records of ownership for the bulk of the securities traded in the United States. Banks participate in the U.S. securities markets by acting as clearing banks that maintain accounts for broker-dealers to accept and make payments for these firms’ securities activities. The payments that are exchanged between the banks of clearing organizations, broker-dealers, and their customers are processed by systems operated by the Federal Reserve or other private payment system processors. Virtually all of the information processed is transferred between parties through telecommunications systems; as a result, the securities markets depend heavily on the telecommunications industry’s supporting infrastructure.

Although thousands of entities are active in the U.S. securities markets, certain key participants are critical to the ability of the markets to function. Some are more important than others because they offer unique products or perform vital services. For example, markets cannot function without the activities performed by clearing organizations, and in some cases only one clearing organization exists for particular products. In addition, other market participants are critical to overall market functioning because they consolidate and distribute price quotations or information on executed trades. Other participants may be critical to the overall functioning of the markets only in the aggregate. For example, if one of the thousands of broker-dealers in the United States is unable to operate, its customers may be inconvenienced or unable to trade, but the impact on the markets as a whole might be limited to reduced liquidity or less price competition. However, a small number of large broker-dealers account for sizeable portions of the daily trading volume on many exchanges. If several of these large firms were unable or unwilling to operate, the markets might not have sufficient trading volume to function in an orderly or fair way.

Several federal organizations oversee the various securities market participants. SEC regulates the stock and options exchanges and the clearing organizations for those products. In addition, SEC regulates the broker-dealers that trade on those markets and other participants, such as mutual funds, which are active investors. The exchanges also have responsibilities as self-regulatory organizations (SRO) for ensuring that their participants comply with the securities laws and these organizations’ own rules. To oversee the operational risks at the securities exchanges and clearing organizations, SEC published its Automation Review Policy (ARP) in 1989, which advised SROs prospectively of SEC’s expectations on how these organizations should address information dissemination, physical security, and business continuity challenges.
ARP staff conduct reviews of how these organizations are addressing SEC’s expectations in these areas. Additionally, several federal organizations have regulatory responsibilities over banks and other depository institutions, including those active in the securities markets. The Federal Reserve oversees bank holding companies and state-chartered banks that are members of the Federal Reserve System. The Office of the Comptroller of the Currency (OCC) examines nationally chartered banks.

To ensure that the functioning of the financial markets is protected, the financial sector is one of several key infrastructures that the United States has designated as critical to our nation. To protect these infrastructures, the Homeland Security Act of 2002 created the Department of Homeland Security (DHS) and gave it wide-ranging responsibilities for leading and coordinating the overall protection effort for the nation’s critical infrastructure. Homeland Security Presidential Directive 7 further defines these responsibilities for DHS and for those federal agencies given responsibility for particular industry sectors, such as telecommunications or banking and finance, known as sector-specific agencies. The Department of the Treasury (Treasury) is the federal agency responsible for infrastructure protection activities in the banking and finance sector, which includes coordinating and collaborating with relevant federal agencies, state and local governments, and the private sector.

The threats for which organizations in the financial and other critical sectors must be prepared vary. As the events of September 11 illustrated, terrorist activity can pose a significant threat to U.S. entities. Events such as attempts to bomb key facilities can significantly impair the operations of an affected organization, and events involving nuclear, radiological, or chemical hazards could cause substantial damage to key facilities or necessary infrastructure over a wide area or render such facilities and infrastructure inaccessible for extended periods. Similarly, major natural disasters such as hurricanes, tornadoes, or earthquakes also can result in wide-scale damage or make areas inaccessible just about anywhere in the United States. In addition to events that cause physical damage, financial market organizations remain a prime target for individuals or organizations seeking to use cyber attacks to obtain unauthorized access or prevent legitimate users from accessing the key networks and systems upon which the financial markets depend.

Moreover, concern has grown about the threat of an influenza pandemic and the impact it could have on the operations of entities in the United States, including those in the financial markets. With individuals in other countries having already fallen ill and died as a result of the H5N1 strain of avian flu, the U.S. government is urging all businesses to prepare for a pandemic. The pandemic threat is different from those previously envisioned because it could affect large numbers of people simultaneously, with waves of illness occurring for weeks at a time over the course of several months.

Since our last report, all seven organizations whose operations we considered critical to the overall functioning of U.S. securities markets have put in place business continuity capabilities that reduce their vulnerability to disruption by a wide-scale disaster.
These capabilities include having backup operating sites that have staff capable of performing the organizations' critical tasks and that are geographically distant from their primary operating locations. All seven critical organizations have taken steps to reduce the likelihood that power and telecommunications outages will affect their operations, and all have tested their business continuity capabilities by running simulations or performing live processing of their primary activities from backup locations. All seven critical organizations are developing business continuity plans to address the risk of infectious pandemics, although at the time we reviewed these organizations only one had fully developed a plan that incorporated the various elements needed to address such an occurrence. Each of the seven organizations also has continued to enhance the measures it uses to prevent physical attacks from disrupting its operations, with those that still had vulnerabilities using their business continuity capabilities to mitigate those weaknesses. Each organization continued to improve the information security measures intended to mitigate the risk of electronic attacks, including taking or considering additional actions we identified that could further improve their information security. Representing many of the most active market participants, the large broker-dealers and banks that we contacted also have continued to improve their disaster-recovery capabilities. Although maintaining their trading staff in single locations increases the risk that these firms will be unable to resume activities promptly after a wide-scale disaster, the major broker-dealers we reviewed have implemented various measures to mitigate such risks, including cross-training staff and establishing dispersed backup trading locations. Since our 2004 report, all the critical organizations have established business continuity capabilities that reduce the likelihood that a wide-scale physical disaster would disrupt their key operations. When we last reported, four of the seven organizations had established backup sites capable of performing the key activities they needed to be operational and located them at considerable distances from their primary sites to reduce the likelihood that a disaster, even a wide-scale event, would render both locations unusable. However, at that time, we also reported that three of the critical organizations lacked business continuity capabilities that likely would have allowed them to resume operations shortly after such disasters. For example, one of these organizations had a backup site that it could use to conduct its key activities, but this site was within a few miles of its primary location and therefore also could have been rendered unusable in a wide-scale disaster. As of September 2006, all seven critical organizations now have geographically distant backup sites or other means of conducting their key operations. For example, one of the organizations previously lacking a geographically dispersed site has completed a new data center that is more than 1,000 miles from its primary operating locations and that now is capable of conducting all the key processing that the organization would need to be operational.
Because the distance between sites is too great to allow both the primary and the backup site to process identical data simultaneously, the organization has implemented a proprietary hardware-based data replication technology that ensures that copies of all production data and processing results from the primary sites are stored and then transmitted to the remote site. The organization's staff indicated that since installing this technology, the time required for the remote site to take over operations has fallen to less than 2 hours, with less than a minute of data loss, if a disaster were to affect both primary processing sites. Rather than establishing a geographically distant site that exactly duplicates its primary site, another of these three organizations instead acquired the capability to conduct its critical trading activities through an electronic system whose processing site is more than 700 miles from the organization's current operating site. Finally, to better ensure that it would be able to operate in the aftermath of a wide-scale disaster, the last of these three organizations installed hardware capable of performing its critical processing operations at a site that is more than 200 miles from its current primary operating location. In addition to these three organizations, the other four have also improved their business continuity capabilities to further reduce their vulnerability to such events. For example, one organization that when we last reported had established a backup data center more than 700 miles from its headquarters and primary operating location changed how it operates so that it now conducts its live critical business processing from the geographically distant site and uses its former primary processing site as its backup location. According to the staff of this organization, they transferred the operations to the more distant site because it is in an area they deemed at lower risk than the organization's headquarters and former processing location, which is in a downtown urban area that they believe is more exposed to terrorist activity. Although the organization may have reduced its risk of disruption from terrorist activities, its new primary location may be at greater risk of damage from natural disasters, such as hurricanes or tornados, than its headquarters location. When we last reported, another of the critical organizations had three locations at which it could conduct its critical processing operations: a primary operating site, a secondary site that could quickly take over processing if a disaster damaged the primary site, and a tertiary site that could become operational within 24 hours if the backup site were not available. Since then, this organization lowered its vulnerability to disruption by changing the configuration of its data centers to provide greater distance between its primary and secondary sites, increasing the distance between these sites by hundreds of miles. In addition, two organizations have increased their recovery capabilities by establishing sites hundreds of miles from the primary site that are capable of monitoring and operating critical networks at the primary location. These remote command centers give the organizations the ability to maintain or resume operations if their primary site became inaccessible but was not destroyed.
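The replication arrangement described earlier in this discussion is characterized in the report only at a high level, and the proprietary technology's internals are not public. Purely as an illustration of the general principle involved, the following sketch models asynchronous replication, in which the primary site applies writes immediately and ships them to the distant backup site in periodic batches, so that a sudden loss of the primary sites can cost at most the updates accumulated since the last shipment (consistent with the less-than-a-minute data loss figure cited above). All names and parameters here are hypothetical.

```python
"""Illustrative sketch of asynchronous replication between a primary site and a
geographically distant backup site. This is a simplified, hypothetical model,
not the proprietary technology described in the report."""

import time
from collections import deque


class AsyncReplicator:
    """Buffers writes at the primary site and ships them to the remote site on a
    fixed interval, so at most one interval's worth of updates can be lost."""

    def __init__(self, ship_interval_seconds=30):
        self.ship_interval = ship_interval_seconds  # bounds worst-case data loss
        self.pending = deque()   # updates not yet received by the remote site
        self.remote_log = []     # what the backup site has durably stored
        self.last_ship = time.monotonic()

    def write(self, record):
        """Apply a write locally (fast) and queue it for replication."""
        self.pending.append(record)
        if time.monotonic() - self.last_ship >= self.ship_interval:
            self.ship()

    def ship(self):
        """Transmit all pending updates to the remote site in one batch."""
        while self.pending:
            self.remote_log.append(self.pending.popleft())
        self.last_ship = time.monotonic()

    def failover_loss(self):
        """Updates that would be lost if the primary sites failed right now."""
        return list(self.pending)


if __name__ == "__main__":
    rep = AsyncReplicator(ship_interval_seconds=30)
    for i in range(5):
        rep.write(f"trade-{i}")
    print("at risk before shipment:", rep.failover_loss())
    rep.ship()
    print("at risk after shipment:", rep.failover_loss())
```

The sketch illustrates the usual tradeoff over long distances: synchronous replication would eliminate data loss entirely but add a network round trip to every transaction, whereas asynchronous shipment keeps the primary site fast while bounding the worst-case loss by the shipment interval.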
By establishing these dispersed operating capabilities, all the organizations have addressed another potential weakness—the concentration of staff in one location or a geographic area—that previously increased their vulnerability to a wide-scale disaster. When we last reported in 2004, several of the critical organizations faced greater risk that their operations could be disrupted by disasters because the staff they needed to perform their critical business operations were located in just one location or in multiple locations near each other. However, now all seven organizations have taken steps to ensure that they will have staff capable of performing their critical activities in the event of a wide-scale disaster, either by establishing backup operating locations or making other arrangements to have sufficient staff to conduct the organizations' critical operations. These operations include backup data-processing centers and alternative-site business operating centers that have staff who perform critical non-data-processing activities, such as assisting customers or performing activities requiring manual processing. The seven critical market organizations also have reduced the likelihood that their operations would be disrupted by disasters that affect their power or telecommunications services. For example, all the organizations have installed generators capable of supplying their operating sites with power if they lose power from their local utility. These organizations generally had fuel supplies on hand that would be sufficient to run these generators for 3 to 7 days. During the August 2003 power failure that affected the Northeast, all seven critical organizations successfully provided service to their customers and members without interruption. Similarly, the organizations all have taken steps to reduce the likelihood that they would lose their telecommunications service. For example, all the organizations had registered the circuits that carry their important telecommunications traffic with the National Communications System's Telecommunications Service Priority (TSP) program, which would provide increased priority for restoration of these key circuits in the event of a disruption. Several of the organizations also now increasingly receive information from their members through more resilient telecommunications networks. For example, the Secure Financial Transaction Infrastructure (SFTI) was created to provide a more reliable and "survivable" private communications network that links exchanges, clearing organizations, and other financial market participants. To ensure resiliency and eliminate single points of failure, SFTI employs redundant equipment throughout and carries data traffic over redundant fiber-optic rings that have geographically and physically diverse routes. To improve the resilience of the communications for clearing securities transactions, the Securely Managed and Reliable Technology (SMART) network was created to allow market participants to exchange information with clearing organizations over private high-bandwidth networks that automatically route traffic over alternate paths in the event that any part of the network is damaged. In addition, one of the critical organizations we reviewed formerly received data from its broker-dealer customers through direct connections to its data centers—often from just a single customer's location.
However, this organization now has a network configuration in which the customers connect at multiple points to a new redundant fiber-optic ring network, reducing the likelihood that customers would be unable to communicate with the organization. Moreover, the seven critical organizations have tested their business continuity capabilities and plans—although some assessed their backup arrangements more fully than others. Routinely using or testing recovery and resumption arrangements ensures that backup arrangements can perform critical operations and that all customers or others that must connect to an organization are able to do so. Some of the critical organizations have conducted very robust testing of their ability to operate from locations outside their primary location. For example, at least two of the critical organizations operated data centers that received all the data needed to process their operations and had run live processing for actual business days from their nonprimary locations. In contrast, another organization regularly tested the operational condition and connectivity of its equipment at its backup site and ran exercises with small numbers of staff at this site to simulate its critical activities, but had never attempted to conduct an actual business day from this backup location. One organization had conducted some live processing using the systems it would need if its primary location were damaged, but had not yet fully tested whether these systems had adequate capacity to process the organization's full operating volume of data. In recognition of increased concern about a pandemic influenza outbreak, the seven critical organizations also were in the process of developing business continuity plans to address the potential impacts of a pandemic on their operations, although only one had completed a formal plan. To determine elements that could be considered as part of business continuity planning for a pandemic, we identified various documents issued by private sector organizations, government bodies, and financial regulators. These included a paper issued by the Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security (FSSCC), which includes representatives of various financial market trade associations, market organizations, and others; the FSSCC pandemic paper outlined numerous issues that organizations should consider. They also included a paper issued by a risk and insurance services firm that identified actions to consider taking before, at the onset of, and throughout such an event. In addition, we reviewed issuances by U.S. banking regulators, as well as those from other U.S. and international organizations. By analyzing these documents, we identified four elements that we used to evaluate the seven critical financial market organizations' pandemic planning efforts: A preventive program to reduce the likelihood that an organization's operations will be affected, including monitoring of potential outbreaks, educating employees on the disease and how to minimize its transmission, and providing disinfectant soaps and hand sanitizers in the workplace. A formal plan that includes escalating responses to particular stages of an outbreak, such as first cases of humans contracting the disease overseas, first cases within the United States, and first cases within the organization itself.
Facilities, systems, or procedures that provide the organization the capability to continue its critical operations in the event that large numbers—as many as 40 percent by some estimates—of an organization's staff are unavailable for prolonged periods. Such procedures could include social distancing to minimize staff contact, teleworking, or conducting operations from alternative sites. A testing program to better ensure that the practices and capabilities that an organization implements to address a pandemic will be effective and allow it to continue its critical operations. The guidance that U.S. and international entities have issued also includes other elements that organizations could take into account to produce an effective business continuity plan for a pandemic, including developing appropriate compensation and sick leave policies and establishing communication mechanisms, such as hotlines, to aid in providing information to employees and customers. The seven critical organizations all were conducting activities to help them prepare business continuity plans to address pandemic risks. For example, one organization has begun to analyze which staff would be considered critical and how the organization could continue operations if as many as 70 percent of its total staff were not available—a higher percentage than some organizations are projecting could be affected. Staff at two of the organizations told us that they had begun training alternate staff to perform critical duties normally done by others. Staff at one of the organizations described conducting a "tabletop" exercise in which their staff discussed what actions they would take and what challenges they would face in a pandemic scenario. At the time we visited these organizations, only one of the seven had a fully developed plan for addressing pandemic threats in place, with detailed response plans for each business unit. Another organization had a draft plan in place, although the draft did not yet include information on how specific business functions would be maintained across varying absence levels. The other organizations, while not having completed formal plans, had gone through various planning efforts, such as verifying that staff can work from multiple locations and then expanding the number of communications channels available from remote locations as needed. Depending on how an influenza pandemic spreads, the impact on some of these organizations might be somewhat mitigated by their existing dispersed business continuity capabilities. However, health organizations have cautioned that with global airline travel available, any disease outbreak could occur quickly and spread widely within a short period, an occurrence that would reduce the protection that dispersed facilities provide. The seven critical market organizations have continued to implement physical security measures to reduce the potential for physical attacks on their facilities. To assess the actions taken by the critical organizations since our last report, we discussed security measures with these organizations and inspected those in place at their facilities. Based on these assessments, we found that the organizations had continued to improve their physical security. For example, one organization has installed barriers that create a fixed holding area for vehicles undergoing security checks before allowing them to approach its facility.
This same organization has reduced the likelihood that its facility will be damaged by bombs by installing thicker, more blast-resistant walls and glass. To further improve its security, another organization added a new armed security post to mitigate potential risks from nearby vehicular traffic and commercial sites and installed additional surveillance cameras capable of providing wider views of the area around its primary site. But some organizations continue to face challenges in limiting the potential for physical attacks on their facilities. For example, one organization is in the process of moving its primary and backup operations from its own secured facilities to sites that a contractor operates. Through inspection of one of these new facilities, we determined that it had various physical security measures in place, including a fenced perimeter and inspections of packages and visitors. However, this new site had less imposing barriers around it and was located closer to public roads than the organization's previous primary operating site. Several of the other organizations also had continuing physical security vulnerabilities at their primary sites, such as being located in multitenant buildings or not having the ability to limit vehicular traffic around their facilities. However, the risk of any of these new or remaining physical security vulnerabilities at the seven organizations' primary sites largely has been mitigated by each having implemented geographically dispersed capabilities for conducting their critical activities. The seven critical organizations also have continued to make progress in enhancing their information security. To assess the actions taken by the critical organizations since our last report, we reviewed documentation for any new systems, networks, and security measures at these organizations and discussed them with the organizations' staff. Based on these assessments, we determined that the seven organizations were continuing to implement sound information security practices, such as using firewalls or other controls to limit unauthorized access, expanding their use of systems to detect intrusions, conducting more extensive assessments of their systems' security vulnerabilities, and implementing the improvements we identified in our previous reviews. However, in some cases organizations have put in place new systems architectures that potentially introduce new vulnerabilities. As a result, we identified additional ways in which the organizations could improve their information security, measures that all the organizations either had begun implementing or were considering. Since our 2004 report, the banks and broker-dealers that are key participants in the U.S. securities markets have made considerable progress in improving their resiliency, but certain wide-scale disasters could significantly disrupt their ability to conduct trading activities. We spoke with six firms: four broker-dealers that conduct significant volumes of trading on U.S. securities markets and two banks that are responsible for the clearance and settlement activities necessary to ensure that securities ownership and payments are appropriately transferred.
If firms such as the six described above were unable to conduct the processing needed to clear and settle securities transactions after a disaster, the resulting failures to pay for and deliver securities could lead other firms to be unable to make subsequent payments or deliveries, resulting in a potential systemic financial crisis. In addition, if sufficient numbers of broker-dealers were not able to resume trading activities when appropriate, the ability of U.S. trading markets to function could be impaired. In response to expectations by financial regulators, since the 2001 attacks these broker-dealers and banks have improved the resiliency of their clearing and settling operations by increasing the geographic distance between the primary and backup sites that conduct such operations. For example, all six of the firms have established primary data centers in locations outside of New York City. In addition, one of these firms has established a new backup data center overseas. According to firm officials, all but one of these facilities are operational, with the last one to be completed by March 2007. Three of these firms have gone beyond regulators' expectations to establish a third data center that provides an additional level of backup for clearance and settlement activities. One firm has even established a fourth data center, and another has a fourth under construction. In addition, staff at all six firms told us that they routinely use or test their recovery and resumption arrangements to ensure that they can recover and resume their clearance and settlement activities within the time frames expected by the regulators. Although firms have strengthened the resiliency of their clearing and settling operations, their trading activities remain vulnerable to disruption because all key trading staff are still concentrated in one geographic area. To conduct trading, broker-dealers generally operate trading floors where their traders receive orders from customers and enter these into electronic systems for execution at an exchange, electronic market, or other venue. The firms process the information the trading systems produce at data centers. Based on our discussions with these broker-dealers, these firms have established multiple data centers, including centers outside the New York area. However, all these firms' key staff who trade U.S. stocks are located at trading floors in or near the New York City financial district. Since the attacks on September 11, two of these firms moved their trading floors from lower Manhattan to midtown, which may reduce the risk of a trading disruption following a localized attack or other disaster in lower Manhattan. But the stock traders still work in one relatively small geographic area and rely on some of the same infrastructure. For example, they share the same public transportation system. This concentration of traders poses a risk to trading activities because it could prevent firms from promptly resuming trading after a wide-scale physical disaster, a vulnerability that we initially noted in our 2004 report. (We discuss how SEC is addressing this risk later in this report.) Similarly, such staff are also at risk from a pandemic outbreak. Nevertheless, the firms we reviewed have taken a variety of steps to mitigate the risks to their ability to trade. For example, all firms have implemented backup trading floors, which would allow them to conduct their trading activities at an alternate site if their primary trading floors were unusable or inaccessible.
All of the firms have conducted some trading from their backup floors at least once, on occasions such as the 2004 Republican National Convention and the 2005 transit workers' strike (both of which reduced access to Manhattan). In addition, officials at one firm said that they have some ability to conduct trading in U.S. securities from an overseas location. According to SEC, other firms also are exploring the possibility of conducting such trading from overseas. However, some of the firm officials with whom we spoke said that they were reluctant to permanently split their trading staff between multiple locations for business reasons. For example, a firm that separates its trading staff could suffer losses in productivity, since traders could lose the immediate access to market information and institutional knowledge that is gained from the concentration of traders on a single trading floor. Similarly, all six firms that we spoke with have been working to integrate pandemic planning into their business continuity plans. For example, several of these firms have established internal committees or task forces to oversee their continuity planning for a pandemic. These internal committees have developed relationships with the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) as well as local public health authorities and have consulted with medical experts. Moreover, these firms have joined other market participants and financial regulators in numerous pandemic planning meetings and tabletop exercises since late 2005. Firm officials noted that pandemic planning involves new considerations and scenarios that had not been part of traditional business continuity planning. For example, traditional plans would address the loss of facilities but not the loss of staff; as a result, business continuity plans needed to be modified for a pandemic to deal with the potential reduction in staff able to work during the weeks, or even months, of a pandemic outbreak. Financial market participants, in conjunction with regulators and other organizations, have made various efforts to improve the overall resiliency of the financial sector. Their actions include industry-wide connectivity testing from backup locations, expert physical security assessments of selected financial market organizations, and exercises of various disaster scenarios that include financial market participants. Financial regulators also have been assisting and promoting the creation of regional coalitions that allow financial market participants to obtain information from and interact with government and law enforcement bodies during actual disasters. Although efforts to further improve the resiliency of the telecommunications infrastructure have identified additional challenges, public and private groups continue to work together to find potential solutions, including developing ways to allow organizations to map the physical routing of their circuits and analyzing how increased teleworking during a pandemic might increase demands on telecommunications network capacity. To provide assurance that securities market participants can perform critical activities in the event of a disaster, industry organizations have continued to conduct an annual industry-wide connectivity test.
The Securities Industry Association (SIA), together with the Bond Market Association, the Futures Industry Association, and the Financial Information Forum, led a test on October 14, 2006, the second year of this industry-wide effort. The objectives of the test were to (1) exercise and verify the ability of market participants to operate through an emergency using backup sites, recovery facilities, and backup communications capabilities across the industry; and (2) provide participants with an opportunity to exercise and check the ability of their backup sites to successfully transmit and receive communications between the backup sites of other market participants. More than 250 organizations, including broker-dealers, markets, service bureaus, and industry utilities, participated, with test participants representing more than 80 percent of normal market volume. In addition, new test components were added to the 2006 test, such as money markets and payment system processors. Test results showed an overall success rate of 95 percent for test connections. According to association officials who assisted with the test, none of the participating exchanges or firms experienced any significant complications, and when problems did arise, most were resolved quickly, allowing the test orders to be placed and processed. According to a Bond Market Association official, the test was very successful and gave the associations confidence that all facets of the industry would be able to operate effectively during emergencies. Some of the preliminary lessons learned from the 2006 test are that while industry participants have been adept at resolving technical issues related to market performance when they occur, firms still need to test their backup connections to market entities regularly and frequently. Furthermore, firms and market entities must ensure that they can reach employees with key technical knowledge during emergencies. In addition to tests within the financial markets community, cross-sector exercises have helped provide an important perspective on interdependencies across industries and how those dependencies can affect businesses' resiliency. Officials from Treasury and representatives of selected financial markets participated in two such efforts conducted by DHS. These tests—TOPOFF 3 (top officials) and Cyberstorm—were tabletop exercises meant to create lifelike scenarios of disasters that force participants to examine the effects of cross-sector dependencies in such catastrophes. In addition to participating in these tests, SIA and the Bond Market Association used TOPOFF 3 to test their crisis communications tools and techniques—the industry's emergency alert systems that notify participants to convene and join a series of conference calls. The purpose of the conference calls is to evaluate the condition of the firms on Wall Street, relate that status to regulatory bodies that would be considering early market closings or other measures to deal with a crisis, and then transmit those instructions back to the individual firms. SIA officials reported that the tests were successful and served to identify areas in which improvements were needed, such as ensuring that all contact numbers were up-to-date and making sure that the timing, length, and sequence of calls were realistic. According to Treasury officials, they have also sponsored several exercises for the financial services sector, including some that focus on avian flu.
These have been conducted with financial institution and local government representatives in various locations around the country. In addition to national cross-sector exercises, DHS has been assisting individual firms and organizations by conducting on-site physical security assessments of various financial market organizations. Members of the Risk Management Division at DHS conduct the assessments, which include a review of an organization's facility and physical security measures such as surveillance, perimeter, and intrusion technologies. DHS prepares a set of reports that vary by security classification and provides them, with findings and recommendations, to the organizations. DHS performed 19 of these assessments from fiscal years 2003 through 2006, with 21 planned for fiscal year 2007. Locations included primary facilities in multiple urban locations, as well as several key remote backup centers across the country. Financial regulators also have been promoting regional coalitions to improve information sharing and response during disasters. Financial market participants have formed coalitions in cities and across wider areas, such as states, that allow financial market organizations to obtain information from local government, law enforcement, and other first responder organizations during actual disasters. The financial sector in Chicago formed the first of these coalitions, known as ChicagoFIRST, which sends representatives to the local emergency response command center in the event of a disaster affecting that city. This allows the ChicagoFIRST representatives to obtain accurate and timely information about what actions governmental and other bodies are taking during the event. The representatives then share the information with financial market organizations to better allow them to take appropriate actions. Coalitions also can facilitate other information-sharing efforts. For example, in July 2004, ChicagoFIRST, the City of Chicago's Office of Emergency Management and Communications, and Treasury conducted a tabletop exercise for the local financial sector. The exercise provided an opportunity for Chicago's financial community and federal, state, and local government officials to practice crisis response protocols for simulated emergency scenarios. Based on the success of the ChicagoFIRST model, Treasury published a handbook to guide such efforts in December 2004. As of January 2006, the cities of Los Angeles, San Francisco, and Minneapolis and the State of Florida had formed similar local collaborative efforts. Financial market organizations also have participated in other information-sharing forums and benefited from federal dissemination of information and analyses. To assist in infrastructure protection issues, the Financial and Banking Information Infrastructure Committee (FBIIC), which includes representatives from a broad range of financial regulatory agencies, meets regularly to improve coordination and communication among financial regulators and enhance the resiliency of the financial sector. In addition, FSSCC, which includes representatives of the financial trade associations and other entities, provides one mechanism for sharing information relating to infrastructure protection among financial market participants. FSSCC works to help reinforce the financial services sector's resilience against terrorist attacks and other threats to the nation's financial infrastructure.
Formed in 2002, FSSCC acts as the private sector council that assists Treasury and DHS in addressing critical infrastructure protection issues within the banking and finance sector. FSSCC has published reports summarizing best practices and lessons learned for issues of common concern to the industry at large. Members of FSSCC also meet periodically with the financial regulators to share information about common concerns and challenges. Financial market organizations also have received consolidated information through other federal sources. For example, the Financial Services Information Sharing and Analysis Center (FS/ISAC) consolidates threat information for the sector. The financial services sector established FS/ISAC—and Treasury sponsored it—to encourage the sharing of information on physical and cyber security threats between the public and private sectors to protect critical infrastructure. Between 2004 and 2005, FS/ISAC's membership grew more than 200 percent, to more than 1,800 member organizations that receive alerts and other information directly and another 7,000 organizations that receive such information via an industry association. The alerts and information now reach 34 percent of the industry. FS/ISAC also conducts threat intelligence conference calls at the unclassified level every 2 weeks for members, with input from DHS. Treasury hosts a similar biweekly threat conference call with representatives of the financial regulators and DHS. Both sets of calls discuss recent physical and cyber threats, vulnerabilities, and incidents. The potential threat of a pandemic is another area in which regulators and market participants are working together to share information and increase overall preparedness. FBIIC created a working group to address pandemic flu issues that has been holding meetings among both FBIIC and FSSCC members. Treasury representatives also have participated in several working groups established by the Homeland Security Council to address pandemic flu issues. In addition, FSSCC issued a statement and issue paper on preparations for avian flu to provide guidance for financial institutions considering how to prepare for the potential of a serious influenza epidemic. The paper presents 31 key issues that financial institutions might consider in developing their plans. Some examples of the issues include the identification of critical operations (those needed for weeks or months, not days); methods of splitting and segregating staff; expanded use of tele- and videoconferencing; and coordination with local emergency management and public health organizations. In addition to publishing the statement, FSSCC formed an Infectious Disease Forum that is being led by the SIA on FSSCC's behalf. The group meets quarterly, including joint sessions with a similar pandemic working group run by federal regulators. The forum provides a venue for FSSCC members that have active avian flu working groups or are currently conducting research on this issue to collaborate and share information to prepare for a possible influenza pandemic or other infectious disease outbreak. FSSCC also provides additional information on pandemic issues on its website. Lastly, several U.S. financial services firms participated in a recent 6-week, market-wide pandemic exercise in the United Kingdom. The exercise ran in October and November 2006, with 70 organizations and about 3,500 staff from across the financial sector taking part. Officials from the U.S.
federal regulator community provided input into the scenario planning of the event. U.K. officials who ran the exercise stated in the summary report that an important next step would be to work with their international regulatory partners to ensure cross-border regulatory coordination—and thus that global financial markets will be able to continue operating in a pandemic. Since the 2001 attacks, financial regulators, market participants, and other organizations have engaged in various efforts to improve the resiliency of the telecommunications infrastructure upon which the markets depend, but clear resolutions to the various challenges have proved difficult to identify. As we reported in 2003, September 11 showed that such events can have significant effects on the telecommunications services that support the U.S. financial markets. Although some financial market participants attempted to ensure that they would not lose telecommunications service by contracting with more than one telecommunications carrier, the attacks revealed that multiple carriers' lines and circuits often traversed the same physical paths or relied on the same switching offices and thus were susceptible to damage from the same event. One way that financial market organizations have attempted to address this problem is by exploring the feasibility of mapping the physical paths that individual organizations' telecommunications circuits follow. However, completing such analyses has proved very time-consuming and expensive. According to a 2004 report by the President's National Security Telecommunications Advisory Committee (NSTAC), carriers would have to use labor-intensive, manual processes to ensure route diversity and monitor that condition on an ongoing basis. The NSTAC report further stated that guaranteeing that circuit routes would not be changed could make an organization's service less reliable because its circuits could lose the benefit of technologies that automatically reroute circuits in the event of facility failures. To assess the feasibility of mapping physical circuit routing, the Federal Reserve participated in the National Diversity Assurance Initiative—a joint project between the Federal Reserve and various telecommunications carriers—that the Alliance for Telecommunications Industry Solutions (ATIS) conducted. After an initial assessment of the circuits, the initiative concluded that an end-to-end, multicarrier assessment of telecommunications circuits could only be conducted manually, a very labor- and cost-intensive process. The members of the initiative concluded that attempting such an analysis for large numbers of circuits in multiple organizations would be very difficult. As a result, the ATIS report indicated that an automated system would likely have to be developed to more efficiently track circuits across multiple carriers and make end-to-end diversity assessments and assurance feasible on any larger scale. The report recommended a small-scale follow-up effort to determine the objectives and requirements for a system that could provide end-to-end diversity assurance in a multicarrier environment. According to the report, the scoping effort should attempt to identify the high-level requirements, cost estimates, and level of effort needed to develop and implement an automated circuit assurance solution.
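Neither the NSTAC nor the ATIS report, as summarized here, specifies how an automated diversity-assessment system would work. Purely as an illustrative sketch of the core check involved, the following models each circuit as the set of physical facilities (conduits, central offices, and the like) it traverses and flags circuit pairs that share any facility, since a shared facility is a single point of failure even when the circuits are leased from different carriers. The facility and circuit identifiers are invented for the example.

```python
"""Illustrative sketch of an automated circuit-diversity check. The circuits
and facility identifiers below are invented; a real assessment would draw on
physical routing data from multiple carriers."""


def shared_facilities(route_a, route_b):
    """Return the physical facilities (conduits, central offices, etc.) that
    two circuit routes have in common; each is a single point of failure."""
    return set(route_a) & set(route_b)


def assess_diversity(circuits):
    """Flag every pair of circuits that is not physically diverse."""
    findings = []
    ids = sorted(circuits)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            overlap = shared_facilities(circuits[a], circuits[b])
            if overlap:
                findings.append((a, b, overlap))
    return findings


if __name__ == "__main__":
    # Hypothetical circuits from two carriers; note that two circuits bought
    # for "diversity" happen to ride the same conduit under the same street.
    circuits = {
        "carrier1-ckt-001": ["CO-14", "conduit-broad-st", "CO-22"],
        "carrier2-ckt-907": ["CO-31", "conduit-broad-st", "CO-40"],
        "carrier2-ckt-912": ["CO-55", "conduit-river-rd", "CO-40"],
    }
    for a, b, overlap in assess_diversity(circuits):
        print(f"{a} and {b} share: {sorted(overlap)}")
```

The set intersection itself is trivial; as the reports emphasize, the difficulty lies in assembling and continually refreshing accurate physical routing data across carriers, particularly when carriers automatically reroute circuits after facility failures.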
Since this report was issued, the National Communications System (NCS) within DHS, which is responsible for administering the federal national security and emergency preparedness telecommunications programs, has agreed to lead an effort—the Diversity Assurance Analysis—to explore the potential for developing automated solutions to the circuit diversity problem. Telecommunications providers are also attempting to improve the resiliency of the infrastructure upon which the financial markets depend. As we previously reported, much of the disruption to voice and data communications services throughout lower Manhattan—including the financial district—that stemmed from the 2001 attacks occurred when one of the buildings in the World Trade Center complex collapsed into an adjacent telecommunications center, which served as a major local communications hub within the public network. Since then, the provider that operates this facility has been rebuilding portions that were damaged or lost in the attacks, using designs that provide greater resiliency and redundancy to its infrastructure in lower Manhattan. For example, the provider has reinforced the storage area for generator fuel with a protective wall and now routes the fuel through concrete-lined conduits. The provider also has updated parts of its network to use more resilient advanced switches and has installed more fiber-optic cables, which are smaller but can carry more message traffic. Financial market regulators and participants also have become concerned about the potential impact of a pandemic on telecommunications resiliency. As many financial market organizations have begun considering how best to ensure business continuity during a disease outbreak, many (including some of the broker-dealers that we contacted) considered having large numbers of their employees telecommute. However, concerns have been raised about whether the existing telecommunications networks would have adequate capacity to absorb the additional data and voice communications traffic. For example, all the calls that originate in individual neighborhoods usually must go through a single set of switches before reaching the larger-capacity and more redundant telecommunications network. It is not known whether the lines and switches serving individual neighborhoods or areas would have sufficient capacity, particularly since more people overall may be home during a pandemic as a result of school or workplace closings. For example, in a June 2006 testimony before Congress, an FSSCC official stated that the financial markets community did not have enough information to determine whether the nation's telecommunications infrastructure could support a rapid and explosive increase in users on specific networks. Consequently, FSSCC recommended that NSTAC be asked to research this issue and identify any recommendations to ensure that the telecommunications sector's networks were robust enough to meet other sectors' demands during such a potentially stressful time. In addition, in November 2006, FSSCC and telecommunications carriers agreed to collaborate on an NCS study about the potential impacts of a pandemic on telecommunications infrastructure. The study will focus on the technical feasibility of national policy and business continuity planning related to telecommuting in response to the pandemic influenza threat.
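The modeling behind such studies is not described in this report. Purely as an illustration of the kind of access-network ("last mile") arithmetic at issue, the following back-of-the-envelope sketch, using entirely hypothetical numbers, shows how a neighborhood aggregation point with ample capacity for normal residential use could be overwhelmed when telework and school closings sharply raise the share of households online at once.

```python
"""Back-of-the-envelope sketch of pandemic telework load on a neighborhood
access ("last mile") network. Every figure below is hypothetical."""

# Hypothetical neighborhood served by one access aggregation point.
households = 5_000
uplink_capacity_mbps = 2_000          # shared capacity of the aggregation point

# Normal weekday: a modest share of households active at once.
normal_active_share = 0.10
normal_mbps_per_household = 1.0

# Pandemic scenario: schools and workplaces closed, heavy telework, with
# VPN, voice, and video traffic raising per-household demand.
pandemic_active_share = 0.45
pandemic_mbps_per_household = 2.5


def peak_demand(active_share, mbps_per_household):
    """Aggregate simultaneous demand on the shared aggregation point."""
    return households * active_share * mbps_per_household


for label, share, per_hh in [
    ("normal", normal_active_share, normal_mbps_per_household),
    ("pandemic", pandemic_active_share, pandemic_mbps_per_household),
]:
    load = peak_demand(share, per_hh)
    print(f"{label}: {load:,.0f} Mbps -> {load / uplink_capacity_mbps:.0%} of capacity")

# Under these assumptions, normal load is 500 Mbps (25 percent of capacity),
# while pandemic load is 5,625 Mbps (roughly 280 percent): a last-mile
# bottleneck even if national backbone capacity is ample.
```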
According to an NCS official, previously completed models on this issue indicate that sufficient bandwidth to accommodate increased traffic during a pandemic appears to exist on a national level, but problems could be experienced at the individual neighborhood or commercial area connection points, which are the "first mile" or "last mile" of the connection to the national system. The financial market participants from FSSCC will assist NCS by contributing their business continuity telecommuting plans and estimated traffic load during a pandemic. These plans will be used in examining potential access network issues for the financial community and will serve as an example for other industries in predicting the potential change in traffic on access networks. Telecommunications carriers will provide estimates of potential surge traffic from the general public during a pandemic using related historical data (e.g., snowstorms). The financial community anticipates that the benefits from this study would include recommendations on mitigation measures that could be implemented either in advance or in real time for the various impact levels that might be encountered during a pandemic. Federal financial regulators have taken a variety of steps to strengthen the ability of the U.S. securities markets to recover from a wide-scale disaster. In 2003, regulators jointly issued business continuity guidance to strengthen the resiliency of key organizations and firms that clear and settle transactions in critical financial markets. The regulators expect these organizations to be able to recover and resume their clearing and settlement activities on the same business day on which a wide-scale disruption occurs. Since 2003, regulators have conducted examinations and determined that all of these organizations and firms have substantially implemented this guidance or will soon do so. SEC and banking regulators also have been reviewing the planning that organizations that participate in the securities markets are doing to address pandemics, but have not taken other actions that could improve readiness. SEC has issued expectations that markets be prepared to resume trading promptly after disasters, and its staff have taken steps to assure themselves that large market participants have taken sufficient actions to increase the likelihood that U.S. markets would resume trading. SEC staff also plan to do more focused reviews of broker-dealer trading readiness. SEC also has taken actions to improve the ARP program that it uses to oversee systems operations issues at the markets and clearing organizations, including increasing staffing levels and expertise and preparing a rule mandating compliance with the ARP program's tenets, for which it expects to seek approval during 2007. Since 2003, federal financial regulators have worked in a coordinated manner to assess and improve the resiliency of the U.S. securities markets with respect to clearance and settlement activities. As we noted in our last report, in April 2003, SEC, the Federal Reserve, and OCC jointly issued the Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System (Sound Practices). The Sound Practices paper establishes business continuity expectations for the clearance and settlement activities of organizations that support critical financial markets.
These organizations include the core clearing and settlement entities that process securities transactions (core organizations) and firms that play a significant role in critical financial markets (significant firms)—generally defined as those firms whose participation in the markets results in their consistently clearing or settling at least 5 percent of the value of the transactions in any of the product markets specified in the paper. The agencies expect these organizations to be able to recover and resume their clearing and settlement activities on the same business day on which a wide-scale disruption occurs. To achieve this goal, the organizations would maintain geographically dispersed facilities and resources and routinely use or test their recovery and resumption arrangements to ensure their effectiveness. Since issuing the paper, regulators have been conducting examinations of the organizations subject to these practices and have determined that those organizations have substantially achieved the capabilities envisioned in the Sound Practices paper or soon will do so. Specifically, SEC, the Federal Reserve, and OCC have reviewed firms' primary and backup data center arrangements, the amount of time that it takes firms to recover their operations at their backup sites, and firms' tests of their backup arrangements. In an April 2006 report to Congress, the regulators reported that the core organizations all have data and operations centers that are geographically remote from their primary sites. Regulators also noted that several of these organizations share or periodically shift their operations between their primary and backup sites; this practice prepares them to continue their operations in the event of a disruption at either location. Although the significant firms initially were expected to be capable of resuming their clearing operations within the time frames in the Sound Practices paper, regulators extended this deadline for some firms because of the work and costs associated with implementing these practices. For example, when the practices were issued in 2003, one firm had just completed a new data center only several miles away from its primary site; as a result, this firm requested—and was granted—additional time to establish a geographically remote data center. According to the representatives of regulators and firms with whom we spoke, all significant firms likely will have sufficiently dispersed sites capable of conducting critical clearing activities by March 2007 and thus will have substantially achieved the practices. In contrast with the situation in 2001, the regulators conclude that by increasing the geographic diversity of their operating locations, the core organizations and significant firms have significantly increased the likelihood that critical financial markets will be able to recover clearing and settlement activities fairly rapidly after a wide-scale disruption. With most firms having sites allowing them to recover their operations within the Sound Practices time frames, regulators are expecting firms to conduct meaningful tests of these capabilities in the near term. In January 2006, SEC, the Federal Reserve, and OCC issued a detailed letter to all core organizations and significant firms, outlining expectations for the testing strategies that organizations and firms should use to verify their implementation of the Sound Practices.
In this letter, regulators advised organizations and firms that they should have a comprehensive and risk-based testing approach that includes routine use or testing of recovery and resumption arrangements. In addition, the significant firms should assess whether their recovery arrangements are compatible with those of the core organizations. The fundamental testing concepts included in this letter are also being incorporated into a revised version of the business continuity planning guidance that the Federal Financial Institutions Examination Council—which issues guidance developed jointly by the various depository institution regulators—plans to issue later this year. Banking and securities regulators have been working to assist market participants' pandemic planning efforts, but have not taken other actions that could better assure that market participants adequately prepare for a pandemic. For example, the New York Stock Exchange (NYSE), an SRO that oversees its broker-dealer members, issued an information memorandum to provide guidance to member organizations about how to assess whether their business continuity and contingency plans would be suitable for a prolonged, widespread public health emergency. In a letter sent to securities exchanges and clearing organizations, the Acting Director of SEC's Market Regulation Division noted that these organizations should promote planning and preparations to keep the markets operating during a pandemic. This letter notes that while securities exchanges and clearing organizations already have extensive business continuity programs, such plans are usually designed to address a discrete event and therefore may prove inadequate to address the potentially long-lasting impact of a pandemic, which could include multiple waves of outbreaks lasting 6 to 8 weeks. It also notes that federal, state, or local governments may take actions, such as quarantines, that may make it more difficult to maintain critical operations using remote backup sites. Although acknowledging that developing such plans would be difficult, the letter notes that such planning is necessary for organizations to analyze options and prepare for how the markets may function if confronted with an outbreak. In addition to this letter, SEC staff also have been speaking at forums such as conferences and meetings with market participants, including industry trade associations and FSSCC, to share information about pandemic issues. Furthermore, SEC staff told us that they have also begun to review pandemic planning issues during inspections of exchanges, electronic markets, clearing organizations, and broker-dealers. In a joint notice from the regulators that oversee banks and thrifts, the agencies indicated that their institutions should review the U.S. government's national pandemic strategy to consider what actions may be appropriate for their particular situation and whether such actions should be included in their event response and contingency strategies. The bank regulators noted that financial institutions with a global presence and those considered critical to the financial system may have greater preparation and response challenges than other financial institutions. Bank regulatory officials told us that they have also begun reviewing pandemic planning in the context of their ongoing supervisory activities.
Lastly, SEC officials told us that they are beginning to work with the Securities Industry and Financial Markets Association to plan for a 4-week exercise beginning in September 2007 that will be modeled after the exercise conducted in the United Kingdom (discussed earlier in this report). This exercise will test how ready U.S. securities firms are to operate during a future flu pandemic. Although regulators have been actively addressing pandemic issues, they have not taken some additional actions that could improve readiness within the financial markets. For example, SEC and banking regulator staff told us that they are speaking about the need for financial institutions to prepare for a potential pandemic, and they have issued general statements indicating that market participants should develop plans and have provided issues to consider. However, none of these issuances specifically directed market participants to prepare plans likely to be effective in the midst of even the most severe outbreaks, which can result in significant levels of illness, deaths, transportation shutdowns, or constrained telecommunications capabilities. SEC staff told us that developing such plans is complicated. For example, important information bearing on the effectiveness of the plans is not yet fully known, such as when and where outbreaks will occur, how virulent they will be, and how quickly they will spread. In addition, the actions that governments may take in response to a pandemic also are not certain, such as whether quarantines would be imposed or schools would be closed. As a result, the SEC staff said that financial market organizations will need to have flexible plans that accommodate various scenarios and actions. Regulatory staff also noted that the U.S. government has yet to establish dates by which other sectors should have complete plans. State and local governments, or organizations in the power, telecommunications, transportation, or other sectors upon which the financial markets depend, may take a range of actions, such as quarantines, that could affect the viability of financial market organizations' pandemic plans. Clear expectations from regulators that these plans should address such scenarios would therefore provide greater assurance that all critical organizations and key market participants prepare sufficiently robust plans. Banking and securities regulators also have not set dates by which market organizations would be expected to have prepared at least an initial formal business continuity plan intended to ensure that critical operations can continue during a pandemic. Given that a pandemic could begin at any time, having complete formal plans in place beforehand would better ensure that financial market organizations could respond immediately. Completing such formal plans would allow exchanges, electronic markets, clearing organizations, broker-dealers, and banks to identify and begin acquiring any needed additional resources, such as medical supplies or computer hardware. In addition, completing initial plans soon would ensure that the plans are appropriately approved by organization management and would allow organizations to begin training employees and preparing communications for customers about possible changes in operating procedures during a pandemic. As part of preparing plans for pandemics, market participants have indicated that regulators should specify the types of regulatory relief that might be provided.
Several of the broker-dealers with whom we spoke told us that they anticipated needing some form of regulatory relief in a pandemic. For example, broker-dealer staff likely would be working from home during a pandemic due to health concerns, and as a result, regulators might have to grant some relief from requirements that broker-dealer personnel be directly supervised. NASD, which is an SRO for its broker-dealer members, issued a notice seeking members' input regarding what specific, short-term regulatory relief might be necessary to maintain market stability while still providing sufficient protections for investors.
In providing comments to NASD, two securities industry trade associations noted that such relief might be necessary to give broker-dealers the flexibility to operate when a large number of employees were not in their regular work space, either because they were sick, caring for others, or afraid to come into the office. While some employees might be able to work from nonregular locations, the trade associations noted that the requirement to register new temporary offices as new branch office locations might have to be suspended, as was done after the September 2001 attacks and Hurricane Katrina. Relief might also be needed to provide additional time for broker-dealers to submit personnel registrations and for those staff to fulfill continuing education requirements. Similarly, the associations noted that the time for conducting normal supervisory reviews should be extended during a pandemic because the personnel who perform such reviews likely would be needed to help their firms conduct actual business activities. According to their comment letter, regulatory relief would be necessary no matter what method of operation a broker-dealer chose, because the number of absent employees likely would cause difficulties in promptly settling transactions and would delay many other activities. The associations urged the regulators to cooperate in a multiregulator process to coordinate the granting of relief, and proposed that any trigger for the commencement of relief (such as a certain infection rate declared by the Centers for Disease Control and Prevention) take effect at the same time across the markets.
After collecting information on the types of regulatory relief that may be needed and the circumstances under which it may be needed, NASD officials indicated that they intend to work with SEC and other SROs to determine what relief may be appropriate. Similarly, to respond appropriately to such anticipated requests for regulatory relief, NYSE has filed a draft rule proposal with SEC seeking more authority to grant exemptive regulatory relief in the event of a pandemic. For example, under the proposed rule, NYSE could waive or extend the time otherwise applicable for complying with examination, training, or continuing education requirements.
Although willing to consider regulatory relief, SEC staff indicated that market participants should not expect wide-scale waivers of important securities regulatory requirements. Although SEC staff told us that they recognize that some form of regulatory relief would most likely be part of the process of enabling the financial system to keep operating under the trying conditions of a pandemic, they also noted that such relief should be one of the last stages in continuity planning and preparation, not the first.
Instead, they said that market participants should develop plans and capabilities for continuing operations during a pandemic that also would allow organizations to materially comply with important securities regulations. These areas included ensuring that broker-dealer personnel were properly supervised, that necessary records were prepared, and that price transparency for securities was maintained.
Although broker-dealers are not required to be able to resume operations after disasters, securities regulators have issued some guidance and conducted some assessments of firms' readiness to trade. As noted in our last report, SEC issued a policy statement in 2003 that established business continuity guidelines for the exchanges and electronic markets that match buy and sell orders for securities. This guidance expects these exchanges and markets to develop business continuity plans and be prepared to resume trading on the next business day following a wide-scale disaster. SEC examiners from the Automation Review Policy (ARP) program have been conducting examinations of the various markets subject to this policy statement to ensure that these entities had sufficient capabilities to conduct operations even if a wide-scale disaster damaged or rendered their primary operating sites inaccessible. Specifically, these SEC staff have determined that the two largest markets have implemented business continuity capabilities that likely would allow them to resume trading activities within one day of a disaster.
Although SEC issued some guidance addressing business continuity expectations for exchanges and other trading venues, the firms that trade on U.S. markets are not required to ensure that they can resume operations after disasters. According to SEC officials, no provisions in the securities laws explicitly require that firms conducting securities activities be operational under all circumstances, and resuming operations in the aftermath of a disaster would be a business decision left to the management of individual firms. Nevertheless, NYSE and NASD, which together oversee the majority of broker-dealers operating on U.S. markets, have issued rules that establish business continuity expectations for their members. These rules require broker-dealers to develop business continuity plans that address various areas, including data backup and recovery and alternate means for communicating with customers. Although these rules do not require firms to be capable of resuming operations in the event of a disaster, NYSE staff who conduct reviews of member firms told us that many firms are attempting to implement such capabilities for their own business reasons. If a firm is unable to develop sufficiently robust capabilities to resume trading, the NYSE and NASD rules require that it, at a minimum, be able to ensure that its customers have access to their funds and securities. For example, NASD staff who oversee member firms told us that some firms provide customers with contact information for their clearing organizations on customer account statements and firm Web sites. Based on reviews done by their examiners, NYSE and NASD officials reported that most of their member firms have implemented these business continuity planning rules, although larger firms generally were more likely to be compliant than smaller firms.
SEC has undertaken some assessments of the readiness of broker-dealers to resume trading in the event of disasters and plans to conduct more specific examinations of broker-dealers' capabilities in the future. In response to the recommendation in our last report that SEC fully analyze the readiness of the securities markets to recover from major disruptions, SEC staff told us that they have taken various actions to assess the ability of broker-dealers to resume trading promptly after disasters. Staff from SEC's Market Regulation Division and Office of Compliance Inspections and Examinations told us that, in consultation with other federal agencies and local emergency management officials in New York and Chicago, they have considered how a wide range of disaster scenarios would affect the securities markets. These scenarios include both a variety of man-made threats (including chemical, biological, and radiological terrorist events) and natural disasters (including a severe hurricane or a pandemic). According to SEC, the likely impact of these events will vary from scenario to scenario and from organization to organization. SEC staff also have had discussions with key broker-dealer market participants about their capabilities and plans for overcoming various disasters. For example, after publication of the Sound Practices paper, SEC staff conducted an analysis of the major firms to ascertain their willingness and ability to continue to trade in the event of a wide-scale disruption. SEC staff told us that these firms all expressed a commitment to continue to operate and have allocated substantial resources to enhance their resilience sufficiently to permit them to trade. Accordingly, SEC staff believe that market participants have increased their resiliency since September 11 and that, based on this work, sufficient numbers of firms and staff likely would be able to operate from various locations to allow U.S. markets to resume trading when appropriate.
During discussions we had with SEC staff as part of this review, staff responsible for conducting broker-dealer examinations told us that their efforts since the 2001 attacks have been focused more on ensuring that firms were improving their capabilities for recovering their clearance and settlement activities, as called for in the Sound Practices paper. However, based on our inquiries about trading readiness, SEC staff agreed that they could take further steps to assess broker-dealers' capabilities in this regard. As a result, they developed an expanded examination module to obtain more detailed information on firms' business continuity capabilities related to trading activities and have made this part of the existing examination guidance for SEC examiners. SEC officials told us that they expect to use this expanded guidance in the applicable broker-dealer examinations beginning with the 2007 cycle.
Since 2004, SEC has implemented various improvements to its ARP program, which oversees operations of automated and information technology systems at exchanges, clearing organizations, and electronic communications networks. In response to our past recommendations that SEC expand the level of staffing and resources committed to the ARP program, SEC hired four new staff members during 2005, increasing the program's staffing from 9 to 13. In addition, in response to our recommendation that SEC increase its overall technical expertise, all four of these newly hired staff have at least master's-level degrees in information security-related fields.
SEC has obtained funding to establish its own information security laboratory and is acquiring hardware that the agency can use to test systems and equipment being used by market participants and to help ARP staff learn about information security vulnerabilities and protection practices. To further improve the technical sophistication of the ARP examinations, SEC also began contracting with an information technology consulting firm to supplement its staff on information security reviews of the entities the ARP program oversees. During the last 2 years, staff from this consulting firm accompanied SEC staff on several reviews of the larger organizations, and our review of the resulting reports indicated that this firm's assistance has helped SEC expand the range and breadth of issues reviewed during those examinations.
In response to our prior concerns that SEC was not examining important market organizations frequently enough, staff responsible for the ARP program have changed their practices to increase how often they conduct reviews of the more critical organizations. While we had previously reported that the intervals between examinations for many of the critical organizations had been as much as 3 years, ARP staff, since implementing the new practice, have been reviewing annually the organizations they consider most important. Our analysis of ARP report data from fiscal years 2003 through 2006 confirmed that the critical organizations under SEC's jurisdiction were being reviewed at least annually. Furthermore, we reviewed the reports from the ARP examinations conducted between March 2004 and May 2006, and they indicate that the ARP staff generally were addressing all the key areas, including telecommunications, physical security, information security, and business continuity planning, during the examinations they conducted. For example, we reported in 2003 that few of the ARP program examinations addressed physical security issues. In contrast, during the 2004 through 2006 period we found that several of the organizations had hired an external consultant to review the adequacy of their physical security as a result of prior ARP staff recommendations. In addition, while we had reported that SEC staff sometimes had problems getting organizations to implement ARP staff recommendations, our review of the latest examinations indicated that the organizations SEC examined were implementing the ARP staff's recommendations appropriately. For example, in 6 of the 8 exams conducted in 2005, the examined organization had since taken actions sufficient to close all recommendations made previously.
Although SEC appears to be getting adequate cooperation from the entities that it reviews as part of the ARP program, SEC currently administers the ARP program under policy statements on a voluntary basis. Consistent with one of our prior recommendations, staff in SEC's Market Regulation Division told us that they continue to make progress in obtaining approval of a rule that would make adherence to the ARP program mandatory for affected organizations. SEC staff told us they have drafted a rule that would allow them to cite firms for rule violations if they fail to adhere to the expectations of the ARP program and to assess penalties similar to other SEC requirements. The draft rule has been undergoing a series of internal reviews, and staff expect to present it to the SEC Commissioners for issuance in spring 2007. Given the importance of the activities that the ARP program oversees to the U.S. securities markets, we continue to support making ARP a rule-based program to better assure that SEC staff have the necessary leverage to ensure compliance with any recommendations they deem necessary for the continued functioning of the markets.
Based on the series of reviews we conducted, the financial regulators and market participants have made considerable progress in the more than 5 years that have passed since September 11, 2001, in improving the security and resiliency of the U.S. securities markets against potential attacks and other disruptions. The critical exchanges and clearing organizations all have implemented increased physical security measures to reduce their vulnerability to physical attacks and have reduced the vulnerability of their key information systems and networks to cyber threats. Most significantly, all of the organizations now have the capability to conduct their operations from backup sites that are at a significant geographic distance from their primary locations, a move that greatly reduces their vulnerability to even wide-scale disasters that affect their primary operating locations. During this period, financial market regulators also have contributed to the increased security and resiliency of the markets by actively overseeing and encouraging market participants' efforts and by issuing guidance and conducting examinations.
Although considerable progress has been made, regulators, participants, and others remain appropriately focused on various ongoing challenges. The need to assess and incrementally improve physical and information security measures remains constant, as techniques for both attacking and protecting the critical assets of the financial markets will continue to evolve. With functioning telecommunications systems being vital to the markets' ability to operate, efforts by regulators, market participants, telecommunications providers, and other government bodies to improve the availability and resiliency of this key infrastructure are critical. Finally, although SEC staff have assured themselves that key broker-dealers also were acting to improve their resiliency, we are encouraged by SEC's recent plans to focus even greater attention on these efforts to ensure that sufficient numbers of such firms will be available to trade following future disasters.
Although banking and securities regulators have taken various actions to help the financial markets prepare for and respond to an influenza pandemic, additional actions could further improve the readiness of the financial markets to withstand this threat. To their credit, financial market organizations have begun considering a range of issues related to pandemics and are working with others to improve readiness, such as by assisting relevant government agencies with analyses of the capacity of the telecommunications infrastructure. However, at the time we visited them, we found that few of the critical financial market organizations had completed the development of formal plans specifying the actions they would take and the capabilities and resources they would need to continue their critical operations if significant numbers of their staff were ill or unavailable during a pandemic.
When faced with the recognition that attacks or natural disasters could significantly disrupt market operations, financial regulators responded by issuing guidance and expectations—in the Sound Practices paper and in other policy statements—that specified the actions that market participants should take and set deadlines by which these actions should be taken. Although a pandemic could similarly disrupt financial organizations' ability to operate, the regulators, while actively addressing pandemic issues, have not taken similar actions. Regulators indicated they are advising market participants in meetings and other forums to prepare plans that address the impacts of even a severe pandemic; however, these regulators have not issued any formal statements of these specific expectations. Without such official expectations, market participants may not prepare plans that are sufficiently robust to address the more serious scenarios, which could include widespread illnesses, deaths, transportation bans, or telecommunications bottlenecks. In addition, the regulators have not set a date by which financial organizations should have their pandemic plans completed. Having plans that fully meet regulatory expectations in place before an outbreak would allow organizations to provide training to their employees and conduct tests and exercises of their plans that could provide valuable insights into how to further improve their readiness. Given that the severity of a pandemic and the potential responses that governments or other organizations may take can vary, effective business continuity plans will have to be flexible, including a range of measures that financial market organizations can implement depending on circumstances, and these plans will have to be updated continually as new information becomes available. Having such plans in place soon would help organizations to identify any additional resources needed, obtain the appropriate management approvals, and prepare their staff and customers for changes in how an organization may operate during a pandemic. While governmental bodies have not taken similar actions for other key sectors of the U.S. economy, such action by regulators of the financial sector could demonstrate the leadership for which the sector is known and serve to spur other sectors to accelerate their progress as well.
To increase the likelihood that the securities markets will be able to function during a pandemic, we recommend that the Chairman, Federal Reserve; the Comptroller of the Currency; and the Chairman, SEC, consider taking additional actions to ensure that market participants adequately prepare for an outbreak, including issuing formal expectations that business continuity plans for a pandemic should include measures likely to be effective even during severe outbreaks, and setting a date by which market participants should have such plans in place.
We provided a draft of this report to the Federal Reserve, OCC, Treasury, and SEC for their review and comment. In a joint letter, the Federal Reserve's Staff Director for Management, the Comptroller of the Currency, and the Director of SEC's Market Regulation Division indicated that they shared our views on the importance of ensuring that the financial markets enhance their resiliency (see app. II). In addition, they acknowledged our recognition that the financial markets have made significant progress in increasing their ability to withstand wide-scale disasters.
Regarding our recommendation that these regulators consider taking additional actions on pandemic preparedness (including issuing specific instructions that organizations plan for severe pandemics and setting a date by which business continuity plans for pandemics should be completed), the officials noted that the critical organizations and key market participants subject to the Interagency Sound Practices paper are planning for a pandemic, including a severe outbreak, and are identifying measures to reduce their vulnerabilities to such events. They also noted that all of these organizations have been subject to supervisory review over the past several months and that these organizations' contingency plans generally address the four elements recommended in our report. The officials also indicated that their agencies have incorporated reviews of organizations' pandemic planning efforts into their ongoing supervision and oversight processes to ensure that the critical market organizations are updating their plans as new information becomes available and incorporating lessons learned from market exercises. In their letter, the officials indicated that they will follow up to ensure any weaknesses in the ongoing pandemic-planning process are appropriately addressed by the organizations and that, if the regulators find that organizations' efforts are lagging, they will consider taking additional actions, including those that we have suggested.
We are encouraged that the regulators plan to actively monitor the progress that critical organizations and key market participants are making to plan and prepare for a pandemic. Although the regulators maintain that organizations have prepared plans that address all expected elements, during the agency comments process we obtained the draft pandemic plan for one of the critical organizations. Based on our review, this organization's plan addressed some of the expected elements but did not include the specific procedures that would be used to ensure that its critical operations would continue during a pandemic. The organization indicated these procedures would be described in business unit plans that were still being prepared. In addition, we recently recontacted representatives at five of the six key market participants that we had reviewed, and while most indicated that they had received sufficient instruction from regulators regarding pandemic expectations, staff at one firm told us that, although they had attended meetings with regulators on pandemic issues, they had not received any guidance on specific scenarios to plan for, such as transportation shutdowns. At least some organizations may not yet be fully prepared or may fail to consider the pandemic scenarios associated with a severe outbreak, particularly if mitigating those scenarios is difficult and thus discourages or delays firms' willingness to fully prepare. We therefore continue to believe that having regulators give greater consideration to providing specific instructions to market participants and setting a date for completing pandemic continuity plans would increase the likelihood that organizations fully prepare and have adequate time to test and adjust any planned responses in advance of an actual pandemic. We also received technical comments from Federal Reserve, OCC, SEC, and Treasury staff that we incorporated where appropriate.
As agreed with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to other interested congressional committees and the Chairman, Federal Reserve; the Comptroller of the Currency; and the Chairman, SEC. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
The objective of this report is to describe the progress that financial market participants and regulators have made since our 2004 report in ensuring the security and resiliency of our securities markets. Specifically, we assessed (1) actions critical securities market organizations and key market participants have taken to improve their business continuity capabilities for recovering from physical or electronic attacks and the security measures they use to reduce their vulnerabilities to such events; (2) actions taken by financial market participants, telecommunications industry organizations, and others to improve the ability of participants to respond to future disasters and increase the resiliency of the telecommunications on which the markets depend; and (3) financial regulators' efforts to ensure the resiliency of the financial markets, including SEC's progress in improving its securities market organization oversight program.
To assess the actions that critical securities market organizations and key market participants took to improve their business continuity capabilities for recovering from physical or electronic attacks and the security measures they used to reduce their vulnerabilities to such events, we reviewed the actions of seven organizations whose ability to operate is critical to the overall functioning of the financial markets. To maintain the security and confidentiality of their proprietary information, we agreed with these organizations that our report would not discuss their efforts to address physical and information security risks and ensure business continuity in a way that could identify them. To assess how these organizations ensured they could resume operations after an attack or other disaster, we discussed their business continuity plans and capabilities with their staff and visited their facilities. We compared their plans to practices recommended for financial organizations, including bank regulatory guidance. Among the operational elements we considered were the existence and capabilities of backup facilities, whether the organizations had procedures to ensure the availability of critical personnel and telecommunications, and whether they completely tested their plans. In evaluating these organizations' backup facilities, we attempted to determine whether these facilities would allow the organizations to recover from damage to their primary sites or from damage or inaccessibility resulting from a wide-scale disaster. When possible, we directly observed the operation of these backup sites and reviewed relevant documentation, including backup facility test results that the organizations provided.
To assess what critical organizations had done to minimize the likelihood that physical attacks would disrupt their operations, our staff who routinely conduct physical security reviews at government agencies and private organizations conducted on-site "walkthroughs" of the critical organizations' facilities, reviewed their security policies and procedures, and met with key officials responsible for physical security to discuss these policies and procedures. We compared these policies and procedures with guidance that the U.S. Department of Justice developed for federal buildings. Based on these and other standards, we evaluated the physical security efforts across several key operational elements, including measures taken to secure perimeters, entryways, and interior areas and whether organizations had conducted various security planning activities.
To determine what the seven critical organizations did to reduce the risks to their operations from electronic attacks, our information technology security staff who routinely conduct information security reviews at government agencies and private organizations assessed progress made on issues identified in our past reviews and visited the critical organizations and reviewed documentation on their system and network architectures and configurations. We also compared their information security measures with those recommended for federal organizations in the Federal Information System Controls Audit Manual, other federal guidelines and standards, and various industry best practices and principles for electronic security. Using these standards, we attempted to determine, through discussions and document reviews, how these organizations had addressed various key operational elements for information security, including how they controlled access to their systems, how they detected intrusions, and what assessments of their systems' vulnerabilities they had performed. In addition to the critical organizations, we also obtained information from six large broker-dealers and banks that collectively represented a significant portion of trading and clearing volume on U.S. securities markets. At these organizations, we discussed their business continuity capabilities and reviewed documents where available.
To determine how financial market participants, telecommunications industry organizations, and others improved the ability of participants to respond to future disasters and increased the resiliency of the telecommunications on which the markets depend, we reviewed documents and interviewed staff from financial market regulators, industry associations, and government agencies responsible for protecting critical infrastructure. Finally, we met with managers at a large telecommunications carrier to review how they were rebuilding local infrastructure in New York City.
To assess financial regulators' efforts to ensure the resiliency of the financial markets, including SEC's progress in improving its oversight program, we reviewed relevant regulations and guidance and interviewed officials at SEC, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Department of the Treasury. We also collected data on the examinations the regulators had conducted of exchanges, clearing organizations, banks, and broker-dealers and reviewed the reports for the examinations completed from 2004 through 2006.
To assess the efforts of SROs to ensure financial market resiliency—including the New York Stock Exchange (NYSE) and NASD, which are responsible for overseeing their broker-dealer members—we reviewed SRO rules, interviewed NYSE and NASD officials, and reviewed the results of NYSE and NASD business continuity examinations of member firms. We also discussed initiatives to improve responses to future crises and improve the resiliency of the financial sector and its critical telecommunications services with representatives of industry trade groups, including the Bond Market Association, the Securities Industry Association, and ChicagoFIRST—a non-profit association that addresses homeland security and emergency management issues affecting Chicago’s financial institutions. We performed our work from April 2006 to February 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Cody Goebel, Assistant Director; Edward Alexander; Gwenetta Blackwell Greer; Mark Canter; Lon Chin; West Coile; Caitlin Croake; Kirk Daubenspeck; Kristeen McLain; Angela Pun; Susan Ragland; and Barbara Roesmann made key contributions to this report.
This is GAO's third report since the September 11 terrorist attacks that assesses progress that market participants and regulators have made to ensure the security and resiliency of our securities markets. This report examined (1) actions taken to improve the markets' capabilities to prevent and recover from attacks; (2) actions taken to improve disaster response and increase telecommunications resiliency; and (3) financial regulators' efforts to ensure market resiliency. GAO inspected physical and electronic security measures and business continuity capabilities using regulatory, government, and industry-established criteria and discussed improvement efforts with broker-dealers, banks, regulators, telecommunications carriers, and trade associations.
The critical securities markets organizations GAO reviewed have acted to significantly reduce the likelihood of physical disasters disrupting the functioning of U.S. securities markets. As of January 2007, the seven critical exchanges, markets, clearing organizations, and payment processors GAO reviewed have the capability of performing their critical functions at sites that are geographically dispersed from their primary sites. These organizations were also preparing plans to reduce the likelihood that a disease pandemic would disrupt their critical operations, although not all had fully completed such efforts. They also improved their physical and information security measures, including by taking actions that GAO identified during this review. Although key securities trading staff remain concentrated in single locations, the broker-dealers and banks providing clearing services that account for significant trading volumes and that GAO reviewed have increased the distances between their sites for primary and backup operations for clearance and settlement activities and established dispersed backup trading locations.
Various private and public sector groups continued to enhance the preparedness of the financial sector, although resolving vulnerabilities in the telecommunications infrastructure remains a challenge. Securities industry organizations have continued to conduct annual industrywide tests of financial market participants' backup site operating capabilities, and key trading and clearing organizations are increasingly using communications networks that are less vulnerable to disruption to transmit information. After attempts to assist individual financial market participants in determining whether their own telecommunications lines were routed through single paths or switches proved difficult, regulators are assisting efforts to develop automated systems for identifying circuit paths. In response to concerns over whether the telecommunications infrastructure can absorb the increased demand likely to result from large numbers of organizations and individuals seeking to telecommute during a pandemic, financial regulators and market participants are assisting government efforts to model such events and develop potential solutions.
To improve market resiliency, financial regulators established goals for prompt recovery of critical clearing activities after disasters and have been conducting examinations to ensure market participants' compliance. Securities regulators also set goals and are examining securities markets' readiness to resume trading and plan to do more focused reviews of individual broker-dealer capabilities.
The Securities and Exchange Commission (SEC) also has improved its program for overseeing operations issues at market and clearing organizations, including increasing its staffing levels and expertise. Securities and banking regulators have been actively addressing pandemic issues, but they could better ensure that market participants prepare complete plans and have sufficient time to train employees and test these plans by providing formal expectations that plans address even severe outbreaks and by setting dates for completing such plans.
Bankruptcy is a federal court procedure conducted under the U.S. Bankruptcy Code (the Code). The goals of bankruptcy are to give individuals and businesses a "fresh start" by eliminating or restructuring debts they cannot fully repay and to help creditors receive some payment in an equitable manner. The filing of a voluntary bankruptcy petition operates as an "automatic stay" that generally stops lawsuits, foreclosures, and most other collection activities against the debtor, allowing the debtor time to eliminate or restructure its debts. In bankruptcy, equitable treatment of creditors means that all creditors with substantially similar claims shall be classified similarly and receive the same treatment. For example, a class of secured creditors—those with liens or other secured claims against the debtor's property—will receive similar treatment. Secured creditors are more likely to get some debt repaid than general unsecured creditors, and creditors generally receive payment of their debts before shareholders receive any return of their equity in the failed company.
Business debtors that are eligible for protection under the Code may qualify for liquidation, governed primarily by Chapter 7 of the Code, or reorganization, governed by Chapter 11. Proceedings under both Chapters 7 and 11 can be voluntary (initiated by the debtor) or involuntary (generally initiated by at least three creditors). However, in an involuntary proceeding, the debtor can defend against the proceeding, including presenting objections within 21 days of being served the summons of the proceeding. The judge subsequently decides whether to grant the creditors' request and permit the bankruptcy to proceed, dismiss the request, or enter any other appropriate order.
A reorganization proceeding under Chapter 11 allows debtors, such as commercial enterprises, to continue some or all of their operations as a way to satisfy creditor claims. The debtor typically remains in control of its assets and is called a debtor-in-possession (DIP). The court also, under certain circumstances, can direct the U.S. Trustee to appoint a Chapter 11 trustee to take over the affairs of the debtor. As shown in figure 1, a firm going through a Chapter 11 bankruptcy generally will pass through several stages. Each stage of the Chapter 11 process has key attributes:
First-day motions. The most common first-day motions relate to the continued operation of the debtor's business and involve matters such as requests to use cash collateral—liquid assets on which secured creditors have a lien or claim—and obtaining financing, if any. They may include a motion to pay the prebankruptcy claims of critical vendors—those deemed vital to the debtor's continued business operations.
Disclosure. The disclosure statement filed after the bankruptcy petition must include information on the debtor's assets, liabilities, and business affairs sufficient to enable creditors to make informed judgments about how to vote on the debtor's plan of reorganization and must be approved by the bankruptcy court.
Plan of Reorganization. A debtor has an exclusive right to file a plan of reorganization within the first 120 days of bankruptcy. The court may not confirm the plan unless a sufficient proportion of allowed creditors has accepted the plan or would not be impaired by the plan. The court's approval also depends on whether there are dissenting classes of creditors.
If a plan has not been filed by the debtor within 120 days or accepted by a sufficient number of creditors after 180 days, any interested party—including creditors—may file a plan. The plan divides creditors into classes, prioritizing payments to creditors.
Reorganization. Possible Chapter 11 outcomes, which can be used in combination, include (1) sale of the company (in whole or in part), which is sometimes called a section 363 sale because that section of the Code applies to sales that are free and clear of creditor claims and interests; (2) liquidation of the company's assets with the approval of the court through means other than a 363 sale; and (3) actual reorganization of the company, in which it emerges from bankruptcy with new contractual rights and obligations that replace or supersede those it had before filing for bankruptcy.
The debtor, creditors, trustee, or other interested parties may initiate adversary proceedings—in effect, a lawsuit within the bankruptcy case—to preserve or recover money or property, to subordinate a claim of another creditor to their own claims, or for similar reasons. Furthermore, the Chapter 11 trustee or others may bring a preference action (a type of avoidance action) challenging certain payments made by a debtor to a creditor, generally within 90 days prior to the bankruptcy filing. In addition, avoidance actions generally can be brought on transfers made within 2 years prior to a bankruptcy if the transfers are determined to be fraudulent. As such, an avoidance action can question a payment as a preferential or fraudulent transfer of assets and require the payment to be returned to the debtor.
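To make these lookback periods concrete, the short sketch below classifies a transfer against the general 90-day preference window and 2-year fraudulent-transfer reach-back described above. It is an illustrative aid only: the dates and the helper function are hypothetical, and an actual avoidance analysis turns on many statutory elements beyond timing.

from datetime import date, timedelta

def lookback_windows(petition_date, transfer_date):
    """Return which general avoidance lookback windows a transfer falls within.

    Illustrative sketch only; real preference and fraudulent-transfer analysis
    involves statutory elements beyond the timing checked here.
    """
    # Preference window: generally 90 days before the bankruptcy filing.
    in_preference_window = (
        petition_date - timedelta(days=90) <= transfer_date <= petition_date
    )
    # Fraudulent-transfer reach-back: generally 2 years before the filing.
    # The 2-year span is approximated with replace() to keep the sketch simple.
    two_years_before = petition_date.replace(year=petition_date.year - 2)
    in_fraud_window = two_years_before <= transfer_date <= petition_date
    return in_preference_window, in_fraud_window

# Example: a payment made 60 days before the petition falls in both windows.
pref, fraud = lookback_windows(date(2012, 6, 1), date(2012, 4, 2))
print(f"preference window: {pref}, fraudulent-transfer window: {fraud}")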
Large, complex financial companies that are eligible to file for bankruptcy generally file under Chapter 11 of the Code. Such companies operating in the United States engage in a broad range of financial services, including commercial banking, investment banking, securities and commodities trading, derivatives transactions, and insurance. Many of them are organized under both U.S. and foreign laws. The U.S. legal structure is frequently premised upon the ownership by a parent holding company of various regulated subsidiaries (such as depository institutions, insurance companies, broker-dealers, and commodity brokers) and other nonregulated subsidiaries that engage in a variety of financial activities. Many of these businesses have centralized business lines and operations that may be housed in a holding company or in one or more subsidiaries. Smaller banking institutions also are organized as holding companies, but many of these hold few, if any, assets outside a depository institution and generally engage in a narrower range of activities.
Certain financial institutions may not file as debtors under the Code, and other entities face special restrictions in using the Code:
Insured depository institutions. Under the Federal Deposit Insurance Act, FDIC serves as the conservator or receiver for insured depository institutions placed into conservatorship or receivership under applicable law.
Insurance companies. Insurers generally are subject to oversight by state insurance commissioners, who have the authority to place them into conservatorship, rehabilitation, or receivership.
Broker-dealers. Broker-dealers can be liquidated under the Securities Investor Protection Act (SIPA) or under a special subchapter of Chapter 7 of the Code. However, broker-dealers may not file for reorganization under Chapter 11.
Commodity brokers. Commodity brokers, also known as futures commission merchants, are restricted to using only a special subchapter of Chapter 7 for bankruptcy relief.
Regulators often play a role in financial company bankruptcies. With the exception of CFTC and SEC, the Code does not explicitly name federal financial regulators as parties in interest with a right to be heard before the court. In practice, however, regulators frequently appear before the court in financial company bankruptcies. For example, FDIC, as receiver of failed insured depository institutions, typically participates in bankruptcies of bank holding companies only in the limited role of creditor. CFTC has the express right to be heard and raise any issues in a case under Chapter 7, and SEC has the same rights in a case under Chapter 11. SEC may become involved in a bankruptcy particularly if there are issues related to disclosure or the issuance of new securities. SEC and CFTC also are involved in Chapter 7 bankruptcies of broker-dealers and commodity brokers. In the event of a broker-dealer liquidation pursuant to SIPA, the bankruptcy court retains jurisdiction over the case, and a trustee, selected by the Securities Investor Protection Corporation (SIPC), typically administers the case. SEC may join any SIPA proceeding as a party.
The Code does not restrict the federal government from providing DIP financing to a firm in bankruptcy, and in certain cases it has provided such funding, as it did in the bankruptcies of General Motors and Chrysler with financing under the Troubled Asset Relief Program (TARP). The authority to make new financial commitments under TARP terminated on October 3, 2010. In July 2010, the Dodd-Frank Act amended section 13(3) of the Federal Reserve Act to prohibit the establishment of an emergency lending program or facility for the purpose of assisting a single and specific company to avoid bankruptcy. Nevertheless, the Federal Reserve may design emergency lending programs or facilities for the purpose of providing liquidity to the financial system. The federal government also has provided financial support to companies that later declared bankruptcy. For example, CIT Group, Inc. received funding from TARP in 2008, subsequently declared bankruptcy under Chapter 11 in 2009, and was reorganized.
Although the automatic stay generally preserves assets and prevents creditors from taking company assets in payment of debts before a case is resolved and assets are distributed in a systematic way, it is subject to exceptions, one of which can be particularly important in a financial institution bankruptcy. Commonly referred to as a safe harbor, this exception pertains to certain financial and derivative contracts, often referred to as qualified financial contracts (QFC). The types of contracts eligible for the safe harbors are defined in the Code. They include derivative financial products, such as forward contracts and swap agreements, that financial companies (and certain individuals and nonfinancial companies) use to hedge against losses from other transactions or to speculate on the likelihood of future economic developments. Repurchase agreements, collateralized instruments that provide short-term financing for financial companies and others, also generally receive safe-harbor treatment. Safe-harbor treatment was first added to the Code in 1982 for forward contracts, commodity contracts, and securities contracts.
In 2005, the Code's definition of repurchase agreements was expanded to include, among other things, agreements for the transfer of mortgage-related securities, mortgage loans, interests in mortgage-related securities or mortgage loans, and government securities issued by countries that are members of the Organisation for Economic Co-operation and Development, thereby expanding the scope of contracts subject to safe-harbor treatment. According to the legislative history, the purpose of these safe harbors is to maintain market liquidity and reduce systemic risk, which we define as the risk that the failure of one large institution would cause other companies to fail or that a market event could broadly affect the financial system rather than just one or a few companies.
Under the safe-harbor provisions, most counterparties that entered into a qualifying transaction with the debtor may exercise certain contractual rights even if doing so would otherwise violate the automatic stay. In the event of insolvency or the commencement of bankruptcy proceedings, the nondefaulting party in a contract may liquidate, terminate, or accelerate the contract and may offset (net) any termination value, payment amount, or other transfer obligation arising under the contract when the debtor files for bankruptcy. That is, nondefaulting counterparties generally subtract what they owe the bankrupt counterparty from what that counterparty owes them (netting), often across multiple contracts. If the result is positive, the nondefaulting counterparties can sell any collateral they are holding to offset what the bankrupt entity owes them. If that does not fully settle what they are owed, they are treated as unsecured creditors for the remainder in any final liquidation or reorganization.
Safe-harbor provisions also generally exempt certain payments made under financial contracts from a preference action seeking to recover any payment made by a debtor to a creditor generally within 90 days of filing for bankruptcy. In addition, they exempt fraudulent transfers made to financial contract counterparties generally within 2 years prior to a bankruptcy unless the payments are determined to have been intentionally fraudulent. Trustees cannot question payments made in connection with these contracts as preferential or fraudulent transfers of assets and cannot require the payments to be returned to the debtor. See appendix III for more information on the current safe-harbor treatment for derivative and repurchase agreement contracts.
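The netting arithmetic described above can be illustrated with a minimal sketch. The contract values, collateral amount, and function name below are hypothetical assumptions for illustration, not figures from this report or terms of any actual agreement.

def close_out_netting(termination_values, collateral_held):
    """Net termination values across a debtor's contracts with one counterparty.

    Illustrative sketch only. termination_values are from the nondefaulting
    party's perspective (positive means the bankrupt debtor owes that party);
    collateral_held is the value of debtor collateral the party holds.
    Returns (net_claim, collateral_applied, unsecured_claim).
    """
    net_claim = sum(termination_values)  # net across all contracts
    if net_claim <= 0:
        # On a net basis the counterparty owes the estate; no claim arises.
        return 0.0, 0.0, 0.0
    collateral_applied = min(net_claim, collateral_held)  # liquidate collateral
    unsecured_claim = net_claim - collateral_applied  # remainder ranks unsecured
    return net_claim, collateral_applied, unsecured_claim

# Example: two contracts net to a $40 million claim; $25 million of collateral
# is applied, leaving a $15 million unsecured claim in the bankruptcy.
net, applied, unsecured = close_out_netting([100e6, -60e6], 25e6)
print(f"net ${net/1e6:.0f}m, collateral ${applied/1e6:.0f}m, "
      f"unsecured ${unsecured/1e6:.0f}m")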
Experts at our roundtables evaluated proposals to change the roles of regulators in financial company bankruptcies. Specifically, they discussed proposals to require firms to notify and consult with regulators prior to a bankruptcy; allow regulators to commence an involuntary bankruptcy; provide regulators with standing or a right to be heard in bankruptcy court; and have regulators determine how subsidiaries might be consolidated in a bankruptcy. The experts noted that the proposals could have varying impacts on the bankruptcy process. For example, they viewed most of the proposals as having limited impact because regulators already have similar roles in bankruptcies, whereas efforts to consolidate subsidiaries in a bankruptcy would undermine key legal and regulatory constructs.
Although experts broadly supported regulatory involvement in financial company bankruptcies, they said the proposed changes raise several implementation issues, such as determining the number of days prior to a bankruptcy that a company would be required to notify regulators and which regulator(s) to notify. As a result, the proposals require further consideration. FSOC, which is charged with identifying and responding to risks to financial stability that could arise from the failure of large financial companies, has been identified in some proposals as a regulator that should be notified. However, FSOC has not yet considered the implications of changes to the role of regulators in the bankruptcies of financial companies.
Several proposals have been made by financial and legal experts, as well as government officials, to further involve regulators in financial company bankruptcies. The experts at our first roundtable discussed four such proposals we identified in our 2011 study:
Require debtors to notify and consult with regulators (primary, functional, Financial Stability Oversight Council, foreign, or other) in advance of filing for bankruptcy. Bankruptcy-related proposals introduced in the 111th Congress included a notification period. In prior work, we found that the notice period was intended to provide the regulator with some time to facilitate actions to minimize the systemic impact of the bankruptcy. During that time, the regulator might be able to find ways to maintain critical functions, facilitate an asset sale, identify potential creditors that would provide financing for the debtor, or determine if a proceeding under OLA would be more appropriate. This extra time for preparation could help to maintain the value of the institution and reduce systemic disruptions to the wider economy.
Allow regulators to commence an involuntary bankruptcy if the firm is insolvent or in imminent danger of becoming insolvent. This was part of the Hoover Institution resolution project group's proposal to create a separate bankruptcy chapter in the Code—Chapter 14—for large financial companies. The authors of that proposed chapter noted that under the existing Code, an involuntary bankruptcy proceeding can commence when a firm generally is not paying its debts as they become due, unless the debts are subject to a legitimate dispute. For large financial companies, allowing involuntary bankruptcies in response to balance sheet insolvency may allow regulators to initiate a bankruptcy at a time when they could still limit the spread of damage to other financial companies. The Chapter 14 proposal specifically provides primary regulators the power to commence an involuntary case against a financial company in the event that the firm's assets are less than its liabilities, at fair valuation, or the firm has unreasonably small capital.
Allow regulators of the debtor or its subsidiaries to have standing or a right to be heard in the courts to raise issues relative to regulation. Proposals introduced in the 111th Congress contained a provision to allow certain financial regulators the right to be heard during a bankruptcy case. The proposals granted the functional regulator, Financial Stability Oversight Council, Federal Reserve, Treasury, and any agency charged with administering a nonbankruptcy insolvency regime for any component of the debtor the right to be heard on any issue in a bankruptcy case.
Experts have contended that regulated institutions have more complicated legal structures and products than other companies; thus, having regulatory expertise available would provide more timely information to the judge and could lead to resolutions that better preserve asset value.
Consider the role of regulators in determining what subsidiaries should be included in a bankruptcy proceeding and the extent to which complex firms might be consolidated in bankruptcy. This proposal would give regulators a role in determining whether the court should consider the filing of a financial company as a whole under processes similar to the doctrine of substantive consolidation—a rarely used procedure. In substantive consolidation, the intercompany liabilities of related companies are eliminated, the assets of these companies are pooled, and the companies' liabilities to third parties are paid from the single pool of assets. The proposal also would give regulators a role in determining whether existing bankruptcy exclusions for insurance companies, broker-dealers, or commodity brokers should be maintained. The Hoover Institution resolution project group noted that these exclusions can complicate the resolution of a major financial institution because the bankruptcy court can deal only with pieces of the firm.
The experts at the first roundtable generally supported three of the four proposed changes to the role of regulators in bankruptcy proceedings but noted that these proposals might have limited effects. None of the experts who responded to written questions indicated that requiring notice and consultation with regulators or granting regulators a right to be heard in bankruptcy court would greatly change the existing bankruptcy process. The experts noted that regulators already play these roles in financial company bankruptcies. In response to the proposal to require notice to regulators, the experts generally agreed that regulators and financial companies usually have a great deal of communication and involvement, particularly when an institution is experiencing financial difficulties. One expert worried that requiring notice to the regulator before filing for bankruptcy might allow regulators to prevent the debtor from filing and adversely affect recoveries for creditors.
In relation to regulatory authority to compel involuntary filings, the experts who specifically addressed this proposal said that regulators already have ways of forcing a financial company to file for bankruptcy through their existing regulatory powers. A few experts said that regulators can use the threat of placing the firm into FDIC receivership under OLA if the firm does not file voluntarily for bankruptcy. One expert expressed the view that once living wills (the resolution plans required of certain large financial companies under the Dodd-Frank Act) are in place, regulators may compel a financial company to execute its resolution plan by filing for voluntary bankruptcy. Regulators also can take other actions. For example, under the Dodd-Frank Act, the Federal Reserve and FDIC may jointly take corrective action, including ultimately requiring the divestiture of certain assets, if they jointly determine that a firm has not been able to submit a plan that meets the statutory criteria. Under SEC and CFTC rules, an undercapitalized securities broker-dealer or commodity broker cannot operate and must therefore be liquidated. One expert with whom we spoke said that even if regulators were given an explicit right to place a firm in involuntary bankruptcy, they would be unlikely to use that authority.
In response to the proposal to give regulators an explicit right to be heard, experts who addressed the issue said regulators are routinely heard by the court in bankruptcy proceedings. As noted previously, SEC and CFTC already have legal standing in some cases. Court officials said they were not aware of an instance in which a regulator was denied the right to be heard by the court. However, experts also said making this an express right might have benefits, which we discuss later in this report.
Although experts favored most of the regulatory proposals, they were opposed to having regulators decide whether a firm should be resolved on a consolidated basis and noted that such a change would undermine key legal and regulatory constructs. One expert noted that the idea undermined the concept of corporate separateness for subsidiaries. Corporate separateness is generally the principle that a parent corporation is not liable for actions taken by its subsidiaries. Another expert noted that encouraging substantive consolidation as determined by the regulator could have a negative impact on the predictability and transparency of the bankruptcy process, detracting from the orderliness and effectiveness of that process. A third expert noted that treating the legal entities of a financial company in bankruptcy on a consolidated basis would conflict with the U.S. regulatory structure, which is designed around separate legal entities, such as depository institutions, broker-dealers, and insurance companies, even though companies often manage themselves along business lines that cut across legal entities. A regulatory expert said that removing the bankruptcy exclusions for securities broker-dealers and commodity brokers could undermine the purpose of the regulatory construct applied to those entities and the ability of regulators to protect customers' assets. Another expert noted that overriding state insurance regulators could lead to intensive litigation. Additionally, NAIC and state insurance officials said that the priority structure in bankruptcy is inappropriate for insurers because the primary goal in the resolution of an insurance company is to protect policyholders. For this reason, policyholders generally receive priority over other creditors in an insurance receivership, except for claims supported by collateral.
Experts at our roundtables also broadly discussed the proposals in relation to criteria for orderly and effective bankruptcies (including minimizing systemic risk and promoting due process). Most fundamentally, these experts had differing views on whether bankruptcy, as currently structured, is an appropriate vehicle for minimizing systemic risk. Some participants at the roundtable raised issues about whether the court could act quickly enough to stem systemic spillovers from the debtor company to other companies and markets. They also noted other potential trade-offs. For example, to act quickly in cases involving large and complex financial companies, courts might need to shorten notice periods and limit parties' right to be heard, which could compromise due process and creditor rights. Similarly, one participant said that if the goal was to turn the Code into an effective resolution tool, the fundamental balance of power among debtor, creditor, and regulator might need to be altered. Another was concerned that if regulators become more involved in bankruptcy cases, courts might defer to them over other parties, undermining the ability of creditors to argue their cases.
However, a legal expert at the roundtable doubted that the courts would be overly deferential to regulators. Another legal expert noted that regulators could enhance due process by educating the court and providing a method for verifying information provided by the financial institution. One of these participants noted that standards for an involuntary bankruptcy initiated by the regulator might require a new definition of insolvency that would consider both regulatory and systemic interests. Nevertheless, many of the experts indicated that regulatory involvement in bankruptcies was consistent with minimizing systemic risk. These experts said that regulators do and should have influence in times of crisis and that commencing a bankruptcy without regulatory involvement could be problematic. Additionally, some of the experts at the roundtable noted that regulators ought to have the power to compel a financial firm to file for bankruptcy because, as one regulatory expert said, allowing a financial firm to continue to do business when it is in vulnerable financial condition would likely add to concerns about systemic risk. Although experts generally supported proposals to change the roles of regulators, they said implementing the proposals relating to notice and involuntary proceedings could be difficult. Experts at our roundtable said that determining the correct number of days for notification to the regulator would be difficult. For example, requiring a financial institution to provide notice to and consult with regulators 10 days in advance of filing for bankruptcy—the number of days specified in proposals introduced in the 111th Congress—might not work in practice. One expert said that 10 days can be a long time in a financial crisis. Another noted that the firm's need to file for bankruptcy might arise very quickly and that a firm might be able to notify its regulator only a day or two in advance of its filing. As an example, an expert cited the rapid collapse of the investment firm Bear Stearns. In 2008, senior management of Bear Stearns gave the Federal Reserve Bank of New York 1 day's notice, saying that the company would file for bankruptcy protection the following day unless it received an emergency loan. In the failure of Lehman Brothers, the abruptness of the company's bankruptcy did not allow much time for attorneys to prepare for filing. Another expert said that a requirement to "notify and consult" with the regulator before entering bankruptcy should not interfere with the ability of a company to file for bankruptcy. Determining which regulators to notify also may be difficult. Complex financial companies and their subsidiaries may have many regulators, domestically and internationally. As a result, it is not clear which regulator a bank holding company or nonbank financial company would notify if a domestic or foreign subsidiary were nearing insolvency. One expert noted that because large financial companies have many regulators, it would be important to identify in advance which regulators a firm would notify before it could file for bankruptcy. Proposals introduced in the 111th Congress would have required that a nonbank financial company consult with its functional regulator, FSOC, and any agency charged with administering a nonbankruptcy insolvency regime for any component of the debtor firm, which could be a large number of regulators.
The proposals define functional regulator as the federal regulatory agency with the primary regulatory authority, such as an agency listed in section 509 of the Gramm-Leach-Bliley Act. Some roundtable experts said that prebankruptcy consultation should be with the firm's primary regulator, although none of them defined this term. FSOC—which under the Dodd-Frank Act is charged with identifying and responding to risks to U.S. financial stability—was included as a regulator in the notification and consultation proposal. Treasury officials, including those who support FSOC, interpret the Dodd-Frank Act as having a preference for resolving financial companies through bankruptcy and said that FSOC has focused its activities on implementing its responsibilities under the act. Furthermore, in its annual reports FSOC has described the role that resolution plans are supposed to play in fostering orderly resolutions under the Code. Specifically, under the Dodd-Frank Act, bank holding companies with total consolidated assets of $50 billion or more and nonbank financial companies designated by FSOC for enhanced supervision by the Federal Reserve are required to submit resolution plans to the Federal Reserve, FDIC, and FSOC. FSOC's 2013 Annual Report included a recommendation that the Federal Reserve and FDIC implement their resolution plan authorities in a manner that better prepares firms and authorities for a rapid and orderly resolution under the Code. However, in our discussion with Treasury officials, including those who support FSOC, they noted that FSOC does not routinely evaluate proposals that could alter the role of regulators in the bankruptcy process or other changes to the Code that might reduce systemic risk, such as narrowing the safe-harbor treatment of QFCs. While current law does not specify a role for FSOC related to the potential filing of a bankruptcy by a systemically important financial company, when MF Global declared bankruptcy, FSOC met in emergency session to monitor the event and subsequently reported that the MF Global bankruptcy had not roiled markets. Treasury officials and staff who support FSOC said that FSOC is focused on implementing provisions of the Dodd-Frank Act. Because helping to develop rules to implement OLA is explicit in the Dodd-Frank Act, FSOC has described activities related to these provisions and made recommendations—but it has not considered the implications of changing the role of regulators under the Code. Although the Dodd-Frank Act does not amend the Code or explicitly call for FSOC to consider such changes, changing the role of regulators could affect FSOC's ability to identify and respond to systemic risks in a timely fashion. The roundtable experts noted that allowing financial regulators to initiate an involuntary bankruptcy for financial companies raised a number of implementation questions, including appropriate time frames and standards. These experts generally agreed that the lengthy time frames included in the rules for an involuntary bankruptcy filed by a creditor could reduce the value of a systemically important financial institution and endanger market stability. However, one expert expressed concern over the possibility of regulators acting too quickly to place an institution in bankruptcy, especially during a financial crisis in which asset valuations might be in dispute. A legal expert noted the importance of determining an appropriate standard for placing a financial institution in bankruptcy.
The expert noted the difficulty of distinguishing between an insolvent company and one experiencing temporary liquidity needs. Another expert proposed that a regulator-initiated bankruptcy should require a standard similar to the one for placing a firm in FDIC receivership under OLA. The regulators at the roundtable thought that a regulatory framework that required firms to meet certain standards or be placed in bankruptcy—as currently exists for commodities brokers and securities broker-dealers—might alleviate some of the disadvantages posed by the creditor rules and would not necessarily require a change in the Code. One criterion for an effective bankruptcy or resolution process is to limit taxpayer liability. Legislators have made proposals to limit the ability of the Treasury or the Federal Reserve to help finance bankruptcies of financial companies. For example, proposals introduced in the 111th Congress specifically would have prohibited the U.S. Treasury and Federal Reserve from participating in bankruptcy financing. However, some proposals recognize the difficulty of financing bankruptcies of large financial companies, especially during a crisis. The Chapter 14 proposal made by the Hoover Institution resolution project group would allow the government to provide subordinated debtor-in-possession (DIP) financing to companies with assets greater than $100 billion (subsidiaries included), with a hearing and the court's approval and oversight. Experts at our roundtables discussed the appropriate role of the government in providing financing for firms in bankruptcy and emphasized that many of the proposals to make the bankruptcy process more orderly and effective depend on having an adequate funding mechanism. As a result, experts at the first roundtable generally agreed that changing the Code to prevent any federal funding of these bankruptcies would not be consistent with orderly and effective resolutions. In their written responses to a question asking what the most important changes would be to achieve most of the elements of an orderly and effective bankruptcy, experts most consistently responded that proposals to provide adequate funding, rather than to restrict it, were the most important changes that could be made. All but one of the eight experts who responded ranked providing a funding source as the most important change for avoiding fire sales. Experts said that support for federal funding rested on two propositions. First, voluntary private funding likely would be unavailable to finance the bankruptcy of a systemically important financial company. Second, the government should distinguish between funding for a bailout and funding that provides short-term liquidity. Experts did not think that voluntary private funding would be available to finance a systemically important financial company because these companies are large and some of them grew substantially over the course of the financial crisis (see table 1). Solutions that were possible during the crisis, such as JPMorgan Chase's funding of Bear Stearns or Barclays' purchase of parts of Lehman, would be unlikely in the future because some firms have grown much larger. Experts also noted that obtaining funding would be especially difficult during a period of general financial distress, when firms large enough to provide funding might be experiencing difficulties themselves.
Several experts noted that any government funding would need to distinguish between bailing out an insolvent company, which they opposed, and providing short-term liquidity to a solvent company that posts collateral, which they generally supported. One of the legal experts defined a bailout as the government putting in equity capital to support existing creditors. Legal and academic experts at our roundtables compared the provision of fully secured liquidity funding with lender-of-last-resort funding. They referred specifically to the Federal Reserve providing short-term liquidity through its discount window to solvent depository institutions with eligible collateral to secure the loan. The Federal Reserve accepts a very broad range of collateral to secure such loans. Our roundtable experts generally agreed that funding for liquidity needs was essential and noted that in a period of financial distress the federal government might be the only entity with enough resources to provide such funding. Although experts at the roundtables did not think voluntary private funding likely would be available for financing or other liquidity support during the bankruptcy of a large financial company, they did consider whether the industry as a whole might provide such support. They noted several options for such funding. The industry could create a fund or mechanism for providing liquidity to firms that needed it. The government could assess companies prior to a bankruptcy, as it does for the deposit insurance fund. Or the government could raise funds through postbankruptcy assessments, while meeting immediate needs through temporary federal funding, as with the Orderly Liquidation Fund under Title II of the Dodd-Frank Act. Under OLA, the Treasury may make funds available through an Orderly Liquidation Fund to FDIC as the receiver of a covered financial company. A few of the experts noted that some government guarantees might facilitate private-sector financing. As with many of the proposals, our roundtable experts noted that implementing a proposal to allow fully secured federal funding for liquidity needs raised some difficulties. First, they noted the difficulty of distinguishing between an insolvent company and one experiencing temporary liquidity needs. This distinction is particularly difficult in a period of financial stress, when valuation of assets may be difficult. For example, the value of some of Lehman Brothers Holdings Inc.'s (LBHI) real estate assets has increased since the time of its bankruptcy in 2008. Second, experts at the first roundtable noted that the Dodd-Frank Act amendments to section 13(3) of the Federal Reserve Act might apply to some Federal Reserve funding related to a bankruptcy. This provision bars the Federal Reserve from providing funding to a single distressed company but allows it to provide funding to the financial system. Similar funding provided under the Primary Dealer Credit Facility in September 2008 (prior to the Dodd-Frank Act amendments) allowed Lehman Brothers Inc. (LBI)—the broker-dealer and commodity broker subsidiary of LBHI—to remain a going concern after LBHI declared bankruptcy, thus facilitating the transfer of some assets to Barclays later that week. The remaining parts of LBI were liquidated in a SIPA proceeding. Under the terms of the loans provided through the Primary Dealer Credit Facility, the Federal Reserve Bank of New York became a secured creditor of the firm, giving it higher priority in the event of a bankruptcy.
We found in 2011 that LBI and Barclays had repaid their overnight loans with interest, according to Federal Reserve officials. One legal and financial expert suggested that the Federal Reserve would be in compliance with the amendments to section 13(3) if it set up a fund for large financial companies being resolved under the Code. Third, experts noted that determining what types of assets firms could use to collateralize government or industry funding might be difficult. Although the Federal Reserve had accepted assets with significant tail risk (the probability of a rare event occurring that would result in great losses) as collateral during the crisis, experts noted that such risky assets might not be acceptable in the future. We asked the experts at our first roundtable to discuss the advantages and disadvantages of the proposal made by the Hoover Institution resolution project group that calls for using subordinated government debt to provide payments to certain short-term creditors early in a bankruptcy proceeding. Such subordinated loans would be repaid at a lower priority than the claims of other creditors. The proposal also includes a "claw-back" procedure if the preferred creditors turn out to have received more than they were entitled to when the reorganization or liquidation is finalized. The proposal was made to address systemic concerns, namely that the failure of one financial company could spread to others because short-term creditors would not have access to their funds. Reliance on short-term funding exacerbated the financial crisis of 2007-2009. And, as some Federal Reserve officials have noted, regulatory reform has not yet addressed the risks to financial stability posed by short-term wholesale funding. Legal experts at the roundtable agreed that such payments could be made by treating certain short-term creditors as critical vendors during first-day motions. However, experts who discussed this issue at the first roundtable said that making decisions about providing funding to certain short-term creditors during a bankruptcy was not the best way to address systemic concerns associated with short-term liquidity. They noted that such a proposal would increase uncertainty for creditors during a bankruptcy proceeding. Two experts noted that they would not want to use subordinated federal funding. Another explained that the point of subordinating the funding is to help ensure that the government uses such funding to address concerns about liquidity rather than to defray certain creditors' losses. However, such funding would expose taxpayers to potential liability. Instead, the experts who discussed this proposal at the first roundtable said that changing the Code to give an explicit priority to short-term over long-term creditors would be preferable. They noted that an explicit priority would be a better option in that it would help to address systemic risk and lead to a more predictable bankruptcy process. In addition, such a priority might provide an incentive for firms to continue to provide short-term funding when a financial company experiences distress. One legal expert noted that the special bankruptcy laws for railroads had a provision that gave any creditor providing funding in the 6 months leading up to a bankruptcy priority over other creditors in that bankruptcy proceeding. This type of provision might have created an incentive to provide funding to a railroad experiencing short-term financing issues and thus might have prevented a bankruptcy.
However, a legal expert at our second roundtable said that this would create unfair treatment for creditors providing long-term financing, because long- and short-term creditors were members of the same creditor class. While a priority for short- over long-term creditors might reduce the incentive to withdraw funding leading up to a bankruptcy and reduce the likelihood of systemic issues associated with liquidity shortages during a bankruptcy, it could have additional consequences. For example, such a priority would provide more of an incentive for creditors to provide short- rather than long-term funding. If these short-term creditors were less likely to lose their funds in the case of a default because they had priority over other creditors, they might be less likely to monitor the creditworthiness of borrowers. As a result, the market might be less likely to discipline companies that take on excessive risk. Although promoting market discipline is not among the criteria we identified for orderly and effective bankruptcies, it is a goal of the Dodd-Frank Act. Experts at our roundtables evaluated proposals to change the treatment of certain QFCs relative to criteria for orderly and effective financial company bankruptcies. Specifically, they discussed the effects of proposals for removing all safe harbors for QFCs; partially rolling back safe harbors on specific contracts; implementing a temporary stay for all or certain contracts; and allowing trustees to "avoid" contracts entered into within specified periods prior to the bankruptcy filing if they are determined to be preferential or fraudulent. The experts generally agreed that limiting safe-harbor treatment would affect derivative and repurchase agreement markets and could limit short-term funding options for financial companies, especially in periods of distress. However, the experts had differing views on the advantages and disadvantages of the proposals, and those views are still evolving as lessons learned from the treatment of these contracts during the Lehman Brothers bankruptcy remain unclear. The roundtable experts generally agreed that limiting the safe-harbor treatment—removing it altogether or providing it to a more limited set of contracts—would reduce the use of derivatives and repurchase agreements. Some experts have noted that these markets grew substantially after additional types of contracts were granted safe-harbor treatment in 2005 (see fig. 2). However, one expert we spoke with noted that in his opinion the industry has tended to overstate the impact that limiting the safe harbors would have on the size of the markets, an impact the expert thought would likely be minimal. Several of the roundtable experts thought that if downsizing these markets were a goal, it should be done directly through regulation rather than through changes in the Code. For example, the experts noted that derivatives markets have been undergoing vast change as a result of requirements in the Dodd-Frank Act (such as requiring certain contracts to be tracked more effectively and traded on exchanges). However, another expert noted that it would be good if the Code were consistent with regulatory goals. Limiting the safe harbors also would reduce the availability of short-term funding for financial companies. Short-term funding for financial companies creates flexibility but, at the same time, sets the stage for potential runs on firms.
As figure 3 shows, there was little consensus in the written responses provided by our roundtable experts on how, if at all, changes in QFC treatment under the Code would affect the orderliness and effectiveness of financial company bankruptcies (see app. II for more detailed information on the proposals). However, most of our roundtable experts responded that removing all of the safe harbors would detract from orderliness and effectiveness, and none of them responded that this would greatly enhance orderliness and effectiveness. For the other proposals, the experts were split fairly evenly in their written responses between those who thought the proposal would enhance orderliness and effectiveness and those who thought it would detract from them. Many of the experts who thought allowing trustees to "avoid" contracts would detract from orderly and effective bankruptcies chose "greatly detract." Generally, those experts representing industry interests noted that the proposals would detract from orderliness and effectiveness, and those in favor of adopting certain proposals thought that industry opposition would be difficult to overcome. Experts at the roundtable noted that even if there were high-level agreement on what changes to the Code were needed, legal experts might disagree on the precise details. For example, with regard to the safe-harbor exemptions from avoidance actions—trustees' ability to "avoid" transfers entered into in the 90 days prior to a bankruptcy if they are determined to be preferential or up to 2 years prior to a bankruptcy if they are determined to be fraudulent—some legal experts at the second roundtable said that the courts were giving preferential treatment to contracts that in principle should not be receiving it. Specifically, they said that the courts were interpreting section 546(e) of the Code in a way that allows contracts that otherwise might be considered preferential or fraudulent to remain in force. As a result, they noted that changes to the Code might be made to tighten that section. For example, a roundtable expert said that section 546(e) of the Code should be changed so that fictional transactions, such as Ponzi scheme payments, would not receive such treatment. Another legal expert cited a number of cases in which contracts entered into within 90 days prior to the bankruptcy filing, which would be considered preferential without the safe-harbor exemption, were given safe-harbor treatment. For example, in the bankruptcy case of communications company Quebecor, insurance companies that held private placement notes qualifying for safe-harbor treatment received 105 cents on the dollar, while other unsecured creditors received only a fraction of their claims. The expert and others said that it might be useful to allow a judge to make decisions about some contracts. However, one expert at the roundtable noted that this could be a very long, complex process. In addition, allowing the judge to decide which contracts would get safe-harbor treatment when counterparties defaulted would increase the uncertainty attached to those contracts. Our roundtable experts also varied in their evaluations of the proposals relative to some of the specific criteria we had identified for orderliness and effectiveness, such as limiting systemic risk, avoiding fire sales, maximizing value, and preserving due process.
When explicitly asked, some experts responded that limiting the safe harbors would increase systemic risk, while others responded that limiting them would reduce it. Such a dichotomy could result from differences in the way the experts viewed markets. Having the safe harbors likely increases dependence on short-term funding and thus increases the chance of a run if questions arise about a company's financial soundness. In addition, needing to sell off assets because of a lack of funding could lead to a spiral of falling asset prices. However, safe harbors are also thought to limit systemic effects before and during a bankruptcy. According to an expert at the second roundtable, if counterparties are certain about the safe-harbor treatment of their contracts, such treatment may limit runs prior to bankruptcy because counterparties know they will be able to terminate or liquidate their positions in case of default. In addition, the safe harbors primarily exist to limit market turmoil during a bankruptcy—that is, they are intended to prevent the insolvency of one firm from spreading to other firms and possibly threatening the collapse of an affected market. Although FSOC has reported on threats to financial stability from derivative and repurchase agreement markets, as with proposals to change regulators' roles under the Code, it has not considered the implications of potential changes to the safe-harbor treatment of these contracts during bankruptcy. The roundtable experts made a number of specific points about the impact of QFC treatment on systemic risk and fire sales of assets. One expert at the second roundtable noted that during the early days of the Lehman bankruptcy, he thought that the QFC terminations would lead to a systemic event in derivatives markets, but that did not happen. The expert questioned whether the lack of a systemic event reflected Lehman's small share of the market—5 percent—or the safe-harbor protection. In contrast, the commercial paper market did experience a systemic event—becoming illiquid after the Lehman bankruptcy. However, another participant noted that it was not the claims process in a bankruptcy that caused systemic risk; it was the uncertainty, the effect on counterparties, and market reactions. Roundtable participants also discussed the likelihood that safe-harbor treatment or bankruptcy in general could create asset fire sales. One expert noted that fire sales were more likely to occur in the period leading up to a bankruptcy than after the bankruptcy was filed. Another industry expert noted that some unpublished research suggests that fire sales of Lehman's assets resulting from the treatment of QFCs did not take place following the bankruptcy filing. Instead, counterparties terminated only those contracts that had maintained their value. Roundtable experts noted that conflicts might arise depending on whether the goal of a bankruptcy proceeding was to maximize value for the economy, for the debtors, or for the creditors. One legal expert noted that in a time of financial crisis, balancing market expectations and needs against the needs of an individual company was difficult. Debtors usually are expected to fare best when companies can be reorganized under Chapter 11. Under Chapter 11, the purpose of the automatic stay is to preserve the value of companies while debtors consider their options.
However, one roundtable expert noted that with the rapid dissipation of a financial company's value as a result of the safe harbors, liquidation is a more likely outcome than reorganization. Another expert noted that even if QFCs were stayed, value could dissipate quickly in financial company bankruptcies because that value rests on the confidence of the debtors' counterparties. In addition, one expert raised concern about the impact of the safe harbors on the value remaining for creditors after QFC positions were terminated. In a bankruptcy, creditors compete with counterparties to derivative contracts and repurchase agreements for a firm's assets. Allowing the QFCs to be terminated while other debts are stayed means there are fewer assets available for those creditors. However, since creditors know that they are less well protected in bankruptcy, they should command a higher price for the risk they are taking when they provide credit. So, determining whether creditors are being disadvantaged overall is difficult. Roundtable participants also discussed whether a temporary stay for QFCs would enhance the value of a financial company; however, as noted earlier, they were split on whether this would contribute to or detract from the overall orderliness and effectiveness of financial company bankruptcies. For example, while several experts said that a temporary stay might facilitate a sale of a company's derivatives to a third party, they noted that such a sale would increase concentration in the market and ultimately contribute to greater overall systemic risk. Other experts agreed that a temporary stay would be useful only to the extent that an exit strategy, such as selling to a third-party buyer, was available or a bridge company—a temporary company used to maintain the failed company's operations—could be constructed. These experts cited the case of General Motors as an example of what they were suggesting; however, the newly formed company in that case was not temporary. In contrast, one expert presented a hypothetical example that did not involve a sale of the whole entity to a third party or the construction of a bridge company. In this example, the judge would have a 10- to 12-day stay, which might allow the judge to dispose of pieces of the company, leaving an entity small enough that its assets could be liquidated through normal bankruptcy proceedings. However, other experts noted that it might be difficult to determine the appropriate number of days for a temporary stay. Several of the experts at our roundtables questioned whether bankruptcy reforms designed to deal with systemically important financial companies would adequately protect due process, given the need to move quickly in such a bankruptcy. They suggested that due process might be compromised or would depend on the ability of counterparties and creditors to take action after regulators or courts make decisions (as is the case with OLA). For example, if preferences were given to some counterparties or creditors during a temporary stay, other counterparties or creditors would have the right to recover, or "claw back," value later in the process, as opposed to having a judge consider the views of all of the parties prior to making any decisions. Roundtable experts noted that some changes to the Code relative to the treatment of QFCs could create uncertainty in the process. Specifically, counterparties need certainty about bankruptcy treatment when they enter a contract.
To provide that certainty, several experts agreed that changes should be detailed in the terms of the contract rather than determined at the time of the bankruptcy. However, one of the experts noted that even with provisions specified in the Code, counterparties might still be uncertain for some time about how certain contracts would be treated. Although the Code had been amended in 2005 to extend safe-harbor treatment to more types of repurchase agreements, that expert said that uncertainty as to how the courts would treat repurchase agreements contributed to the Lehman Brothers bankruptcy. Leading up to the bankruptcy, counterparties were unwilling to extend new short-term funding because of the uncertainty—essentially precipitating a run on the firm. Our roundtable experts noted other issues that would arise with any changes in the Code, such as whether existing contracts would be treated under the Code as it stood when they were entered into or under the amended Code. One expert said that contracts should be grandfathered, while another pointed out that grandfathered contracts might be around for another 30 years, creating other difficulties. While it is difficult to assess how many contracts would be long term, key contracts are thought to be used for overnight funding. When the 2005 changes were made to expand the contracts receiving safe-harbor treatment, the new treatment applied to all contracts, including those that had been entered into prior to that time. Some roundtable experts further suggested that not knowing which judge will have a case and how that judge will make decisions can introduce additional uncertainty into the treatment of certain contracts. Not knowing whether a qualified financial contract would be subject to the Code or OLA creates further uncertainty about how a contract will be treated. Under the latter, FDIC becomes the receiver of the company and QFCs are stayed for 1 business day. During that day, FDIC has an opportunity to transfer a company's derivatives to a third party or a bridge company. Under OLA, FDIC can choose to transfer contracts with one company to the bridge company while choosing not to transfer those with another company. However, if FDIC chooses to transfer a contract with a specific company, it would have to transfer all of the contracts with that company. There was some presumption among roundtable participants that very large systemically important institutions would be resolved under OLA rather than through bankruptcy. However, FDIC officials testified before the Subcommittee on Oversight and Investigations of the House Committee on Financial Services in April 2013 that under the Dodd-Frank Act, bankruptcy is the preferred resolution framework in the event of a failure of a systemically important financial company. Experts at our roundtable said that the lessons from the Lehman bankruptcy that might be applied in considering changes to the safe harbors are still unclear. Early reports and statements about the LBHI bankruptcy said that in the first 5 weeks after LBHI filed for bankruptcy, approximately 80 percent of its derivatives counterparties terminated contracts that were not subject to the automatic stay. However, some of the initial counterparty claims have been found to have been overstated.
Two experts at our second roundtable specifically noted that the large initial loss in value was, in part, the result of LBHI counterparties' initially overstating their claims against LBHI, and some of these claims were subsequently overturned in adversary proceedings. For example, Swedbank AB, a Swedish bank that was a creditor of LBHI, sought to offset Lehman's payment obligations under prepetition swaps with deposits Lehman had made at Swedbank after filing for bankruptcy. The Bankruptcy Court for the Southern District of New York ruled against Swedbank, holding that the postpetition deposits could not be used to offset prepetition swaps. In another proceeding involving the Lehman bankruptcy, a lender, Bank of America, seized the debtor's account funds, which were unrelated to any safe-harbor transaction, to set off certain contracts that could receive safe-harbor treatment. The court ruled that the bank's use of the funds to set off the transactions violated the automatic stay. Further, some experts no longer supported proposals they had originally made in response to Lehman's early perceived losses. As a result, experts continue to weigh whether changes to the treatment of derivatives and repurchase agreements under the Code are needed. The Hoover Institution resolution project group continues to discuss its Chapter 14 proposals and plans to issue additional publications on them. The American Bankruptcy Institute has a Commission to Study the Reform of Chapter 11 and has appointed advisory committees to consider various aspects of reform, including the treatment of QFCs. Its work is expected to continue for some time. Throughout the roundtable discussions, the participants noted that changes to the Code should not be made without considering ongoing changes in the broader legal and regulatory environment for derivatives. Specifically, they noted that the Dodd-Frank Act calls for a number of significant changes in the regulation of derivatives that are still being implemented, and the industry is looking at potential changes to derivatives contracts. Finally, experts noted the need to make changes consistently across international borders, especially in the United States and the United Kingdom. During the Lehman Brothers bankruptcy, differences in the treatment of various contracts caused courts in the United States and the United Kingdom to rule in opposing ways on the same contracts. The financial crisis and the failures of some large financial companies raised questions about the adequacy of the Code for effectively reorganizing or liquidating these companies without causing further harm to the financial system. Although the Dodd-Frank Act created OLA, an alternative resolution process, filing for bankruptcy under the Code remains the preferred resolution mechanism even for systemically important financial companies. Some proposals to modify the Code recognize that it currently may not adequately address threats to financial stability. Several of these proposals—changing the role of regulators in the bankruptcy process, creating funding mechanisms, and limiting the safe-harbor treatment of qualified financial contracts—may address this potential shortcoming. However, experts are not ready to recommend specific changes to the Code, and the proposals require further consideration.
FSOC—which was established under the Dodd-Frank Act to identify and respond to threats to financial stability—has not specifically considered changes to the role of regulators in bankruptcy or the treatment of QFCs. Although the Dodd-Frank Act does not explicitly require FSOC to assess changes to the Code, it is well positioned to take a broad view of potential changes within the context of other regulatory and market changes prescribed by the act. It is also well positioned to decide the appropriate level of attention such changes merit. Such attention to the systemic implications of financial company bankruptcies could improve FSOC's ability to take timely and effective action to identify and respond to threats to U.S. financial stability. To fulfill FSOC's role under the Dodd-Frank Act to identify and respond to threats to financial stability, we recommend that the Secretary of the Treasury, as Chairperson of FSOC, in consultation with other FSOC members, consider the implications for U.S. financial stability of changing the role of regulators and narrowing the safe-harbor treatment of qualified financial contracts in financial company bankruptcies. We provided a draft of this report to AOUSC, CFTC, FDIC, the Federal Reserve, NAIC, the Departments of the Treasury and Justice, and SEC for review and comment. CFTC, FDIC, NAIC, and SEC provided technical comments, which we have incorporated as appropriate. AOUSC, the Federal Reserve, and the Department of Justice did not provide comments. Treasury's Under Secretary for Domestic Finance, on behalf of the Chairperson of FSOC, provided written comments, which are reprinted in appendix IV. In commenting on our draft report, FSOC said that it shares our concern that a disorderly financial company bankruptcy could pose risks to financial stability. However, FSOC stated that it would be premature to prioritize the consideration of proposals to amend the Code until the Dodd-Frank Act is fully implemented or there is evidence of risks that cannot be adequately addressed within existing law. FSOC added that the Federal Reserve Board and FDIC are currently implementing provisions of the Dodd-Frank Act requiring designated financial companies to submit resolution plans ("living wills") to facilitate their orderly resolution under the Code. FSOC also noted that it is facilitating communication and coordination on the implementation of OLA and living will requirements. FSOC noted further that the council is engaged in a variety of other actions to address risks to financial stability posed by the failure of one or more financial companies, such as the designation of nonbank financial companies. We acknowledge FSOC's efforts to implement the Dodd-Frank Act and the actions it has taken to address risks to financial stability, including some actions related to implementing OLA. However, rather than considering changes to the Code only after the Dodd-Frank Act is fully implemented, our recommendation is intended to encourage FSOC to actively address such changes in conjunction with these efforts—particularly as some suggested changes would affect regulators' and ultimately FSOC's ability to respond to the failure of a large complex institution. First, changing the role of regulators in a financial company bankruptcy could be critical for effective resolution.
For example, the point at which regulators become aware of an impending or actual financial company bankruptcy could be critical to determining whether its living will could be used to improve the orderliness and effectiveness of the bankruptcy. Similarly, timing could be critical in determining whether to use OLA, which is to be used if a bankruptcy under the Code is determined to have serious adverse effects on U.S. financial stability. Second, narrowing the safe-harbor treatment of QFCs could also have implications for limiting systemic risk. As some members of the council have stated publicly, bankruptcy remains the preferred method for resolving failing financial companies. Given that preference and FSOC's charge to identify and respond to risks to U.S. financial stability, our recommendation—that FSOC consider the implications for U.S. financial stability of changing the role of regulators and narrowing the safe-harbor treatment of QFCs in financial company bankruptcies—is consistent with its statutory role and responsibilities. We are sending copies of this report to the appropriate congressional committees, the Director of the Administrative Office of the U.S. Courts, Chairman of the Commodity Futures Trading Commission, Attorney General, Secretary of the Treasury, Chairman of the Federal Deposit Insurance Corporation, Director of the Federal Judicial Center, Chairman of the Board of Governors of the Federal Reserve System, Chief Executive Officer of the National Association of Insurance Commissioners, Chairman of the Securities and Exchange Commission, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Section 202(e) of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) mandated that we report on the orderliness and efficiency of financial company bankruptcies every year for 3 years after passage of the act, in the fifth year, and every 5 years thereafter. This report, the third in the series, examines the advantages and disadvantages of certain proposals to modify the Bankruptcy Code (Code) for financial company bankruptcies. Specifically, this report examines the advantages and disadvantages of proposals (1) to change the role of financial regulators in the bankruptcy process; (2) affecting the funding of financial company bankruptcies; and (3) to change the safe-harbor treatment of qualified financial contracts (QFC), including derivatives and repurchase agreements. To address all of our objectives, we reviewed relevant laws, including the Code and the Dodd-Frank Act, as well as GAO reports that addressed bankruptcy issues and financial institution failures. We specifically reviewed the reports we issued during the first 2 years of the mandate as well as reports written under the same or similar mandates by the Administrative Office of the United States Courts (AOUSC) and the Board of Governors of the Federal Reserve System (Federal Reserve). We also updated our review of published economic and legal research on the effectiveness of bankruptcies that we had originally completed during the first year of the mandate.
For the original search, we relied on Internet search databases (including EconLit and ProQuest) to identify studies published or issued after 2000 and up through 2010. We reviewed these articles to determine the extent to which they were relevant to our engagement, that is, whether they discussed criteria for effectiveness of the bankruptcy process, key features of the bankruptcy process, or proposals for improving the bankruptcy process. We augmented this Internet search with articles provided by those we interviewed or obtained from conferences. In addition, we reviewed a number of prior GAO reports on financial institutions and the financial crisis. For this report, we replicated the literature search for 2011 and 2012. Further, we met with officials at the following federal government agencies: AOUSC; the Commodity Futures Trading Commission; Federal Deposit Insurance Corporation; Department of Justice; Department of the Treasury, including officials who support the Financial Stability Oversight Council (FSOC); Federal Reserve; and Securities and Exchange Commission. In addition, we met with officials of the National Association of Insurance Commissioners and members of insurance departments in Illinois, Iowa, and Texas. We relied on our earlier work and the updated literature review to establish criteria for orderliness and effectiveness and to develop a list of proposals related to the role of regulators in the bankruptcy process or the role of government in financing bankruptcies, as well as proposals to change the safe-harbor treatment of certain financial contracts. In our earlier work, we analyzed the results of the literature review and expert interviews to determine criteria for orderliness and effectiveness of financial company bankruptcies. These criteria are minimizing systemic risk, avoiding fire sales, maximizing value, preserving due process, and minimizing taxpayer liability. In that work, we also used the literature review to determine the range of proposals that had been made to reform the bankruptcy process for financial institutions. We categorized some of the proposals into groups, such as those that included a role for the regulators or modified the treatment of qualified financial contracts, and then asked the experts to look at these categories and specific proposals and tell us which they considered to have merit and should be included for further consideration, and why. We also updated the literature review to determine whether earlier proposals had evolved, proposals had been subject to critical review, or additional proposals had been made. As we had for our earlier work, we surveyed relevant government agencies for information on newer studies related to our objectives that they had conducted, were conducting, or were aware of. To obtain expert views on existing proposals and how these proposals might be improved, we convened two roundtables to discuss the advantages and disadvantages of specific proposals. The roundtables were held at the National Academy of Sciences (NAS), and NAS staff assisted in determining who would sit on the roundtables. Generally, roundtable members were chosen for their expertise on bankruptcy and financial institutions and markets. We also discussed potential experts for our roundtables with the relevant government agencies listed previously. Specifically, we relied on a list of experts compiled for the first report under this mandate.
These experts represented a wide range of interests, including academics, industry representatives, judges, and practicing attorneys. The experts had made proposals, written extensively on bankruptcies or financial institutions, or were recommended by relevant government agencies. In addition, relevant government agencies and NAS suggested additional potential participants for our roundtables, whom we considered using our original criteria and the balance of the experts at the roundtables. Final participants for the roundtables were chosen for their expertise and to ensure that a number of interested parties were included. These included academics, industry representatives, judges, practicing attorneys, and regulators. To ensure that participants represented a broad range of views and interests and that we fully understood those interests, we required that participants complete a conflict of interest form. See appendix II for a list of participants in each roundtable, as well as background materials and agendas. Participants at the first roundtable, held on April 1, 2013, discussed the role of regulators in the bankruptcy process for financial companies and how those bankruptcies might be financed. The proposals discussed were: 1. Require the debtor to notify and consult with regulators (primary, functional, Financial Stability Oversight Council, foreign, or other) in advance of filing for bankruptcy. 2. Allow regulators (primary, functional, Financial Stability Oversight Council, foreign, or other) to commence an involuntary bankruptcy in the event that the firm is insolvent or in imminent danger of becoming insolvent. 3. Allow regulators (primary, functional, Financial Stability Oversight Council, foreign, or other) of the debtor or its subsidiaries to have standing or a right to be heard in the courts to raise issues relative to regulation. 4. Consider the role of regulators (primary, functional, Financial Stability Oversight Council, foreign, or other) in determining which subsidiaries should be included in a bankruptcy proceeding and the extent to which complex firms might be consolidated in bankruptcy, including the possibility of revoking the exclusion from bankruptcy for insurance companies and the exclusion from Chapter 11 for stock and commodities brokers. 5. Restrict the U.S. Treasury and Federal Reserve from participating in bankruptcy financing. 6. Allow the government to provide subordinated debtor-in-possession financing to companies with assets greater than $100 billion (subsidiaries included) with a hearing and the court's approval and oversight. Similarly, participants in the second roundtable, held on April 10, 2013, discussed proposals to change the safe-harbor treatment of certain financial contracts such as derivatives and repurchase agreements. The proposals discussed during this roundtable were: 1. Removing all safe harbors for qualified financial contracts. 2. Partially rolling back safe harbors on specific contracts, such as a. allowing only contracts traded on an exchange to have safe-harbor treatment; b. limiting collateral sales of repos by counterparties to cash-like or highly marketable securities; or c. allowing roll backs with approval of the Financial Stability Oversight Council or the courts. 3. Implementing a temporary stay for all or certain contracts. 4. Exercising certain "reach back" avoiding powers for qualified financial contracts.
In both cases, participants discussed the advantages and disadvantages of the proposals relative to our criteria for orderly and effective bankruptcies. In addition, they discussed impediments to implementing the proposals and how these impediments could be addressed. The agendas for the roundtables are included in appendix II. To meet our objectives, we also obtained views about the proposals from some experts who were unable or chose not to participate in the roundtables. We used regulatory data to provide context for some expert statements. For expert statements on the growth of large financial institutions since the 2007-2009 financial crisis, we used data from the Federal Reserve and SEC to provide measures of the growth of global systemically important banks from 2007 to 2012. For expert statements about the growth of markets for repurchase agreements and derivatives related to changes in the Code in 2005, we used data from FSOC's 2013 Annual Report, which is signed by the principals of nine federal agencies and the independent member with insurance expertise, and the Bank for International Settlements to provide measures of the growth of repurchase agreements and derivatives from 2000 to 2012. We conducted this performance audit from October 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix includes a list of the experts who participated in our roundtables, background information that was provided to the experts prior to the roundtables, and the agendas for the roundtable discussions. Financial derivatives derive their value from an underlying reference item or items, such as equities, debt, exchange rates, and interest rates. Parties involved in financial derivative transactions do not need to own or invest in the underlying reference items, and often do not. These products are agreements that shift risks from one party to another—each commonly referred to as a counterparty. Such shifting of risks may allow companies to offset other risks—hedging—or to take advantage of expectations of obtaining an economic gain due to changes in the value of the underlying reference items—speculation. Although some transactions are bilateral in that they involve only two counterparties, derivatives can be used to structure more complicated arrangements involving multiple transactions and parties. Financial derivatives are sold and traded on regulated exchanges or in private, over-the-counter markets that allow highly customized transactions specific to the needs of the counterparties. A master netting agreement sets out the terms governing contractual actions between counterparties with multiple derivative contracts. This agreement provides for the net settlement of all contracts, as well as cash collateral, through a single payment, in a single currency, in the event of default on or termination of any one contract. Generally, counterparties net payments to each other under the contract, and, if a counterparty defaults, the nondefaulting counterparty can immediately close out open contracts by netting one against the other.
It can also receive payment under what is called setoff, which is the discharge of reciprocal or mutual obligations to the extent of the smaller obligation. For example, a nondefaulting bank can take funds from a defaulting party's bank deposit held by the bank as payment for what the bank is owed on a contract it has with the defaulting party, as long as the deposit existed prior to the default. Financial derivatives receive special treatment under the Code and thus are sometimes called qualified financial contracts (QFC). The Code includes five categories commonly considered QFCs, which include various types of derivatives. Contracts may fall into more than one category. The Code includes specific definitions of the agreements and transactions covered. In addition, to have protection under the Code, the counterparty with the debtor also must meet specified definitions. The types of derivatives qualifying for special treatment are generally described as follows: Securities contract. Securities contract is a broad term defining a financial agreement between counterparties and may include contracts for the purchase and sale of various financial products such as a group or index of securities, mortgage loans, certificates of deposit, and extensions of credit for settlement purposes. Margin loans are one type of extension of credit through a financial intermediary for the purchase, sale, carrying, or trading of securities. Margin loans do not include other loans secured with securities collateral. Securities contracts also include options to purchase and sell securities or other financial products. Options give their holders the right, but not the obligation, to buy (call option) or sell (put option) a specified amount of the underlying reference item at a predetermined price (strike price) at or before the end of the contract. Commodities contract. In a commodities contract, the commodities buyer agrees to purchase from the commodities seller a fixed quantity of a commodity at a fixed price on a fixed date in the future. Commodities can consist of agricultural goods, metals, and goods used for the production of energy, such as crude oil. For example, to hedge against the risk of rising oil prices, oil refineries can enter into a commodities contract to fix a price today for a future supply shipment. Forward contract. A forward contract is a contract for the purchase, sale, or transfer of a commodity with a maturity date more than 2 days after the contract is entered into. Under the Code, a forward contract can include, but is not limited to, a lease, swap, hedge transaction, deposit, or loan. As an example, a firm may want to limit its exposure to fluctuations in service costs, such as electricity prices. The firm may enter into a forward contract with an electricity provider to obtain future service at a fixed rate. Swap agreement. A swap involves an ongoing exchange of one or more assets, liabilities, or payments for a specified period. Swaps include interest rate swaps, commodity-based swaps, and broad-based credit default swaps. Security-based swaps include single-name and narrow-based credit default swaps and equity-based swaps. As an example, interest rate swaps allow one party to exchange a stream of variable-rate interest payments for a stream of fixed-rate interest payments. These products help market participants hedge their risks or stabilize their cash flows. Alternatively, market participants may use these products to benefit from an expected change in interest rates.
A credit default swap is generally a contract between two parties where the first party promises to pay the second party if a third party experiences a credit event such as failing to pay a debt. Credit default swaps are contracts that act as a type of insurance, or a way to hedge risks, against default or another type of credit event associated with a security such as a corporate bond. Repurchase agreements are also qualified to receive special treatment under the Code and are thus considered QFCs. In a repurchase agreement, one party sells a security, or a portfolio of securities, to another party and agrees to repurchase the security or portfolio on a specified future date—often the next day—at a prearranged price. The security, or portfolio of securities, serves as collateral for the transaction. In a reverse repurchase agreement, a security is purchased with the agreement to resell on a specified future date. Repurchase agreements have been used to provide financial institutions with funding for operations. A bilateral repurchase agreement—a repurchase agreement solely between two counterparties—can be viewed as two subtransactions referred to as initiation and completion. A repurchase agreement is similar to a loan secured by collateral. A firm will lend cash to a counterparty at an interest rate in exchange for assets provided by the counterparty as collateral. In a repurchase agreement, a cash provider willing to invest cash will agree to purchase securities from a collateral provider, or repurchase agreement dealer. Repurchase agreement dealers are typically distinguished as the counterparty selling securities, or providing collateral, at initiation. The market value of the securities purchased will typically exceed the value of cash loaned to the dealer. When a repurchase agreement matures, securities are sold back to the collateral provider and cash plus interest is returned to the cash provider. Collateral providers or dealers are generally large financial institutions, such as subsidiaries within bank holding companies. Cash providers are firms such as, but not limited to, other large financial institutions, hedge funds, and money market funds. Under the Code, U.S. Treasury debt securities, agency debt issues, mortgage-backed securities, and other assets can be used as collateral in repurchase agreement transactions. For most of the debtor’s assets, the Code provides an automatic stay, or freeze, when the bankruptcy petition is filed. That is, the filing generally stops lawsuits, foreclosures, and most other collection activities against the debtor, allowing the debtor or a trustee time to eliminate or restructure debts. For example, setoff of any debt owed to the debtor that arose before the filing against any claim against the debtor is prohibited. Additionally, in certain situations an executory contract of the debtor may not be terminated or modified at any time after the bankruptcy is filed solely because of a provision in the contract that is conditioned on the insolvency or financial condition of the debtor, the filing of bankruptcy, or the appointment of a trustee. However, the QFCs described previously receive safe-harbor treatment that generally exempts them from the automatic stay. Instead, the contractual rights—to liquidate, accelerate, or terminate—of nondefaulting counterparties conditioned on the insolvency or financial condition of one of the counterparties, the filing of bankruptcy, or the appointment of a trustee, such as netting and setoff, are activated. 
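The arithmetic of close-out netting and setoff can be illustrated with a minimal sketch, again in Python; the function name, contract values, and deposit amount below are hypothetical examples, not figures from the Code or this report. Positive values are amounts the defaulting party owes the nondefaulting party.

def close_out(contract_values, deposit_held=0.0):
    # Close-out netting: collapse all open contracts into one net obligation.
    net_claim = sum(contract_values)
    if net_claim <= 0:
        # The nondefaulting party owes the estate on net; nothing to set off.
        return net_claim, 0.0, net_claim
    # Setoff discharges mutual obligations to the extent of the smaller one.
    setoff = min(net_claim, deposit_held)
    return net_claim, setoff, net_claim - setoff

# Three open QFCs (two in the money, one out) and a $30 deposit held by the
# nondefaulting bank.
print(close_out([120.0, -100.0, 40.0], deposit_held=30.0))  # (60.0, 30.0, 30.0)

In this sketch, the $30 that remains after setoff would be an ordinary claim against the estate, consistent with the discussion below of how creditors' remaining claims are treated.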
Counterparties with claims against the debtor’s property are typically referred to as creditors. Some contracts that are generally considered QFCs may not be eligible for safe-harbor treatment or may be otherwise limited. For example: Repurchase agreements, where the debtor is a stockbroker or securities clearing agency, and securities contracts that are resolved under the Securities Investor Protection Act of 1970 (SIPA) or any statute administered by the Securities and Exchange Commission (SEC). Certain commodity contracts involved in a commodity broker’s liquidation under Chapter 7. For example, a commodity broker creditor may not net or offset an obligation to a commodity broker debtor. Repurchase agreements are treated differently from some other contracts in that any excess of the market prices received on liquidation over the amount of the stated repurchase agreement price and all expenses in connection with the liquidation of the repurchase agreement must be deemed property of the debtor’s estate, subject to the available rights of setoff. For master netting agreements, the right to terminate, liquidate, or accelerate is only applicable to the extent it is permissible for each type of QFC. After entities exercise their rights of netting for individual QFCs and under master netting agreements, some debtors still may be indebted to the creditor. Generally, the creditors’ remaining claims will receive the same treatment accorded other unsecured creditors. The figures below illustrate the safe-harbor exemption from the automatic stay in simplified yet practical scenarios: Figure 4 illustrates a bilateral contract, in which two counterparties are able to net opposing obligations of a contract, or, stated otherwise, net payments under a single master netting agreement. In this example, under current market conditions of an existing QFC, Firm A owes $100 to Firm B while Firm B owes $120 to Firm A. If Firm B files under the Code, the QFC is not stayed due to the safe harbor and Firm A receives the net proceeds of $20 ahead of Firm B’s other creditors. However, Firm A has no guarantee of recouping the total value from the QFC due to other factors, such as a change in market conditions. Without the safe harbors, Firm A would not have been able to terminate the transaction and could have been exposed to further market risk. Figure 5 depicts the typical completion of a repurchase agreement transaction along with the possibility that the creditor liquidates collateral in certain situations. In the case of a repurchase agreement, if a dealer files under the Code after the initiation but prior to completion, the cash provider at initiation will be left with the collateral provided by the dealer. Under the safe harbor, the cash provider has the option to terminate the transaction with the insolvent dealer. As illustrated in figure 5, the cash provider may terminate the transaction and sell the collateral in the open market to a third party. Without the safe harbor, concerns have been raised that a stay on the overnight repurchase agreement market could result in adverse market impacts due to simultaneous sales of collateral. QFCs are generally also exempt from avoidance, or clawback, provisions under the Code. These provisions generally allow the trustee to avoid, or take back, payments made during the 90 days before the filing of a bankruptcy petition if those payments are preferential, or during the 2 years before the filing of the petition if those payments are fraudulent. 
But, for QFCs, a trustee may not recover certain transfers made by or to a swap participant, repurchase agreement participant, commodity broker, forward contract merchant, stockbroker, financial institution, financial participant, or securities clearing agency in connection with securities contracts, commodity contracts, forward contracts, repurchase agreements, or swaps that were done before the bankruptcy filing. Also, a trustee may not recover transfers made by or to a master netting agreement participant, or made under any individual contract covered by a master netting agreement, before the bankruptcy filing. Because many QFCs are short term and likely to be entered into well within the 90-day window, these exemptions protect many QFCs, including those under master netting agreements. In addition to the contact named above, Debra Johnson (Assistant Director), Nancy S. Barry, Rudy Chatlos, Risto Laboski, Marc Molino, Barbara Roesmann, Jessica Sandler, and Jason Wildhagen made significant contributions to this report. Other assistance was provided by Janet Eackloff and Walter Vance.
The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) mandates that GAO report on an ongoing basis on ways to make the Code more effective in resolving certain failed financial companies. This report examines advantages and disadvantages of certain proposals, based on those identified in GAO's first report, to revise the Code for financial company bankruptcies--specifically, proposals (1) to change the role of financial regulators in the bankruptcy process; (2) affecting funding of financial company bankruptcies; and (3) to change the safe-harbor treatment of QFCs. For this report, GAO held two expert roundtables in which participants evaluated the proposals using criteria for orderly and effective bankruptcies that GAO developed in earlier reports. The criteria are minimizing systemic risk, avoiding asset fire sales, ensuring due process, maximizing value, and limiting taxpayer liability. GAO identified these criteria by reviewing literature and interviewing government officials, industry representatives, and legal and academic experts. Because the Bankruptcy Code (Code) does not specifically address issues of systemic risk, experts have proposed giving financial regulators a greater role in financial company bankruptcies. However, according to experts at a GAO roundtable, such proposals may have limited impact and raise certain implementation issues. For example, the effect of a proposal to require notification before bankruptcy depends on when (how many days in advance) notification would be required and with whom (which regulators). Experts noted that financial companies may not know that they will declare bankruptcy even a few days before the event and could have many regulators to notify. Experts also noted ways regulators already can compel financial companies to declare bankruptcy, and that changing the Code to allow regulators to place firms in bankruptcy involuntarily could temporarily place a firm in an uncertain legal status, eroding firms' values and endangering market stability. Other options, such as regulatory standards that would force a firm into bankruptcy, could improve the likelihood of an orderly resolution, according to these experts. Although the proposals reflect the need to minimize systemic effects of financial company bankruptcies, the Financial Stability Oversight Council (FSOC)--charged with responding to threats to financial stability--has not considered changes to the Code. Consideration could improve FSOC's ability to address such threats in a timely and effective manner. Experts emphasized that funding is needed to facilitate orderly and effective financial company bankruptcies. They generally agreed that prohibiting all federal funding or guarantees of private funding likely would lead to fire sales of assets. They agreed that fully secured funding should be used only to provide short-run liquidity and not for bailouts of insolvent firms' creditors. Experts suggested a private-sector fund could be created for this purpose. Such funds could be collected voluntarily, through routine assessments (before a bankruptcy), or through a facility similar to the one created for the Orderly Liquidation Authority, which allows federal funding at the time of a bankruptcy and later recovery of funds through an industry assessment. Experts noted some difficulties associated with these proposals, including determining whether a firm was insolvent or needed liquidity, and identifying permissible types of collateral. 
Generally, experts did not agree on advantages or disadvantages of proposals to change the safe-harbor treatment of qualified financial contracts (QFC). The Code exempts QFCs, such as derivatives, from the automatic stay that generally prevents creditors from taking company assets in payment of debts before a case is resolved. It also exempts QFCs from provisions that allow bankruptcy judges to "avoid" contracts entered into within specified times before a filing. Proposals to change QFC treatment--subjecting all or some contracts to the automatic stay on a permanent or temporary basis and removing the avoidance exemptions--might address issues raised by extensive contract terminations in the early days of financial company bankruptcies. Experts said it was unclear what lessons should be learned from those experiences. Many noted that narrowing the exemptions would reduce the size of derivative markets, but views varied about whether such narrowing would increase or decrease systemic risk. Some experts said that the current safe harbors decrease systemic risk, while others said they increase it by making firms more dependent on less-reliable short-term financing. FSOC should consider the implications for U.S. financial stability of changing the role of regulators and the treatment of QFCs in financial company bankruptcies. FSOC agreed that a disorderly financial company bankruptcy could pose risks to financial stability, but stated that it would be premature for FSOC to consider proposals to change the Code. GAO reiterated that its recommendation was consistent with FSOC’s statutory role and responsibilities.
The same speed and accessibility that create the enormous benefits of the computer age can, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with computer operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. In recent years, the sophistication and effectiveness of cyberattacks have steadily advanced. These attacks often take advantage of flaws in software code, circumvent signature-based tools that commonly identify and prevent known threats, and use social engineering techniques designed to trick the unsuspecting user into divulging sensitive information or propagating attacks. These attacks are becoming increasingly automated with the use of botnets—compromised computers that can be remotely controlled by attackers to automatically launch attacks. Bots (short for robots) have become a key automation tool used to speed the infection of vulnerable systems. Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, whose purposes include crime, terrorism, foreign intelligence gathering, and acts of war. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on commercially available information technology, the likelihood increases that information attacks will threaten vital national interests. Recent attacks and threats have further underscored the need to bolster the cybersecurity of our government’s and our nation’s computer systems and, more importantly, of the critical operations and infrastructures they support. Recent examples of attacks include the following: In March 2005, security consultants within the electric industry reported that hackers were targeting the U.S. electric power grid and had gained access to U.S. utilities’ electronic control systems. Computer security specialists reported that, in a few cases, these intrusions had “caused an impact.” While officials stated that hackers had not caused serious damage to the systems that feed the nation’s power grid, the constant threat of intrusion has heightened concerns that electric companies may not have adequately fortified their defenses against a potential catastrophic strike. In January 2005, a major university reported that a hacker had broken into a database containing 32,000 student and employee Social Security numbers, potentially compromising their identities and finances. In similar incidents during 2003 and 2004, it was reported that hackers had attacked the systems of other universities, exposing the personal information of over 1.8 million people. In June 2003, the U.S. government issued a warning concerning a virus that specifically targeted financial institutions. Experts said the BugBear.b virus was programmed to determine whether a victim had used an e-mail address for any of the roughly 1,300 financial institutions listed in the virus’s code. If a match was found, the software attempted to collect and document user input by logging keystrokes and then provided this information to a hacker, who could use it in attempts to break into the banks’ networks. In November 2002, a British computer administrator was indicted on charges that he accessed and damaged 98 computers in 14 states between March 2001 and March 2002, causing some $900,000 in damage. 
These networks belonged to the Department of Defense, the National Aeronautics and Space Administration, and private companies. The indictment alleges that the attacker was able to gain administrative privileges on military computers, copy password files, and delete critical system files. The attacks rendered the networks of the Earle Naval Weapons Station in New Jersey and the Military District of Washington inoperable. In May 2005, we reported that federal agencies are facing a set of emerging cybersecurity threats that are the result of increasingly sophisticated methods of attack and the blending of once distinct types of attack into more complex and damaging forms. Examples of these threats include spam (unsolicited commercial e-mail), phishing (fraudulent messages used to obtain personal or sensitive data), and spyware (software that monitors user activity without the user’s knowledge or consent). Spam consumes significant resources and is used as a delivery mechanism for other types of cyberattacks; phishing can lead to identity theft, loss of sensitive information, and reduced trust and use of electronic government services; and spyware can capture and release sensitive data, make unauthorized changes, and decrease system performance. Federal law and policies call for critical infrastructure protection (CIP) activities that are intended to enhance the cyber and physical security of both the public and private infrastructures that are essential to national security, national economic security, and national public health and safety. Federal policy designates certain federal agencies as lead federal points of contact for the critical infrastructure sectors and assigns them responsibility for infrastructure protection activities in their assigned sectors and for coordination with other relevant federal agencies, state and local governments, and the private sector to carry out related responsibilities (see app. 1). In addition, federal policy establishes the Department of Homeland Security (DHS) as the focal point for the security of cyberspace—including analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for public and private critical infrastructure information systems. To accomplish this mission, DHS is to work with other federal agencies, state and local governments, and the private sector. Among the many CIP responsibilities established for DHS and identified in federal law and policy are 13 key cybersecurity-related responsibilities. These include general CIP responsibilities that have a cyber element (such as developing national plans, building partnerships, and improving information sharing) as well as responsibilities that relate to the five priorities established by the National Strategy to Secure Cyberspace. The five priorities are (1) developing and enhancing national cyber analysis and warning, (2) reducing cyberspace threats and vulnerabilities, (3) promoting awareness of and training in security issues, (4) securing governments’ cyberspace, and (5) strengthening national security and international cyberspace security cooperation. Table 1 provides a description of each of these responsibilities. In June 2003, DHS established the National Cyber Security Division (NCSD), under its Information Analysis and Infrastructure Protection Directorate, to serve as a national focal point for addressing cybersecurity issues and to coordinate implementation of the cybersecurity strategy. 
NCSD also serves as the government lead on a public/private partnership supporting the U.S. Computer Emergency Response Team (US-CERT) and as the lead for federal government incident response. NCSD is headed by the Office of the Director and includes a cybersecurity partnership program as well as four branches: US-CERT Operations, Law Enforcement and Intelligence, Outreach and Awareness, and Strategic Initiatives. DHS has initiated efforts that begin to address each of its 13 key responsibilities for cybersecurity; however, the extent of progress varies among these responsibilities, and more work remains to be done on each. For example, DHS (1) has recently issued an interim plan for infrastructure protection that includes cybersecurity plans, (2) is supporting a national cyber analysis and warning capability through its role in US-CERT, and (3) has established forums to build greater trust and to encourage information sharing among federal officials with information security responsibilities and among various law enforcement entities. However, DHS has not yet developed a national cyber threat assessment and sector vulnerability assessments—or the identification of cross-sector interdependencies—that are called for in the cyberspace strategy. The importance of such assessments is illustrated in our recent reports on vulnerabilities in infrastructure control systems and in wireless networks. Further, the department has not yet developed and exercised government and government/industry contingency recovery plans for cybersecurity, including a plan for recovering key Internet functions. The department also continues to have difficulties in developing partnerships, as called for in federal policy, with other federal agencies, state and local governments, and the private sector. Without such partnerships, it is difficult to develop the trusted, two-way information sharing that is essential to improving homeland security. Table 2 provides an overview of the steps that DHS has taken related to each of its 13 key responsibilities and identifies the steps that remain. DHS faces a number of challenges that have impeded its ability to fulfill its cyber CIP responsibilities. Key challenges include achieving organizational stability, gaining organizational authority, overcoming hiring and contracting issues, increasing awareness about cybersecurity roles and capabilities, establishing effective partnerships with stakeholders (other federal, state, and local governments and the private sector), achieving two-way information sharing with these stakeholders, and providing and demonstrating the value DHS can provide. Organizational stability: Over the last year, multiple senior DHS cybersecurity officials—including the NCSD Director; the Deputy Director responsible for Outreach and Awareness; the Director of the US-CERT Control Systems Security Center; the Under Secretary for the Information Analysis and Infrastructure Protection Directorate; and the Assistant Secretary responsible for the Information Protection Office—have left the department. Infrastructure sector officials stated that the lack of stable leadership has diminished NCSD’s ability to maintain trusted relationships with its infrastructure partners and has hindered its ability to adequately plan and execute activities. According to one private-sector representative, the importance of organizational stability in fostering strong partnerships cannot be overemphasized. 
Organizational authority: NCSD does not have the organizational authority it needs to effectively serve as a national focal point for cybersecurity. As a result, its officials lack the authority to represent and commit DHS to efforts with the private sector. Infrastructure and cybersecurity officials, including the chairman of the sector coordinators and representatives of the cybersecurity industry, have expressed concern that the cybersecurity division’s relatively low position within the DHS organization hinders its ability to accomplish cybersecurity-related goals. NCSD’s lack of authority has led to some missteps, including DHS’s cancellation of an important cyber event without explanation and its taking almost a year to issue formal responses to private sector recommendations that resulted from selected National Cyber Security Summit task forces—even though responses were drafted within months. A congressional subcommittee also expressed concern that DHS’s cybersecurity office lacks the authority to effectively fulfill its role. In 2004 and again in 2005, the subcommittee proposed legislation to elevate the head of the cybersecurity office to an assistant secretary position. Among other benefits, the subcommittee reported that such a change could provide more focus and authority for DHS’s cybersecurity mission, allow higher level input into national policy decisions, and provide a single visible point of contact within the federal government for improving interactions with the private sector. To try to address these concerns, DHS recently announced that it would elevate responsibility for cybersecurity to an assistant secretary position. Hiring and contracting: Ineffective DHS management processes have impeded the department’s ability to hire employees and maintain contracts. We recently reported that since DHS’s inception, its leadership has provided a foundation for maintaining critical operations while it undergoes transformation. However, in managing its transformation, we noted that the department still needed to overcome a number of significant challenges, including addressing systemic problems in human capital and acquisition systems. Federal and nonfederal officials expressed concerns about its hiring and contracting processes. For example, an NCSD official reported that the division has had difficulty in hiring personnel to fill vacant positions. These officials stated that even after qualified candidates were identified, some decided not to continue with the application process, and one withdrew his acceptance because he felt that DHS’s hiring process had taken too long. In addition, a cybersecurity division official stated that there had been times when DHS did not renew NCSD contracts in a timely manner, requiring that key contractors work without pay until approvals could be completed and payments could be made. In other cases, NCSD was denied services from a vendor because the department had repeatedly failed to pay this vendor for its services. External stakeholders, including an ISAC representative, also noted that NCSD is hampered by how long it takes DHS to award a contract. Awareness of DHS roles and capabilities: Many infrastructure stakeholders are not yet aware of DHS’s cybersecurity roles and capabilities. Department of Energy critical infrastructure officials stated that the roles and responsibilities of DHS and the sector-specific agencies need to be better clarified in order to improve coordination. 
In addition, during a regional cyber exercise, private-sector and state and local government officials reported that the mission of NCSD and the capabilities that DHS could provide during a serious cyber threat were not clear to them. NCSD’s manager of cyber analysis and warning operations acknowledged that the organization has not done an adequate job reaching out to the private sector regarding the department’s role and capabilities. Effective partnerships: NCSD is responsible for leveraging the assets of key stakeholders, including other federal, state, and local governments and the private sector, in order to facilitate effective protection of cyber assets. The ability to develop partnerships greatly enhances the agency’s ability to identify, assess, and reduce cyber threats and vulnerabilities, establish strategic analytical capabilities, provide incident response, enhance government cybersecurity, and improve international efforts. According to one infrastructure sector representative, effective partnerships require building relationships with mutually developed goals; shared benefits and responsibilities; and tangible, measurable results. However, this individual reported that DHS has not typically adopted these principles in pursuing partnerships with the private sector, which dramatically diminishes cybersecurity gains that government and industry could otherwise achieve. For example, it has often informed the infrastructure sectors about government initiatives or sought input after most key decisions have been made. Also, the department has not demonstrated that it recognizes the value of leveraging existing private sector mechanisms, such as information-sharing entities and processes that are already in place and working. In addition, the instability of NCSD’s leadership positions to date has led to problems in developing partnerships. Representatives from two ISACs reported that turnover at the cybersecurity division has hindered partnership efforts. Additionally, IT sector representatives stated that NCSD needs continuity of leadership, regular communications, and trusted policies and procedures in order to build the partnerships that will allow the private sector to share information. Information sharing: We recently identified information sharing in support of homeland security as a high-risk area, and we noted that establishing an effective two-way exchange of information to help detect, prevent, and mitigate potential terrorist attacks requires an extraordinary level of cooperation and perseverance among federal, state, and local governments and the private sector. However, such effective communications are not yet in place in support of our nation’s cybersecurity. Representatives from critical infrastructure sectors stated that entities within their respective sectors still do not openly share cybersecurity information with DHS. As we have reported in the past, much of the concern is that the potential release of sensitive information could increase the threat to an entity. In addition, sector representatives stated that when information is shared, it is not clear whether the information will be shared with other entities—such as other federal entities, state and local entities, law enforcement, or various regulators—and how it will be used or protected from disclosure. 
Representatives from the banking and finance sector stated that the protection provided by the Critical Infrastructure Information Act and the subsequently established Protected Critical Infrastructure Information Program is not clear and has not overcome the trust barrier. Sector representatives have expressed concerns that DHS is not effectively communicating information to them. According to one infrastructure representative, DHS has not matched private sector efforts to share valuable information with a corresponding level of trusted information sharing. An official from the water sector noted that when representatives called DHS to inquire about a potential terrorist threat, they were told that DHS could not share any information and that they should “watch the news.” Providing value: According to sector representatives, even when organizations within their sectors have shared information with NCSD, the entities do not consistently receive useful information in return. They noted that without a clear benefit, they are unlikely to pursue further information sharing with DHS. Federal officials also noted problems in identifying the value that DHS provides. According to Department of Energy officials, the department does not always provide analysis or reports based on the information that agencies provide. Federal and nonfederal officials also stated that most of US-CERT’s alerts have not been useful because they lack essential details or are based on already available information. Further, Treasury officials stated that US-CERT needed to provide relevant and timely feedback regarding the incidents that are reported to it. Clearly, these challenges are not mutually exclusive. That is, addressing challenges in organizational stability and authority will help NCSD build the credibility it needs in order to establish effective partnerships and achieve two-way information sharing. Similarly, effective partnerships and ongoing information sharing with its stakeholders will allow DHS to better demonstrate the value it can add. DHS has identified steps in its strategic plan for cybersecurity that can begin to address these challenges. Specifically, it has established goals and plans for improving human capital management that should help stabilize the organization. Further, the department has developed plans for communicating with stakeholders that are intended to increase awareness of its roles and capabilities and to encourage information sharing. Also, it has established plans for developing effective partnerships and improving analytical and watch and warning capabilities that could help build partnerships and begin to demonstrate added value. However, until it begins to address these underlying challenges, DHS cannot achieve significant results in coordinating cybersecurity activities, and our nation will lack the effective focal point it needs to better ensure the security of cyberspace for public and private critical infrastructure systems. Over the last several years, we have made a series of recommendations to enhance the cybersecurity of critical infrastructures, focusing on the need to (1) develop a strategic analysis and warning capability for identifying potential cyberattacks, (2) protect infrastructure control systems, (3) enhance public/private information sharing, and (4) conduct important threat and vulnerability assessments and address other challenges to effective cybersecurity. These recommendations are summarized below. 
Strategic Analysis and Warnings: In 2001, we reported on the analysis and warnings efforts within DHS’s predecessor, the National Infrastructure Protection Center, and identified several challenges that were impeding the development of an effective strategic analysis and warning capability. We reported that a generally accepted methodology for analyzing strategic cyber-based threats did not exist. Specifically, there was no standard terminology, no standard set of factors to consider, and no established thresholds for determining the sophistication of attack techniques. We also reported that the Center did not have the industry-specific data on factors such as critical systems components, known vulnerabilities, and interdependencies. We therefore recommended that the responsible executive-branch officials and agencies establish a capability for strategic analysis of computer-based threats, including developing a methodology, acquiring expertise, and obtaining infrastructure data. However, officials have taken little action to establish this capability, and therefore our recommendations remain open today. Control Systems: In March 2004, we reported that several factors—including the adoption of standardized technologies with known vulnerabilities and the increased connectivity of control systems to other systems—contributed to an escalation of the risk of cyberattacks against control systems. We recommended that DHS develop and implement a strategy for coordinating with the private sector and with other government agencies to improve control system security, including an approach for coordinating the various ongoing efforts to secure control systems. DHS concurred with our recommendation and, in December 2004, issued a high-level national strategy for control systems security. This strategy includes, among other things, goals to create a capability to respond to attacks on control systems and to mitigate vulnerabilities, bridge industry and government efforts, and develop control systems security awareness. However, the strategy does not yet include underlying details and milestones for completing activities. Information Sharing: In July 2004, we recommended actions to improve the effectiveness of DHS’s information-sharing efforts. We recommended that officials within the Information Analysis and Infrastructure Protection Directorate (1) proceed with and establish milestones for developing an information-sharing plan and (2) develop appropriate DHS policies and procedures for interacting with ISACs, sector coordinators (groups or individuals designated to represent their respective infrastructure sectors’ CIP activities), and sector-specific agencies and for coordination and information sharing within the Information Analysis and Infrastructure Protection Directorate and other DHS components. These recommendations remain open today. Moreover, we recently designated establishing appropriate and effective information-sharing mechanisms to improve homeland security as a new high-risk area. We reported that the ability to share security-related information can unify the efforts of federal, state, and local government agencies and the private sector in preventing or minimizing terrorist attacks. 
Threat and Vulnerability Assessments and Other Challenges: Most recently, in May 2005, we reported that while DHS has made progress in planning and coordinating efforts to enhance cybersecurity, much more work remains to be done to fulfill its basic responsibilities—including conducting important threat and vulnerability assessments and developing recovery plans. Further, we reported that DHS faces key challenges in building its credibility as a stable, authoritative, and capable organization and in leveraging private/public assets and information in order to clearly demonstrate the value it can provide. We made recommendations to strengthen the department’s ability to implement key cybersecurity responsibilities by prioritizing and completing critical activities and resolving underlying challenges. We recently met with DHS’s acting director for cybersecurity who told us that DHS agreed with our findings and has initiated plans to address our recommendations. He acknowledged that DHS has not adequately leveraged its public and private stakeholders in a prioritized manner and that it plans to begin its prioritized approach by focusing stakeholders on information sharing, preparedness, and recovery. He added that NCSD is attempting to prioritize its major activities consistent with the secretary’s vision of risk management and the National Infrastructure Protection Plan approach. In summary, as our nation has become increasingly dependent on timely, reliable information, it has also become increasingly vulnerable to attacks on the information infrastructure that supports the nation’s critical infrastructures (including the energy, banking and finance, transportation, telecommunications, and drinking water infrastructures). Federal law and policy acknowledge this by establishing DHS as the focal point for coordinating cybersecurity plans and initiatives with other federal agencies, state and local governments, and private industry. DHS has made progress in planning and coordinating efforts to enhance cybersecurity, but much more work remains to be done for the department to fulfill its basic responsibilities—including conducting important threat and vulnerability assessments and developing recovery plans. As DHS strives to fulfill its mission, it faces key challenges in building its credibility as a stable, authoritative, and capable organization and in leveraging private and public assets and information in order to clearly demonstrate the value it can provide. Until it overcomes the many challenges it faces and completes critical activities, DHS cannot effectively function as the cybersecurity focal point intended by law and national policy. As such, there is increased risk that large portions of our national infrastructure are either unaware of key areas of cybersecurity risks or unprepared to effectively address cyber emergencies. Over the last several years, we have made a series of recommendations to enhance the cybersecurity of critical infrastructures. These include (1) developing a strategic analysis and warning capability for identifying potential cyberattacks, (2) protecting infrastructure control systems, (3) enhancing public/private information sharing, and (4) conducting important threat and vulnerability assessments and addressing other challenges to effective cybersecurity. Effectively implementing these recommendations could greatly improve our nation’s cybersecurity posture. Mr. Chairman, this concludes my statement. 
I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at [email protected]. Other key contributors to this report include Joanne Fiorino, Michael Gilmore, Barbarol James, Colleen Phillips, and Nik Rapelje. The critical infrastructure sectors are described below.
Agriculture. Provides for the fundamental need for food. The infrastructure includes supply chains for feed and crop production.
Banking and finance. Provides the financial infrastructure of the nation. This sector consists of commercial banks, insurance companies, mutual funds, government-sponsored enterprises, pension funds, and other financial institutions that carry out transactions, including clearing and settlement.
Chemical industry. Transforms natural raw materials into commonly used products benefiting society’s health, safety, and productivity. The chemical industry produces more than 70,000 products that are essential to automobiles, pharmaceuticals, food supply, electronics, water treatment, health, construction, and other necessities.
Commercial facilities. Includes prominent commercial centers, office buildings, sports stadiums, theme parks, and other sites where large numbers of people congregate to pursue business activities, conduct personal commercial transactions, or enjoy recreational pastimes.
Dams. Comprises approximately 80,000 dam facilities, including larger and nationally symbolic dams that are major components of other critical infrastructures that provide electricity and water.
Defense industrial base. Supplies the military with the means to protect the nation by producing weapons, aircraft, and ships and providing essential services, including information technology and supply and maintenance.
Drinking water. Sanitizes the water supply through about 170,000 public water systems. These systems depend on reservoirs, dams, wells, treatment facilities, pumping stations, and transmission lines.
Emergency services. Saves lives and property from accidents and disaster. This sector includes fire, rescue, emergency medical services, and law enforcement organizations.
Energy. Provides the electric power used by all sectors, including critical infrastructures, and the refining, storage, and distribution of oil and gas. The sector is divided into electricity and oil and natural gas.
Food. Carries out the post-harvesting of the food supply, including processing and retail sales.
Government. Ensures national security and freedom and administers key public functions.
Government facilities. Includes the buildings owned and leased by the federal government for use by federal entities.
Information technology and telecommunications. Provides communications and processes to meet the needs of businesses and government.
National monuments and icons. Includes key assets that are symbolically equated with traditional American values and institutions or U.S. political and economic power.
Nuclear. Includes 104 commercial nuclear reactors; research and test nuclear reactors; nuclear materials; and the transportation, storage, and disposal of nuclear materials and waste.
Postal and shipping. Delivers private and commercial letters, packages, and bulk assets. The U.S. Postal Service and other carriers provide the services of this sector.
Public health. Mitigates the risk of disasters and attacks and also provides recovery assistance if an attack occurs. The sector consists of health departments, clinics, and hospitals.
Transportation. Enables movement of people and of assets that are vital to our economy, mobility, and security via aviation, ships, rail, pipelines, highways, trucks, buses, and mass transit.
Increasing computer interconnectivity has revolutionized the way that our government, our nation, and much of the world communicate and conduct business. While the benefits have been enormous, this widespread interconnectivity also poses significant risks to our nation's computer systems and, more importantly, to the critical operations and infrastructures they support. The Homeland Security Act of 2002 and federal policy established the Department of Homeland Security (DHS) as the focal point for coordinating activities to protect the computer systems that support our nation's critical infrastructures. GAO was asked to summarize previous work, focusing on (1) DHS's responsibilities for cybersecurity-related critical infrastructure protection (CIP), (2) the status of the department's efforts to fulfill these responsibilities, (3) the challenges it faces in fulfilling its cybersecurity responsibilities, and (4) recommendations GAO has made to improve cybersecurity of our nation's critical infrastructure. As the focal point for CIP, the Department of Homeland Security (DHS) has many cybersecurity-related roles and responsibilities that GAO identified in law and policy. DHS established the National Cyber Security Division to take the lead in addressing the cybersecurity of critical infrastructures. While DHS has initiated multiple efforts to fulfill its responsibilities, it has not fully addressed any of the 13 responsibilities, and much work remains ahead. For example, the department established the United States Computer Emergency Readiness Team as a public/private partnership to make cybersecurity a coordinated national effort, and it established forums to build greater trust and information sharing among federal officials with information security responsibilities and law enforcement entities. However, DHS has not yet developed national cyber threat and vulnerability assessments or government/industry contingency recovery plans for cybersecurity, including a plan for recovering key Internet functions. DHS faces a number of challenges that have impeded its ability to fulfill its cybersecurity-related CIP responsibilities. These key challenges include achieving organizational stability, increasing awareness about cybersecurity roles and capabilities, establishing effective partnerships with stakeholders, and achieving two-way information sharing with these stakeholders. In its strategic plan for cybersecurity, DHS identifies steps that can begin to address the challenges. However, until it confronts and resolves these underlying challenges and implements its plans, DHS will have difficulty achieving significant results in strengthening the cybersecurity of our critical infrastructures. In recent years, GAO has made a series of recommendations to enhance the cybersecurity of critical infrastructures that if effectively implemented could greatly improve our nation's cybersecurity posture.
Both the Clean Water and Drinking Water SRF programs allow EPA to provide states and local communities with independent and sustainable sources of financial assistance. This assistance is typically in the form of low- or no-interest loans for projects that protect or improve water quality and that are needed to comply with federal drinking water regulations and protect public health. Repayment of these loans replenishes the funds, making it possible to fund future loans for additional projects. The Clean Water SRF program was established in 1987 under the Clean Water Act, which was enacted to protect surface waters, such as rivers, lakes, and coastal areas, and to maintain and restore the physical, chemical, and biological integrity of these waters. The Drinking Water SRF program was established in 1996 under the Safe Drinking Water Act, which was enacted to establish national enforceable standards for drinking water quality and to guarantee that water suppliers monitor water to ensure compliance with standards. The Recovery Act provided $6 billion for EPA’s Clean Water and Drinking Water SRF programs. This amount represents a significant increase over the federal funds awarded to the non-Recovery Act, or base, SRF programs in recent years. From fiscal years 2000 through 2009, annual appropriations averaged about $1.1 billion for the Clean Water SRF program and about $833 million for the Drinking Water SRF program. In addition to increasing funds, the Recovery Act included some new requirements for the SRF programs. First, projects funded with Recovery Act SRF program funds had to be under contract—ready to proceed—within 1 year of the act’s passage, or by February 17, 2010. Second, states had to use at least 20 percent of these funds as a “green reserve” to provide assistance for green infrastructure projects, water- or energy-efficiency improvements, or other environmentally innovative activities. Third, states had to use at least 50 percent of Recovery Act funds to provide “additional subsidies” for projects in the form of principal forgiveness, grants, or negative interest loans. Uses for these additional subsidies can include helping economically disadvantaged communities build water projects, although these uses are not a requirement of the act. With some variation, Congress incorporated two of these requirements—green projects and additional subsidies—into the fiscal year 2010 and 2011 base SRF program appropriations. In addition to meeting requirements from program-specific provisions, water projects receiving Recovery Act funds have to meet requirements from the act’s Buy American and Davis-Bacon provisions. The Recovery Act generally requires that all of the iron, steel, and manufactured goods used in a project be produced in the United States, subject to certain exceptions. Federal agencies can issue waivers for certain projects under specified conditions, for example, if using American-made goods is inconsistent with the public interest or if the cost of goods is unreasonable; the act limits the “unreasonable cost” exception to those instances when inclusion of American-made iron, steel, or other manufactured goods will increase the overall project cost by more than 25 percent. Furthermore, recipients do not need to use American-made goods if they are not sufficiently available or not of satisfactory quality. 
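The “unreasonable cost” exception lends itself to a simple arithmetic check. The following sketch (in Python, with a hypothetical function name and dollar figures) illustrates the 25 percent threshold described above.

def unreasonable_cost_exception_applies(cost_with_domestic_goods, cost_with_foreign_goods):
    # The exception applies when American-made iron, steel, or manufactured
    # goods would increase the overall project cost by more than 25 percent.
    increase = (cost_with_domestic_goods - cost_with_foreign_goods) / cost_with_foreign_goods
    return increase > 0.25

print(unreasonable_cost_exception_applies(1_300_000, 1_000_000))  # True: a 30 percent increase
print(unreasonable_cost_exception_applies(1_200_000, 1_000_000))  # False: a 20 percent increase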
In addition, the Recovery Act applies Davis-Bacon provisions to all Recovery Act-funded projects, requiring contractors and subcontractors to pay all laborers and mechanics at least the prevailing wage rates in the local area where they are employed, as determined by the Secretary of Labor. Contractors are required to pay these workers weekly and submit weekly certified payroll records. To enhance transparency and accountability over Recovery Act funds, Congress and the administration built numerous provisions into the act, including a requirement that recipients of Recovery Act funding— including state and local governments, private companies, educational institutions, nonprofits, and other private organizations—report quarterly on a number of measures. (Recipients, in turn, may award Recovery Act funds to subrecipients, which are nonfederal entities.) These reports are referred to as “recipient reports,” which the recipients provide through one Web site, www.federalreporting.gov (Federalreporting.gov) for final publication through a second Web site, www.recovery.gov (Recovery.gov). Recipient reporting is overseen by the responsible federal agencies, such as EPA, in accordance with Recovery Act guidance provided by the Office of Management and Budget (OMB). Under this guidance, the federal agencies are required to conduct data quality checks of recipient data, and recipients can correct the data, before they are made available on Recovery.gov. Furthermore, additional corrections can be made during a continuous correction cycle after the data are released on Recovery.gov. A significant aspect of accountability for Recovery Act funds is oversight of spending. According to the federal standards of internal control, oversight should provide managers with current information on expenditures to detect problems and proactively manage risks associated with unusual spending patterns. In guidance issued in February 2009, OMB required each federal agency to develop a plan detailing the specific activities—including monitoring activities—that it would undertake to manage Recovery Act funds. EPA issued its first version of this plan in May 2009, as required, and updated this document as OMB issued new guidance. Nationwide, the 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act and reported using the majority of these funds for sewage treatment infrastructure and drinking water treatment and distribution systems, according to EPA data. In the nine states we reviewed, the states used these funds to pay for infrastructure projects that help to address major water quality problems, although state officials said that in some cases, Recovery Act requirements changed their priorities or the projects selected for funding. The nine states also used their Recovery Act funding to help economically disadvantaged communities, but state officials indicated that they continue to have difficulty helping these communities. As of March 30, 2011, states had awarded funds for contracts and obligated the $4 billion in Clean Water SRF program funds and $2 billion in Drinking Water SRF program funds provided under the Recovery Act. As we reported in May 2010, EPA indicated that all 50 states met the Recovery Act requirement to award Recovery Act funds to projects under contract by February 17, 2010, 1 year after the enactment of the Recovery Act. 
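Whether a state’s Recovery Act allotment satisfies the green reserve and additional subsidy requirements described earlier reduces to two percentage checks. The following sketch (in Python, with hypothetical figures) illustrates the test; it is not a tool that EPA or the states used.

def meets_recovery_act_shares(total_funds, green_reserve, additional_subsidies):
    # At least 20 percent must support green projects and at least 50 percent
    # must be provided as additional subsidies (principal forgiveness, grants,
    # or negative interest loans).
    return (green_reserve / total_funds >= 0.20
            and additional_subsidies / total_funds >= 0.50)

# A hypothetical state allotment of $100 million:
print(meets_recovery_act_shares(100_000_000, 22_000_000, 55_000_000))  # True
print(meets_recovery_act_shares(100_000_000, 15_000_000, 55_000_000))  # False

As the data below indicate, states' actual shares exceeded both floors nationwide.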
In the 2 years since the Recovery Act was passed, states have drawn down from the Treasury approximately 79 percent, or $3.1 billion, of the Clean Water SRF program funds and approximately 83 percent, or $1.7 billion, of the Drinking Water SRF program funds. Across the nation, the states have used the almost $6 billion in Recovery Act Clean and Drinking Water SRF program funds to support more than 3,000 water quality infrastructure projects. As shown in figure 1, the states used the majority of their Recovery Act Clean Water SRF program funds to improve secondary and advanced treatment at wastewater treatment plants, as well as projects to prevent or mitigate sanitary sewer overflow. In Montevallo, Alabama, for example, the state provided Clean Water SRF program funds to upgrade an outdated wastewater treatment plant in Shelby County that served a population of about 5,000. The upgrade added two large settlement basins to hold and treat wastewater, replacing a series of small basins (see fig. 2). The additional treatment is expected to remove nutrients, such as nitrogen and phosphorus, to help the county meet higher standards in the nearby waterways receiving the plant’s discharged water. As shown in figure 3, the states used about half of their Recovery Act Drinking Water SRF program funds to construct projects to transmit and distribute drinking water, including pumps and pipelines to deliver water to customers. States used about 40 percent of their funds for projects to treat and store drinking water. In Baltimore, Maryland, for example, the state provided funds to the city to cover one of its treated water reservoirs at the Montebello drinking water treatment plant. Before it was covered, the reservoir was open to birds and other sources of contamination, and city water managers used a mesh-like material to try to keep birds from landing on or using the water. When the project is complete, the reservoir will be a large concrete tank buried under soil and vegetation (see fig. 4 for the project under construction in December 2010). According to EPA data, all states met the requirement to use at least 20 percent of their Recovery Act funding for green projects, with $1.1 billion of Clean Water SRF program funds and $544 million of Drinking Water SRF program funds going to green projects. According to EPA, the goal of supporting green projects is to promote green infrastructure, energy or water efficiency, and innovative ways to sustainably manage water resources. Green infrastructure refers to a variety of technologies or practices—such as green roofs, porous pavement, and rain gardens—that use or mimic natural systems to enhance overall environmental quality. In addition to retaining rainfall and snowmelt and allowing them to seep into groundwater, these technologies can mitigate urban heat islands and sequester carbon. Figure 5 shows the amount of Clean Water and Drinking Water SRF program funds that states awarded to green projects by type of project. In Annapolis, Maryland, for example, city officials used Clean Water SRF program funds to construct a green parking lot, a project that helped retain and filter storm water runoff. (See fig. 6.) In Los Alamos, New Mexico, city officials used Clean Water SRF program funds to install facilities to recycle water at the city’s wastewater treatment plant; the recycled water will be used as washwater—water that is used in the plant to clean equipment (see fig. 7). 
Because New Mexico is an arid state, the reuse of water conserves scarce water resources as well as saving the plant operating costs.

Nationwide, the states also met the Recovery Act requirement to provide at least 50 percent of the Clean Water and Drinking Water SRF program funds as additional subsidies in the form of principal forgiveness, negative interest loans, or grants (i.e., not loans to be fully repaid). Of the total Recovery Act funds awarded, 76 percent of Clean Water and 70 percent of Drinking Water SRF Recovery Act funds were distributed as additional subsidies. Figure 8 shows the total Clean Water and Drinking Water Recovery Act funds awarded by the states as principal forgiveness, negative interest loans, or grants. The remaining 24 percent of Clean Water and 30 percent of Drinking Water SRF Recovery Act funds will be provided as low- or no-interest loans that will recycle back into the programs as subrecipients repay their loans.

In the nine states we reviewed, Recovery Act Clean and Drinking Water SRF program funds have been used to address some of the major clean and drinking water problems in the states. These nine states received a total of about $832 million in Recovery Act SRF program funds—about $579 million for their Clean Water SRF programs and about $253 million for their Drinking Water SRF programs. In total, these funds supported 419 clean and drinking water projects.

To award SRF program funds, each of the nine states used a system to score and rank the water projects submitted by local municipalities or utilities seeking funds to address water quality problems. The projects with the most points are considered the highest priority on the list of projects for funding. For example, Nevada officials told us that groundwater contamination is their state's major clean water quality problem, which their ranking system addresses by designating the elimination of existing groundwater contamination as one of the state's highest-scoring priorities. In addition, in most of the nine states we reviewed, compliance is a key aspect of the ranking system, allowing points to be awarded to infrastructure projects that help the states eliminate causes of noncompliance with federal or state water quality standards and permits. Officials in most of the nine states said that they generally obtain information on their water systems' compliance with federal and state water quality standards through discussions with their program compliance staff and from state databases. Michigan, for example, assigns a significant number of points to clean water projects—such as sewage treatment works—that help municipalities comply with enforcement actions brought by the state.

In the nine states we reviewed, officials said that Recovery Act priorities—including the requirements for projects to be ready to proceed to contract 1 year after the passage of the Recovery Act or for green projects—either changed their priorities for ranking and funding projects or changed the projects they funded.

Readiness of a project to proceed to construction requirement. In the nine states, officials included readiness to proceed and other Recovery Act requirements in their ranking system and selected projects on the basis of that ranking system, or said that they did not fund—or bypassed—top-ranked projects that were not ready to proceed to construction by February 17, 2010, 1 year after the passage of the Recovery Act.
For example, Washington State's two top-ranked clean water projects did not receive Recovery Act SRF program funds because they could not meet the February 2010 deadline. The projects were to decommission septic systems and construct a wastewater treatment plant to reduce phosphorus discharges to the Spokane River. In Wyoming, many of the projects that were not ready to proceed were water treatment plants, which state officials said take longer to design and plan for construction. Although these higher-ranked projects did not receive Recovery Act funds, at least two states were able to fund these projects in other ways, such as through state grants or non-Recovery Act SRF program funds.

Green project requirement. Three states listed green projects separately from other projects. For example, Washington State officials who manage the Clean Water SRF program told us that they established a green projects category because they had anticipated that projects focused primarily on energy and water efficiency (green projects) would not score well under their ranking system, which focuses on water quality protection and improvements. Other states funded green projects ahead of higher-ranked projects. For example, Nevada did not fund a number of higher-ranked projects and funded a lower-ranked drinking water project that had green components. Similarly, Maryland bypassed many projects to fund the first green-ranked project on its list.

Buy American and Davis-Bacon provisions. State officials identified a few projects that did not proceed because potential subrecipients either did not want to meet one or more Recovery Act requirements, such as the Buy American and Davis-Bacon provisions, or did not want to increase the cost of their projects. For example, local officials in Alabama withdrew their application for a drinking water project because the project was already contracted without Buy American and Davis-Bacon wage requirements, and an addendum to the contract to meet those requirements would have increased the project's cost. Similarly, officials in all nine states said that a few communities indicated that they preferred to have their projects funded from the base program, or chose not to apply for or withdrew from the Recovery Act funding process, to avoid paperwork or the additional costs associated with the act's Buy American or Davis-Bacon requirements. For example, Wyoming officials said that potential subrecipients for three clean water projects refused funding, citing time constraints or difficulty meeting Buy American requirements.

Despite changes in priorities for ranking and funding projects or in the projects funded, officials reported that they were able to fund projects with Recovery Act funds that helped resolve their major water problems. For example, Wyoming officials told us that Recovery Act clean and drinking water funds were used to replace aging sewer and water lines, which they said was one of their major problems. Connecticut officials said that Recovery Act funding helped support four combined sewer overflow projects, which resulted in fewer discharges of partially treated sewage into the area waterways. Nevada officials told us that Recovery Act funding will help with the rehabilitation and relining of sewer ponds in four rural communities, eliminating groundwater pollution, a major problem in the state. Washington State officials who manage the Drinking Water SRF program told us that six of their Recovery Act projects addressed arsenic drinking water contamination, a major water problem in the state.
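The score-rank-bypass selection that these officials describe can be made concrete with a minimal sketch; the project names, point scores, costs, and funding level below are hypothetical.

```python
# Minimal sketch of the states' project selection: projects are funded in
# rank order by priority points, but a project is bypassed if it cannot be
# under contract by the Recovery Act deadline. All data are hypothetical.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    points: int               # score from the state's ranking system
    cost: float               # requested SRF funding
    ready_by_deadline: bool   # able to proceed to contract by Feb. 17, 2010

def select_projects(projects, available_funds):
    """Fund ready projects in rank order; bypass those not ready in time."""
    funded, bypassed = [], []
    for p in sorted(projects, key=lambda p: p.points, reverse=True):
        if not p.ready_by_deadline:
            bypassed.append(p)   # may still be funded from non-Recovery Act sources
        elif p.cost <= available_funds:
            funded.append(p)
            available_funds -= p.cost
    return funded, bypassed

candidates = [
    Project("Wastewater treatment plant, City X", 95, 12e6, ready_by_deadline=False),
    Project("Septic decommissioning, County Y", 90, 4e6, ready_by_deadline=True),
    Project("Sewer overflow control, Town Z", 80, 6e6, ready_by_deadline=True),
]

funded, bypassed = select_projects(candidates, available_funds=10e6)
print([p.name for p in funded])    # two lower-ranked but ready projects
print([p.name for p in bypassed])  # the top-ranked project, not ready in time
```

As in the Washington State and Nevada examples above, the top-scoring project is bypassed because it cannot meet the deadline, and the next-ranked ready projects absorb the available funds.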
Although the Recovery Act did not require states to target Clean and Drinking Water SRF program funds to economically disadvantaged communities, six of the nine states that we reviewed distributed more than $123 million in clean water funds, and eight of the nine states distributed almost $78 million in drinking water funds, under the SRF Recovery Act programs to these communities. This amount represents about 24 percent of the almost $832 million in Recovery Act funds that the states were awarded. As shown in table 1, a large majority of the funds provided to these communities were provided as additional subsidies—grants, principal forgiveness, and negative interest loans. According to officials in five of the nine states we reviewed, their states provided additional subsidies to economically disadvantaged communities because the communities would otherwise have had a difficult time funding projects. For example, New Mexico officials told us that they directed additional drinking water subsidies to economically disadvantaged communities because these communities have historically lacked access to capital. Officials in Nevada told us that such communities not only have a difficult time funding projects but also have some of the projects with the highest priority for addressing public health and environmental protection concerns. In addition, officials in a few other states told us that economically disadvantaged communities often lack the financial means to pay back loans from the SRF programs or lack funds to pay for the upfront costs of planning and designing a project. Officials in at least two states also said that many economically disadvantaged communities lack full-time staff to help manage their water infrastructure.

Even with the additional subsidies available for projects, officials in a few states said that economically disadvantaged communities found it difficult to obtain Recovery Act funds. For example, Missouri officials told us that the Recovery Act deadline was the single most important factor hindering the ability of these communities to receive funding. New Mexico officials also told us that because these communities typically do not have funds to plan and develop projects, few could meet the deadline, and several projects that sought Recovery Act funds could not be awarded funding owing to the deadline.

We gathered information on economically disadvantaged communities from the nine states we reviewed because EPA did not collect the information. In April 2011, the EPA Office of Inspector General (OIG) reported that EPA could not assess the overall impact of Recovery Act funds on economically disadvantaged communities because the agency did not collect data on the amount of Clean and Drinking Water SRF program funds distributed to these communities nationwide. The OIG recommended that EPA establish a system that can target program funds to its objectives and priorities, such as funding economically disadvantaged communities.

For the quarter ending December 2009 through the quarter ending June 2010, the number of FTEs paid for with Recovery Act SRF program funds increased each reporting quarter, from about 6,000 to 15,000 FTEs for planning, designing, and building water projects (see fig. 9). As projects were completed and funds spent, the number of FTEs funded declined to about 6,000 for the quarter ending March 2011. Following OMB guidance, states reported on FTEs directly paid for with Recovery Act funding, not the employment impact on suppliers of materials (indirect jobs) or on the local communities (induced jobs). In addition, state officials told us that, although funding varies from project to project, as much as 80 percent of a project's funding generally is used for materials and equipment—such as cement for buildings, and turbines, pumps, and centrifuges—and the remainder pays for labor or FTEs.
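Under OMB's recipient reporting guidance, the jobs measure is expressed as quarterly FTEs: Recovery Act-funded hours worked during the quarter divided by the number of hours in a full-time quarterly schedule. The following minimal sketch illustrates the calculation; the project names and hours are hypothetical, and the 520-hour full-time quarter (40 hours per week for 13 weeks) is an illustrative assumption.

```python
# Minimal sketch of the quarterly FTE calculation: Recovery Act-funded
# labor hours in the quarter divided by the hours in a full-time quarterly
# schedule. Project names and hours are hypothetical.

FULL_TIME_HOURS_PER_QUARTER = 520  # assumed: 40 hours/week x 13 weeks

def quarterly_ftes(funded_hours_by_project):
    """Convert Recovery Act-funded labor hours into FTEs for one quarter."""
    return sum(funded_hours_by_project.values()) / FULL_TIME_HOURS_PER_QUARTER

hours = {
    "Sewer rehabilitation, Town A": 4_160,      # 8 full-time workers
    "Treatment plant upgrade, City B": 2_080,   # 4 full-time workers
    "Water main replacement, District C": 780,  # 1.5 full-time workers
}

print(f"FTEs funded this quarter: {quarterly_ftes(hours):.1f}")  # -> 13.5
```

Because the divisor is a full quarter's hours, two half-time workers count as one FTE, and hours reported in the wrong quarter shift the FTE count between quarters—the kind of error some states describe later in this report.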
As Recovery Act Clean Water and Drinking Water SRF program funds have been spent over the last 2 years, EPA officials have monitored projects and spending activity and found that states have generally complied with Recovery Act requirements. Similarly, in the nine states we reviewed, state officials indicated that the site visits they made to monitor Recovery Act projects found few problems. Furthermore, state auditors in the nine states we reviewed continue to monitor and oversee the use of Recovery Act funds, and their reports showed few significant findings.

Since the Recovery Act was enacted, EPA officials have reviewed all 50 states' Recovery Act Clean and Drinking Water SRF programs at least once and have found that states are largely complying with the act's requirements. In our May 2010 report, we recommended that EPA work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with provisions of the Recovery Act-funded Clean Water and Drinking Water SRF programs. EPA updated its oversight plan for Recovery Act funds in part as a response to our recommendation. The plan describes the following monitoring actions for the Recovery Act Clean and Drinking Water SRF programs:

• EPA headquarters staff should visit both SRF programs in every region in fiscal years 2010 and 2011; review all states' Clean Water and Drinking Water SRF programs for these years; and provide training and technical assistance, as needed. Although the oversight plan recommends that headquarters staff visit all regions in 2011, EPA officials decided instead to provide regional training on program eligibility requirements. The officials said that they had visited the regions once and saw greater benefit in providing training.

• EPA's Office of Wastewater Management and Office of Ground Water and Drinking Water will report bimonthly to the Assistant Administrator for Water on oversight activities.

• Regional staff should conduct state reviews twice a year using an EPA-provided checklist or a comparable checklist; examine four project files; and conduct four transaction tests, which can be used to test whether an internal control is working or whether a dollar error has occurred in the processing of a transaction. In addition, regional staff are to discuss each state's inspection process and audit findings with state officials and update headquarters staff on any findings. The regions are to submit to headquarters (1) program evaluation reports, which describe how states are managing their Recovery Act SRF funds and projects; (2) Recovery Act project review checklists, to examine compliance with Recovery Act requirements; and (3) transaction testing forms, to determine if any erroneous payments were made.

• Regional staff should conduct at least one site inspection of a clean water project and a drinking water project in each state each year.
According to our review of the Clean and Drinking Water SRF program evaluation reports for the 50 states, EPA regional officials generally carried out the instructions in EPA's oversight plan. As of June 1, 2011, these officials had visited most state programs twice, although they visited some state programs only once or did not have documentation of the visits. During visits, officials reviewed the files for proper documentation pertaining to Davis-Bacon, Buy American, and green project requirements. Additionally, although not required to do so by the oversight plan, regional officials attempted to visit at least one clean water and one drinking water SRF Recovery Act project in every state each year. Headquarters officials said that the regional staff met this goal for drinking water projects in 2010 but were not able to visit a clean water project in each state because of time and budget constraints.

EPA headquarters officials said that they oversaw each region's activities by visiting the regional offices to review files on the states. Headquarters officials told us that when they visited regional offices, they checked whether key state documents were maintained in the region's state file, such as the Recovery Act grant application and any accompanying amendments; the state's intended use plan, which details a state's planned use of the funds, including the criteria for ranking projects and a list of ranked projects; and a copy of the grant award and conditions. Furthermore, headquarters officials said that they used a regional review checklist to examine each region's oversight practices by, for example, determining whether the regions received and reviewed states' analyses of costs (business cases) and whether the regions ensured that the states updated key reporting data for their Recovery Act projects each quarter. Headquarters officials also said that they briefly reviewed the Drinking Water and Clean Water SRF program evaluation reports when they reviewed the regions' activities. Headquarters officials said they had imposed a 60-day time frame for completing these reports because the regional staff were not submitting the reports in a timely manner.

Additionally, the EPA OIG is conducting performance audits of EPA's and states' use of Recovery Act funds for the Clean and Drinking Water SRF programs and unannounced site inspections of Recovery Act-funded projects. Between May 1, 2010, and May 1, 2011, the OIG conducted eight unannounced site visits. Six of the eight visits yielded no findings. The OIG issued recommendations for the other two projects:

• In a visit to Long Beach, California, the OIG found that a contractor did not fully comply with federal and state prevailing wage requirements, which resulted in underpayments to employees. The OIG recommended that EPA require the California State Water Resources Control Board to verify that the city is implementing controls to ensure compliance with prevailing wage requirements.

• In a visit to Astoria, Oregon, the OIG found that the city understated the number of FTEs created or retained with Recovery Act funds. In addition, the OIG found that a change order for one of four contracts awarded did not meet applicable procurement requirements. The OIG recommended that EPA Region 10 require the Oregon Department of Environmental Quality to require the city to correct the number of FTEs and report the corrected number to the federal government. The OIG also recommended that the regional administrator of EPA Region 10 require the Oregon Department of Environmental Quality to disallow the costs incurred under the change order unless Astoria could show that the costs met applicable Oregon requirements. Officials for EPA, the Oregon Department of Environmental Quality, and the city concurred with the corrective actions.
The Chairman of the Recovery Accountability and Transparency Board testified in June 2011 that there has been a low level of fraud involving Recovery Act funds. He noted that less than half a percent of all reported Recovery Act contracts, grants, and loans had open investigations and that only 144 convictions—involving about $1.9 million of total Recovery Act funds for all programs—had resulted. As the EPA Inspector General noted in May 2011, however, fraud schemes can take time to surface. The Inspector General cited an ongoing investigation of a foreign company that received over $1.1 million in contracts for equipment to be used in wastewater treatment facilities across the United States after falsely certifying that the equipment met the Recovery Act Buy American provision. The Inspector General also testified that EPA Region 6 officials identified, through a hotline tip, $1 million in unallowable grant costs charged by seven subrecipients. These funds have been reprogrammed by the state for other uses.

EPA's oversight plan indicates that state officials should visit each project site at least once per year and suggests that state officials review the items on EPA's state Recovery Act inspection checklist or a similar state-specific checklist. According to the plan, state officials should complete the checklist and inform regional offices of any issues encountered in the oversight reviews, inspections, or audits. According to program officials in the nine states we reviewed, the clean and drinking water SRF projects they reviewed largely complied with Recovery Act requirements. The officials said that they inspected each Recovery Act project site at least once during the course of project construction, and sometimes more frequently, depending on the complexity of the project. These officials also said that, using the EPA or other checklist, they evaluated whether the communities or subrecipients were meeting Recovery Act reporting requirements. For example, according to the checklist, officials verified whether subrecipients submitted FTE information to the state each quarter and whether they submitted regular reports certifying that the project remained in compliance with the Davis-Bacon provisions, based on a weekly review of payroll records. In addition, the officials used the checklist to review the contents of project files and ensure that key project documents, such as project-specific waivers, were present. Using the checklist, these officials also confirmed that projects receiving green infrastructure funding properly incorporated green components. In addition, officials in Alabama, Connecticut, Nevada, and New Mexico took photographs of various project components to record compliance with the Buy American provisions. A few officials in the nine states that we reviewed said that meeting the oversight plan requirements, such as increasing the number of site visits, has been time-consuming.
However, a couple of officials said that their site visits have resulted in better subrecipient compliance with Recovery Act requirements. For example, as a result of their site visits, state officials corrected a problem they had identified—subrecipients in three of the nine states we reviewed had foreign components on site:

• In New Mexico, officials told us that foreign components had been shipped to a project site and that the components had to be replaced before they could be incorporated into the project.

• Missouri officials said that the EPA inspection checklist helped identify some foreign-made components on a project site, and the components were replaced.

• Connecticut officials told us that they had identified a drinking water project that contained Chinese and German equipment valued at $10,000. They said that the project was already in service, making replacement costly and impractical because it would require consumers to be without water. The state is working with EPA to resolve the matter.

State auditors—or private auditors contracted by the states—helped ensure the appropriate use of Recovery Act water funds. For eight of the nine states that we reviewed, we received state or private audits that examined the Recovery Act Clean and Drinking Water SRF programs. With the following two exceptions, the auditors reported few significant problems:

Michigan. In its audit of the Michigan Department of Environmental Quality's fiscal year 2008 and 2009 financial statements, the Michigan Office of the Auditor General reported several material weaknesses in internal controls and material noncompliance with requirements related to subrecipient monitoring and other special provisions for Recovery Act-funded expenditures. For example, for the Recovery Act Clean and Drinking Water SRF programs, the auditors found that the Michigan Department of Environmental Quality overstated the number of FTEs for the reporting period ending September 30, 2009, because its methodology for calculating FTEs was not in accordance with June 2009 OMB guidance. The auditors also found that the department did not have a process to (1) verify the accuracy of the information contained in its recipient report; (2) adequately monitor subrecipients' expending of Recovery Act funds for construction activities to ensure that the subrecipients complied with the Davis-Bacon provisions; and (3) adequately monitor subrecipients' expending of Recovery Act funds for the construction, alteration, maintenance, or repair of a public building or public work to ensure that the subrecipients complied with Buy American provisions. In response to these findings, the auditors recommended that the department improve its internal control over the SRF programs to ensure compliance with federal laws and regulations. The department partially or wholly agreed with these findings and anticipated taking the appropriate corrective action by September 30, 2011. One Michigan official said that corrective action has been implemented for the findings that pertain to the SRF program.

Washington State. In the November 2010 Financial Statements and Federal Compliance report for the Drinking Water SRF program, auditors found significant deficiencies in the Department of Health's internal control.
As a result, they recommended that the Department of Health train employees on financial reporting preparation and requirements; establish and follow internal controls, including an appropriate, independent review of the financial statements and related schedules; and establish policies and procedures related to the preparation of the year-end financial statements. The Department of Health concurred with the finding and stated that it would take appropriate action. In the corresponding report for the Clean Water SRF program, auditors found no internal control weaknesses.

To meet our mandate to comment on recipient reports, we have continued monitoring recipient-reported data, including data on jobs funded. For this report, we focused our review on SRF program funds and on EPA and state efforts to conduct data quality reviews and to identify and remediate reporting problems. According to EPA officials, the overall quality of the states' SRF data on Recovery.gov, which EPA officials have checked each quarter, is stable. The officials said that the states have overcome their initial unfamiliarity with the newly developed reporting system, the Federalreporting.gov help desk has improved, and guidance issued by OMB has clarified reporting issues over time.

During the seventh round of reporting, which ended on March 31, 2011, EPA officials continued to perform data quality checks as they had in previous quarters. Specifically, EPA used data from the agency's grants database, contracts database, and financial management system to compare with recipient-reported data. These systems contain authoritative data for every award made to the states, including the award identification number, award date, award amount, outlays, Treasury Account Symbol codes, and recipient names. According to EPA officials, they use the agency data to ensure that recipient-reported information for a given award corresponds with the information on EPA's official award documents. EPA staff can raise questions about any inconsistent data through the Federalreporting.gov system. State recipients may make appropriate changes to the data through the end of the reporting period and, after public release, during the continuous correction cycle. According to EPA officials, this process has resolved any questions and comments from EPA's reviews.

To facilitate its oversight of state-reported data, EPA required states to use its Clean Water Benefits Reporting (CBR) system and Project Benefits Reporting (PBR) system to report certain Recovery Act project information, such as the project name, contract date, construction start, Recovery Act funding, jobs created or retained, and project purpose and anticipated benefits. EPA officials said that they do not routinely collect state expenditure data in these systems and that they rely on regional officials to review expenditures reported by the states on Recovery.gov. We compared EPA's data on awards and funds drawn down by states with data reported by states on Recovery.gov and found only a few minor inconsistencies. Similarly, in September 2010, EPA's OIG reported that the Recovery.gov data for EPA's SRF programs contained a low rate of errors. The OIG audited EPA's controls for reviewing recipient-reported data after the second round of reporting, which ended December 31, 2009, comparing EPA data on award type, award number, funding agency code, award agency code, and award amount to state-reported data on Recovery.gov. The OIG report found that EPA's controls helped lower the rate of errors for these key data and recommended some improvements to EPA's process. EPA's Clean and Drinking Water SRF program officials said that they have had few errors in the SRF data in the last three rounds of reporting.
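The award-level comparison that EPA and its OIG describe can be illustrated with a minimal sketch; the award numbers, recipient names, and amounts below are hypothetical.

```python
# Minimal sketch of an award-level data quality check: recipient-reported
# values are compared with the agency's authoritative award records, and
# mismatches are flagged for correction. All data are hypothetical.

agency_records = {
    # award number: (recipient name, award amount)
    "2W-00123": ("State Clean Water SRF", 25_000_000),
    "2F-00456": ("State Drinking Water SRF", 12_500_000),
}

recipient_reports = [
    {"award_number": "2W-00123", "award_amount": 25_000_000},
    {"award_number": "2F-00456", "award_amount": 12_050_000},  # likely keying error
]

def flag_discrepancies(records, reports):
    """Return (award number, description) pairs for mismatched reports."""
    issues = []
    for report in reports:
        official = records.get(report["award_number"])
        if official is None:
            issues.append((report["award_number"], "unknown award number"))
        elif report["award_amount"] != official[1]:
            issues.append((report["award_number"],
                           f"reported {report['award_amount']:,}, "
                           f"official {official[1]:,}"))
    return issues

for award, problem in flag_discrepancies(agency_records, recipient_reports):
    print(f"{award}: {problem}")
```

A mismatch flagged this way would be raised with the recipient through Federalreporting.gov and could be corrected through the end of the reporting period or, after public release, during the continuous correction cycle.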
Officials in the nine states we reviewed indicated that the quality of recipient data has remained relatively stable, although we found that the states differed in how they reported state agencies' FTE data and did not report some subrecipients' FTE data. Water program officials in these states said that they check the quality of data that are reported on Federalreporting.gov and then Recovery.gov. In addition, officials in Alabama, Connecticut, Maryland, Missouri, and New Mexico said that they examined payroll data submitted by contractors to verify FTE data. In some cases, state officials said that they contact subrecipients for clarification about data that are missing or inconsistent.

In addition to department-level checks, in most of the nine states we reviewed, state-level Recovery Office staff checked the data before submitting the information to Federalreporting.gov. In four of the nine states—Alabama, Maryland, Missouri, and New Mexico—Recovery Office staff monitored Recovery Act implementation and performed independent data quality checks of the data reported by state agencies. According to several state officials, this reporting structure provided an additional level of review of state agency data. In Maryland, for example, officials said that their state-level reporting system relieves subrecipients of certain reporting duties. Subrecipients submitted the FTE and payroll information to Maryland's StateStat office, and staff in that office reviewed and validated the data, completed the required federal reports, and submitted them to Federalreporting.gov. Furthermore, for control purposes, only two staff members handled the information. In addition, staff in Nevada's Recovery Office conducted quality checks; however, each state agency then submitted its data directly to the appropriate federal agency. The remaining four states—Connecticut, Michigan, Washington State, and Wyoming—did not have Recovery Office staff check data quality.

We found minor problems with the FTE data that some of the nine states reported. Specifically, (1) states differed in how they reported the FTEs associated with their own program staff—that is, those who conduct document reviews, site inspections, and other key program duties; and (2) three states identified missing or incorrectly reported FTE data on Recovery.gov, and these data have not been corrected. In particular:

• Six of the nine states reported the FTEs for their state employees who were paid with Recovery Act funds, while two states did not. Officials in Maryland and Michigan noted that they did not report in Federalreporting.gov all the time their state employees spent on program activities, although these FTEs were paid for with Recovery Act funds. EPA officials said that they provided states with OMB guidance, which requires states to report FTEs paid for with Recovery Act funds.

• Washington State officials who administer the Clean Water SRF program changed the time frame for reporting FTE data in the third round of reporting and, as a result, missed reporting one quarter of data.
During the first two reporting rounds, because some subrecipients were finding it difficult to submit complete FTE data to the state by the state's deadline, staff reported data from 2 months of the current quarter and 1 month of the previous quarter. During the third reporting quarter, the state began reporting 3 months of current data. However, the state received data from a subrecipient after the reporting deadline and did not correct the data during the correction period. As a result, officials said, about 18 FTEs remain unreported. EPA officials told them to keep a record of these FTEs in case there is an opportunity to correct the data.

• Officials in New Mexico did not report a few FTEs for the state's Drinking Water SRF program in the first two rounds of reporting. The officials explained that the revisions were submitted to the state after the reporting period ended and that the FTEs therefore were not captured in Recovery.gov.

• Officials in Wyoming identified incorrectly reported FTEs for two quarters. The FTEs were incorrect because the state entered one quarter's data for a clean water project in the following quarter. As a result, one quarter's data were overstated by a few FTEs, and the other quarter's data were understated by a few FTEs. The state official explained that the data changed after they were initially reported in Recovery.gov and were not updated during the correction period.

As the bulk of Recovery Act funding is spent, EPA officials said that the states were beginning to complete their projects. Officials said that before the next reporting round begins in July 2011, they plan to issue a memorandum to states on how to complete their Recovery Act grants and when to stop reporting to Recovery.gov. During the seventh round of reporting, one state in each program indicated in Recovery.gov that the grant—including all projects that received money from the grant—was complete. EPA officials told us that as of early May 2011, 629 clean water and 383 drinking water projects had been completed across all states.

Some state officials charged with coordinating state-level Recovery Act funds also said that they are winding down their activities. In Michigan, for example, the Recovery Office was originally a separate office under the Governor but has since been moved under the Department of Management and Budget. In Nevada, the Recovery Act Director said that his office will be eliminated at the end of June 2011. At that point, the Department of Administration's centralized grant office will take responsibility for Nevada's remaining Recovery Act efforts. Similarly, officials at the New Mexico Office of Recovery and Reinvestment said that their office is funded by the Recovery Act State Fiscal Stabilization Fund through the end of June 2011.

Because of the high-level nature of SRF recipient reporting for Recovery.gov and the availability of information in its own data systems, EPA officials do not anticipate using data from Recovery.gov. The officials said that whereas Recovery.gov summarizes information on many projects at the state level, the data from CBR and PBR are more useful for understanding states' projects because they are provided by project and include more detail. EPA officials said that by the end of 2011 they plan to use information in these two internal systems to assess anticipated benefits of the Recovery Act SRF program funds.
EPA Clean Water officials said that they would perform case studies of completed projects and assess anticipated benefits. Drinking Water officials said that they are considering three major studies, some of them joint with the Clean Water SRF program. These studies may include assessments of project distributions, green projects' benefits, and subsidy beneficiaries.

Our May 2010 report identified the challenge of maintaining accountability for Recovery Act funds and recommended improved monitoring of Recovery Act funds by EPA and the states. As we note above, our current work shows that EPA and the nine states we reviewed have made progress in addressing this challenge. Two challenges EPA and state officials identified in spending Recovery Act SRF program funds may continue as requirements introduced with the Recovery Act are incorporated into the base SRF programs. Specifically, in fiscal years 2010 and 2011, the Clean and Drinking Water SRF programs were required to include provisions for green projects and additional subsidies.

Encouraging green projects. The effort to support green projects was included in EPA's fiscal year 2010 and 2011 appropriations for the base Clean and Drinking Water SRF programs. As we discussed above, under the Recovery Act's requirement to fund green projects, state officials said that in certain cases they had to choose between a green water project and a project that was otherwise ranked higher to address water quality problems. Similarly, in our May 2010 report, we found that officials in some of the states we reviewed said that they gave preference to green projects for funding purposes and sometimes ranked those projects above another project with higher public health benefits. In addition to competing priorities for funding, EPA's OIG found, in its February 2010 report, that a lack of clear guidance on the "green requirement" caused confusion and disagreements as to which projects were eligible for green funding. Officials in two of the nine states we reviewed noted that the goal of supporting green projects was not difficult to achieve because they had already identified green projects. Officials in four other states said that while they all met the 20 percent green project goal, it was difficult to achieve, leading one official to suggest that green projects be encouraged without setting a fixed percentage of program funds. EPA officials added that they had also heard that achieving the green requirement may continue to be difficult in some states, particularly for the Drinking Water program. However, the officials also said that they were encouraging states to include green components in their drinking water projects rather than seeking solely green projects.

Providing additional subsidies. The fiscal years 2010 and 2011 appropriations for the Clean and Drinking Water SRF programs also continued the requirement to provide additional subsidies in the form of principal forgiveness, negative interest loans, or grants. The provisions reduced the minimum share of funds that must be provided as additional subsidy from 50 percent under the Recovery Act to 30 percent of base SRF program funds. As with the Recovery Act, the appropriations in fiscal years 2010 and 2011 do not require this additional subsidy to be targeted to any types of projects or to communities with economic need, and as the recent EPA OIG report notes, there are no requirements for EPA or the states to track how these subsidies are used.
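A minimal sketch, using a hypothetical $100 million capitalization, shows the arithmetic behind these subsidy minimums and why the subsidized share does not revolve back into the fund.

```python
# Minimal sketch of the additional subsidy minimums: subsidized dollars
# (principal forgiveness, negative interest loans, or grants) are not
# repaid, so they do not revolve back for future loans. The capitalization
# figure is hypothetical.

def split_funds(capitalization, min_subsidy_share):
    """Split funds into the minimum additional subsidy and the maximum
    amount that can go out as repayable loans."""
    subsidy = capitalization * min_subsidy_share
    return subsidy, capitalization - subsidy

capitalization = 100e6  # hypothetical $100 million

for label, share in [("Recovery Act (at least 50%)", 0.50),
                     ("FY 2010-2011 base programs (at least 30%)", 0.30)]:
    subsidy, loans = split_funds(capitalization, share)
    print(f"{label}: ${subsidy / 1e6:.0f} million subsidized, "
          f"${loans / 1e6:.0f} million available as repayable loans")
```

The larger the subsidized share, the less principal returns through loan repayments—the sustainability concern the state officials raise below.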
The base Clean and Drinking Water SRF programs were created to be a sustainable source of funding for communities' water and wastewater infrastructure through the continued repayment of loans to states. Officials in four of the nine states we reviewed identified a potential challenge in continuing to provide a specific amount of subsidies while sustaining the Clean and Drinking Water SRF programs as revolving funds. State officials pointed out that when monies are not repaid into the revolving fund, the reuse of funds is reduced and the purpose of the revolving SRF program changes from primarily providing loans for investments in water infrastructure to providing grants.

We provided a draft of the report to the Environmental Protection Agency for review and comment. EPA stated that it did not have any comments on our report.

We are sending copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

The objectives of this review were to examine the (1) status and use of American Recovery and Reinvestment Act of 2009 (Recovery Act) Clean and Drinking Water State Revolving Fund (SRF) program funds nationwide and in selected states; (2) actions taken by federal, state, and other agencies to monitor and ensure accountability of these program funds; (3) approaches federal agencies and selected states have taken to ensure data quality, including data for jobs reported by recipients of these program funds; and (4) challenges, if any, that states have faced in implementing Recovery Act requirements for the Clean and Drinking Water SRF programs.

To examine the status and use of Recovery Act funds nationwide and in selected states, we reviewed relevant Clean and Drinking Water SRF federal laws, regulations, and guidance and examined federal and selected state program and project documentation. We interviewed Environmental Protection Agency (EPA) headquarters officials responsible for administering the programs. We also interviewed state Recovery Act officials and state program officials, in environmental and public health departments, who are responsible for revolving loan fund programs. We obtained and analyzed nationwide Recovery Act data from the EPA Clean Water SRF Benefits Reporting (CBR) system and the Drinking Water SRF Project Benefits Reporting (PBR) system for all states. These data included (1) categories of clean and drinking water infrastructure and green projects; (2) Recovery Act funds awarded and drawn down from the Treasury; (3) the amount of subsidization (principal forgiveness or grants and low- or no-interest loans); and (4) the number of full-time equivalents (FTEs). We also obtained and analyzed key nationwide data from the EPA National Information Management System on Recovery Act funding by type of clean water project. Using these data, we summarized the amount of Recovery Act funds provided by states to clean and drinking water SRF projects by category of project (e.g., clean water sanitary sewer overflow and drinking water treatment).
We assessed these data for their reliability and determined that they were reliable for our purposes. To develop a more in-depth view of the states' use of Recovery Act funds for the Clean and Drinking Water SRF programs, we selected a nonprobability sample of nine states we had not reviewed in our previous bimonthly reports, representing all but 1 of the 10 EPA regions. The states we selected were Alabama, Connecticut, Maryland, Michigan, Missouri, Nevada, New Mexico, Washington State, and Wyoming. For each state, we interviewed officials from the state environmental department or public health program (water program officials) to discuss their use of Recovery Act SRF program funds. We conducted these interviews using a data collection instrument to obtain consistent information from the states on their water problems and ranking systems for prioritizing projects for funding; the amount of funds provided to projects; the allocation of funding and subsidization to green projects, small communities, and economically disadvantaged communities; the amount of funds received and spent; and the number of FTE positions funded for each project and in total. Additionally, in Alabama, Maryland, and New Mexico, we visited a total of five clean and drinking water projects funded with Recovery Act funds.

To examine the actions that federal, state, and other agencies have taken to monitor and ensure accountability for Recovery Act SRF program funds, we reviewed and analyzed relevant federal guidance and documentation, including EPA's oversight plan for Recovery Act projects. To determine whether EPA was following its oversight plan, we reviewed at least one EPA Recovery Act program evaluation report for the Clean Water and Drinking Water programs for all 50 states for fiscal year 2009 or 2010. We also reviewed EPA headquarters' reviews of regional reports that detailed the performance of regional drinking water staff as they monitored and documented the states' implementation of the Drinking Water SRF program, and we asked headquarters staff about the reviews of regional clean water staff that they conducted but did not document. To develop a more in-depth view of the states' monitoring processes, we asked program officials in the nine states to respond to questions about their oversight activities in our data collection instrument. We then interviewed the state program officials responsible for monitoring and oversight about their activities and efforts to ensure that projects complied with Recovery Act requirements, including their processes for inspecting project sites and their procedures for collecting and reporting Recovery Act SRF program data. In addition, we interviewed Recovery Act officials in the six states that had such staff—Alabama, Maryland, Missouri, Nevada, New Mexico, and Washington—about their oversight of program staff, data quality, and federal reporting. Furthermore, to develop an understanding of the work that the broader audit community has completed on the Recovery Act Clean and Drinking Water SRF programs, we reviewed all relevant EPA Office of Inspector General reports published since we issued our previous report on the Recovery Act SRF programs in May 2010.

To examine approaches federal agencies and selected states have taken to ensure data quality for jobs reported by Recovery Act recipients, we conducted work at both the national and state levels.
The recipient reporting section of this report responds to the Recovery Act's mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our national review of the seventh submission of recipient reports, covering the period from January 1, 2011, through March 31, 2011, we continued our monitoring of errors or potential problems by repeating many of the analyses and edit checks reported in our six prior reviews, which covered the period from February 2009 through December 31, 2010. To examine how the quality of jobs data reported by recipients of Clean and Drinking Water SRF grants has changed over time, we compared the seven quarters of recipient reporting data that were publicly available at Recovery.gov on April 30, 2011. We performed edit checks and other analyses on the Clean and Drinking Water SRF prime recipient reports and compared funding data from EPA with funding amounts reported on the recipient reports. We also reviewed documentation and interviewed the EPA officials responsible for ensuring a reasonable degree of quality across their programs' recipient reports.

At the state level, we interviewed state officials in the nine states we reviewed about the policies and procedures they had in place to ensure that FTE information for Recovery Act projects was reported accurately. For selected Recovery Act data fields, we asked state program officials in the nine states to review and verify EPA's Recovery Act data from CBR and PBR and to provide corrected data where applicable. For the nine states, we compared state-reported Clean and Drinking Water SRF FTE data from the sixth submission of recipient reports, covering the period from October 1, 2010, through December 31, 2010, with corresponding data reported on Recovery.gov. We addressed any discrepancies between these two sets of data by contacting state program officials. Our work at the national level and in selected states showed agreement between EPA recipient information and the information reported by recipients directly to Federalreporting.gov. In general, we consider the data used to be sufficiently reliable for purposes of this report. The results of our FTE analyses are limited to the two SRF water programs and time periods reviewed and are not generalizable to any other program's FTE reporting.

To examine challenges that states have faced in implementing Recovery Act requirements, we interviewed state SRF program officials using a data collection instrument and obtained information on the challenges state program officials identified pertaining to the 20 percent green project requirement and the subsidization requirement.

We conducted this performance audit from September 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In this appendix, we update the status of agencies' efforts to implement the 26 open recommendations and 2 newly implemented recommendations from our previous bimonthly and recipient reporting reviews. Recommendations that were listed as implemented or closed in a prior report are not repeated here. Lastly, we address the status of our Matters for Congressional Consideration.
Given the concerns we have raised about whether program requirements were being met, we recommended in May 2010 that the Department of Energy (DOE), in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that

• clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended;

• accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, which is currently expected to take 2 years to complete;

• develops a best practice guide for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements;

• sets time frames for development and implementation of state monitoring programs; and

• revisits the various methodologies used in determining the weatherization work that should be performed based on the consideration of cost-effectiveness and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work. To validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from weatherization work conducted.

In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommended that DOE clarify its production targets, funding deadlines, and associated consequences while providing a balanced emphasis on the importance of meeting program requirements.

DOE generally concurred with these recommendations and has made some progress on implementing them. For example, to clarify the methodology for calculating the average cost per home, DOE has developed draft guidance to help grantees develop consistency in their average cost per unit calculations. The guidance further clarifies the general cost categories that are included in the average cost per home. DOE anticipates issuing the guidance in June 2011. DOE has also taken steps to address our recommendation that it develop a best practice guide for key internal controls: DOE prepared a memorandum, dated May 13, 2011, to grantees reminding them of their responsibilities to ensure compliance with internal controls and the consequences of failing to do so. At the time of our review, the memorandum was undergoing internal review, and DOE anticipated that it would be released in May 2011.

To better ensure that Energy Efficiency and Conservation Block Grant (EECBG) funds are used to meet Recovery Act and program goals, we recommended in April 2011 that DOE take the following actions:

• Explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices.

• Solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics and verify that recipients who use DOE's estimation tool use the most recent version when calculating these metrics.
DOE generally concurred with these recommendations, stating that "implementing the report's recommendations will help ensure that the Program continues to be well managed and executed." DOE also provided additional information on steps it has initiated or planned. In particular, with respect to our first recommendation, DOE elaborated on additional monitoring practices it performs over high-dollar-value grant recipients, such as its reliance on audit results obtained in accordance with the Single Audit Act and its update to the EECBG program requirements in the Compliance Supplement to OMB Circular No. A-133. However, these monitoring practices focus only on larger grant recipients, and we believe that the program could be more effectively monitored if DOE captured information on the monitoring practices of all recipients. With respect to our second recommendation, DOE officials said that in order to provide a reasonable estimate of energy savings, the program currently reviews energy process and impact metrics submitted each quarter for reasonableness, works with grantees to correct unreasonable metrics, and works with grantees through closeout to refine metrics. In addition, DOE officials said that they plan to take a scientific approach to overall program evaluation during the formal evaluation process at the conclusion of the program, which will occur in December 2012. However, DOE has not yet identified any specific plans to solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics or to verify that recipients who use DOE's estimation tool use the most recent version when calculating these metrics.

We recommended that the Environmental Protection Agency (EPA) Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water State Revolving Fund (SRF) programs. In part in response to our recommendation, EPA provided additional guidance to the states regarding their oversight responsibilities, with an emphasis on enhancing site-specific inspections. Specifically, in June 2010, the agency developed and issued an oversight plan outline for Recovery Act projects that provides guidance on the frequency, content, and documentation related to regional reviews of state Recovery Act programs and regional and state reviews of specific Recovery Act projects. We found that EPA regions have reviewed all 50 states' Clean and Drinking Water SRF programs at least once since the Recovery Act was enacted and have generally carried out the oversight instructions in EPA's plan. For example, regional officials reviewed files with state documents and information to ensure proper controls over Davis-Bacon, Buy American, and other Recovery Act requirements. Regional staff also visited one drinking water project in every state but did not meet this goal for clean water projects due to time and budget constraints. We also found that EPA headquarters officials have been reviewing the regions' performance evaluation reports for states, and the officials said that they implemented a 60-day time frame for completing these reports. In the nine states that we reviewed in this report, program officials described their site visits to projects and their use of the EPA inspection checklist (or a state equivalent), in accordance with EPA's oversight plan.
State officials told us that they visit their Recovery Act projects at least once during construction and sometimes more frequently depending on the complexity of the project. We consider these agency actions to have addressed our recommendation. To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, we recommended that the Director of the Office of Head Start (OHS) should collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. The Department of Health and Human Services (HHS) disagreed with our recommendation. OHS officials stated that attendance data are adequately examined in triennial or yearly on-site reviews and in periodic risk management meetings. Because these reviews and meetings do not collect or report data on service provision, we continue to believe that tracking services to children and families is an important measure of the work undertaken by Head Start and Early Head Start service providers. To help ensure that grantees report consistent enrollment figures, we recommended that the Director of OHS should better communicate a consistent definition of “enrollment” to grantees for monthly and yearly reporting and begin verifying grantees’ definition of “enrollment” during triennial reviews. OHS issued informal guidance on its Web site clarifying monthly reporting requirements to make them consistent with annual enrollment reporting. While this guidance directs grantees to include in enrollment counts all children and pregnant mothers who have received a specified minimum of services, it could be further clarified by specifying that counts should include only those children and pregnant mothers. According to HHS officials, OHS is considering further regulatory clarification. To provide grantees consistent information on how and when they will be expected to obligate and expend federal funds, we recommended that the Director of OHS should clearly communicate its policy to grantees for carrying over or extending the use of Recovery Act funds from one fiscal year into the next. HHS indicated that OHS will issue guidance to grantees on obligation and expenditure requirements, as well as improve efforts to effectively communicate the mechanisms in place for grantees to meet the requirements for obligation and expenditure of funds. To better consider known risks in scoping and staffing required reviews of Recovery Act grantees, we recommended that the Director of OHS should direct OHS regional offices to consistently perform and document Risk Management Meetings and incorporate known risks, including financial management risks, into the process for staffing and conducting reviews. HHS reported that OHS is reviewing the risk management process to ensure it is consistently performed and documented in its centralized data system and that it has taken related steps, such as requiring the Grant Officer to identify known or suspected risks prior to an on-site review. To facilitate understanding of whether regional decisions regarding waivers of the program’s matching requirement are consistent with Recovery Act grantees’ needs across regions, we recommended that the Director of OHS should regularly review waivers of the nonfederal matching requirement and associated justifications. HHS reports that it has taken actions to address our recommendation. 
For example, HHS reports that OHS has conducted a review of waivers of the nonfederal matching requirement and tracked all waivers in the Web-based data system. HHS further reports that OHS has determined that the waivers are reasonably consistent across regions. Because the absence of third-party investors reduces the amount of overall scrutiny Tax Credit Assistance Program (TCAP) projects would receive and the Department of Housing and Urban Development (HUD) is currently not aware of how many projects lacked third-party investors, we recommended that HUD develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. HUD responded to our recommendation by saying it will identify projects that are not funded with HOME Investment Partnerships Program (HOME) funds and projects that have a nominal tax credit award. However, HUD said it will not be able to identify these projects until it can access the data needed to perform the analysis, and it does not receive access to those data until after projects have been completed. HUD currently has not taken any action on this recommendation because it has data on only the small percentage of projects completed to date. It is too early in the process to be able to identify projects that lack third-party investors. The agency will take action once it is able to collect the necessary information from the project owners and the state housing finance agencies. To enhance the Department of Labor’s (Labor) ability to manage its Recovery Act and regular Workforce Investment Act (WIA) formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommended that the Secretary of Labor take the following actions: To determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state’s reported data on obligations meet Labor’s requirements. To enhance state accountability and to facilitate states’ progress in making reporting improvements, routinely review states’ reporting on obligations during regular state comprehensive reviews. Labor agreed with both of our recommendations and has begun to take some actions to implement them. To determine the extent of reporting inconsistencies, Labor awarded a contract in September 2010 to perform an assessment of state financial reports to determine if the data reported are accurate and reflect Labor’s guidance on reporting of obligations and expenditures. Since then, Labor has completed interviews with all states and is preparing a report of the findings. To enhance states’ accountability and facilitate their progress in making improvements in reporting, Labor has drafted guidance on the definitions of key financial terms such as “obligations,” which is currently in final clearance. After the guidance is issued, Labor plans to conduct a systemwide webinar and interactive training on this topic to reinforce how accrued expenditures and obligations are to be reported.
Our September 2009 bimonthly report identified a need for additional federal guidance in defining green jobs, and we made the following recommendation to the Secretary of Labor: To better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with our recommendation and has begun to take several actions to implement it. Labor’s Bureau of Labor Statistics has developed a definition of green jobs, which was finalized and published in the Federal Register on September 21, 2010. In addition, Labor continues to host a Green Jobs Community of Practice, an online virtual community available to all interested parties. As part of this effort, in December 2010, Labor hosted its first Recovery Act Grantee Technical Assistance Institute, which focused on critical success factors for achieving the goals of the grants and sustaining the impact into the future. The department also hosted a symposium on April 28-29, 2011, with the green jobs state Labor Market Information Improvement grantees. Symposium participants shared recent research findings, including efforts to measure green jobs, occupations, and training in their states. In addition, the department released a new career exploration tool called “mynextmove” (www.mynextmove.gov) in February 2011. This Web site includes the Occupational Information Network (O*NET) green leaf symbol to highlight green occupations. Furthermore, Labor’s implementation study of the Recovery Act-funded green jobs training grants is still ongoing. The interim report is expected in late 2011. To leverage Single Audits as an effective oversight tool for Recovery Act programs, we recommended that the Director of the Office of Management and Budget (OMB)
1. provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance;
2. take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond;
3. evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act;
4. issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work;
5. issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year;
6. explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner; and
7. shorten the time frames required for issuing management decisions by federal agencies to grant recipients.
(1) To address the first recommendation—providing more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance—the OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, 2010 Compliance Supplement (Compliance Supplement) required all federal programs with expenditures of Recovery Act awards to be considered programs with higher risk when performing standard risk-based tests for selecting programs to be audited.
The auditor’s determination of the programs to be audited is based on an evaluation of the risk that noncompliance material to an individual major program could occur. The Compliance Supplement has been the primary mechanism that OMB has used to provide Recovery Act requirements and guidance to auditors. One presumption underlying the guidance is that smaller programs with Recovery Act expenditures could be audited as major programs when using a risk-based audit approach. The most significant risks are associated with newer programs that may not yet have the internal controls and accounting systems in place to help ensure that Recovery Act funds are distributed and used in accordance with program regulations and objectives. Since Recovery Act spending is projected to continue through 2016, we believe that it is essential that OMB provide direction in Single Audit guidance to help ensure that smaller programs with higher risk are not automatically excluded from receiving audit coverage based on their size and standard Single Audit Act requirements. In May 2011, we spoke with OMB officials and reemphasized our concern that future Single Audit guidance provide instruction that helps ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. OMB officials agreed and stated that such guidance is included in the 2011 Compliance Supplement, which was to be issued by March 31, 2011. On June 1, 2011, OMB issued the 2011 Compliance Supplement, which contains language regarding the higher-risk status of Recovery Act programs, requirements for separate reporting of findings, and a list of Recovery Act programs to aid the auditors. We will continue to monitor OMB’s efforts to provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. (2) To address the recommendation for taking additional efforts to encourage more timely reporting on internal controls for Recovery Act programs for 2010 and beyond, OMB commenced a second voluntary Single Audit Internal Control Project (project) in August 2010 for states that received Recovery Act funds in fiscal year 2010. Fourteen states volunteered to participate in the second project. One of the project’s goals is to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action can be taken more quickly. Specifically, the project encourages participating auditors to identify and communicate deficiencies in internal control to program management 3 months sooner than the 9-month time frame currently required under OMB Circular No. A-133. Auditors were to communicate these deficiencies through interim internal control reports by December 31, 2010. The project also requires that program management provide to the federal awarding agency, 2 months earlier than required under statute, a corrective action plan aimed at correcting any deficiencies. Upon receiving the corrective action plan, the federal awarding agency has 90 days to provide to the cognizant federal agency for audit a written decision detailing any concerns it may have with the plan. Each participating state was to select a minimum of four Recovery Act programs for inclusion in the project.
We assessed the results of the first OMB Single Audit Internal Control Project for fiscal year 2009 and found that it was helpful in communicating internal control deficiencies earlier than required under statute. We reported that 16 states participated in the first project and that the states selected at least two Recovery Act programs for the project. We also reported that the project’s dependence on voluntary participation limited its scope and coverage and that voluntary participation may also bias the project’s results by excluding from analysis states or auditors with practices that cannot accommodate the project’s requirement for early reporting of control deficiencies. Overall, we concluded that although the project’s coverage could have been more comprehensive, the analysis of the project’s results provided meaningful information to OMB for better oversight of the Recovery Act programs selected and information for making future improvements to the Single Audit guidance. OMB’s second Single Audit Internal Control Project is in progress, with a planned completion date of June 2011. OMB plans to assess the project’s results after its completion date. The 14 participating states have met the milestones for submitting interim internal control reports by December 31, 2010, and their corrective action plans by January 31, 2011. By April 30, 2011, the federal awarding agencies were to provide their interim management decisions to the cognizant agency for audit. We discussed the preliminary status of these interim management decisions with OMB officials and, as of May 24, 2011, only 1 of the 10 federal awarding agencies had submitted some management decisions on the auditees’ corrective action plans as required by the project’s guidelines. On May 24, 2011, officials from the cognizant agency for audit, HHS, reemphasized to the federal awarding agencies their responsibilities for providing management decisions in accordance with the project’s due dates. In our review of the 2009 project, we noted similar concerns—that federal awarding agencies submitted management decisions on proposed corrective actions in an untimely manner—and made recommendations in this area, which are discussed later in this report. We will continue to monitor the status of OMB’s efforts to implement this recommendation and believe that OMB needs to continue taking steps to encourage more timely reporting on internal controls through Single Audits for Recovery Act programs. (3) We previously recommended that OMB evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. OMB officials have stated that they are aware of the increase in workload for state auditors who perform Single Audits due to the additional funding for Recovery Act programs and corresponding increases in programs being subject to audit requirements. OMB officials stated that they solicited suggestions from state auditors to gain further insight and develop measures for providing audit relief. However, OMB has not yet put in place a viable alternative that would provide relief to all state auditors that conduct Single Audits. For state auditors that are participating in the second OMB Single Audit Internal Control Project, OMB has provided some audit relief by modifying the requirements under Circular No. A-133 to reduce the number of low-risk programs to be included in some project participants’ risk assessment requirements.
OMB is taking initiatives to examine the Single Audit process. OMB officials have stated that they have created a workgroup that combines the Executive Order 13520—Reducing Improper Payments—Section 4(b) Single Audit Recommendations Workgroup (Single Audit Workgroup) and the Circular No. A-87—Cost Principles for State, Local, and Indian Tribal Governments—Workgroup (Circular No. A-87 Workgroup). The Single Audit Workgroup comprises representatives from the federal audit community; federal agency management officials involved in overseeing the Single Audit process and programs subject to that process; representatives from the state audit community; and staff from OMB. OMB officials tasked the Single Audit Workgroup with developing recommendations to improve the effectiveness of Single Audits of nonfederal entities that expend federal funds in order to help identify and reduce improper payments. In June 2010, the Single Audit Workgroup developed recommendations, some of which are targeted toward providing audit relief to auditors who conduct audits of grantees and grants that are under the requirements of the Single Audit Act. OMB officials stated that the recommendations warrant further study and that the workgroup is continuing its work on the recommendations. OMB officials also stated that the Circular No. A-87 Workgroup has made recommendations that could affect Single Audits and that the workgroups have been collaborating to ensure that the recommendations relating to Single Audit improvements are compatible and could improve the Single Audit process. The combined workgroups plan to issue a report to OMB by August 29, 2011. We will continue to monitor OMB’s progress to achieve this objective. (4) and (5) With regard to issuing Single Audit guidance in a timely manner, and specifically the OMB Circular No. A-133 Compliance Supplement, we previously reported that OMB officials intended to issue the 2011 Compliance Supplement by March 31, 2011. In December 2010, OMB provided to the American Institute of Certified Public Accountants (AICPA) a draft of the 2011 Compliance Supplement, which the AICPA published on its Web site. In January 2011, OMB officials reported that the production of the 2011 Compliance Supplement was on schedule for issuance by March 31, 2011. OMB issued the 2011 Compliance Supplement on June 1, 2011. We spoke with OMB officials regarding the reasons for the delay of this important guidance to auditors. OMB officials stated that the agency’s efforts were refocused toward priorities relating to the expiration of several continuing resolutions that temporarily funded the federal government for fiscal year 2011 and the Department of Defense and Full-Year Continuing Appropriations Act, 2011, which was passed by the Congress in April 2011, averting a governmentwide shutdown. OMB officials stated that, as a result, although they had taken steps to issue the 2011 Compliance Supplement by the end of March, such as starting the process earlier in 2010 and giving agencies strict deadlines for program submissions, they were not able to issue it until June 1, 2011. We will continue to monitor OMB’s progress to achieve this objective.
(6) and (7) In October 2010, OMB officials stated that, based on their assessment of the results of the project, they had discussed alternatives for helping to ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner, including possibly shortening the time frames required for federal agencies to provide their management decisions to grant recipients. However, OMB officials have yet to decide on the course of action that they will pursue to implement this recommendation. OMB officials acknowledged that the results of the 2009 OMB Single Audit Internal Control Project confirmed that this issue continues to be a challenge. They stated that they have met individually with several federal awarding agencies that were late in providing their management decisions in the 2009 project to discuss the measures that the agencies will take to improve the timeliness of their management decisions. As discussed earlier in this report, preliminary observations of the second project’s results indicate that several federal awarding agencies’ management decisions on corrective actions, which were due April 30, 2011, also have not been issued in a timely manner. In March 2010, OMB issued guidance under memo M-10-14, item 7 (http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m1014.pdf), that called for federal awarding agencies to review reports prepared by the Federal Audit Clearinghouse regarding Single Audit findings and submit summaries of the highest-risk audit findings by major Recovery Act program, as well as other relevant information on the federal awarding agency’s actions regarding these areas. In May 2011, we reviewed selected reports prepared by federal awarding agencies that were titled Use of Single Audit to Oversee Recipient’s Recovery Act Funding. Memo M-10-14 required these reports, which were based on Federal Audit Clearinghouse reports for fiscal year 2009. The reports were developed for entities for which the auditor issued a qualified, adverse, or disclaimer audit opinion. The reports identified items such as (1) significant risks to the respective program that was audited; (2) material weaknesses, instances of noncompliance, and audit findings that put the program at risk; (3) actions taken by the agency; and (4) actions planned by the agency. OMB officials have stated that they plan to use this information to identify trends that may require clarification or additional guidance in the Compliance Supplement. OMB officials also stated that they are working on a metrics project with the Recovery Accountability and Transparency Board to develop metrics for determining how federal awarding agencies are to use information available from Single Audits—metrics that can also serve as performance measures. In May 2011, we attended a presentation by the OMB workgroup that is developing the metrics project with the Recovery Accountability and Transparency Board and noted that the project is making progress. OMB officials have stated that the metrics could be applied at the agency level, by program, to allow for analysis of Single Audit findings, along with other uses to be determined. One goal of the metrics project is to increase the effectiveness and timeliness of federal awarding agencies’ actions to resolve Single Audit findings.
We will continue to monitor the progress of these efforts to determine the extent to which they improve the timeliness of federal agencies’ actions to resolve audit findings so that risks to Recovery Act funds are reduced and internal controls in Recovery Act programs are strengthened. To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommended that the Secretary of Transportation direct FHWA to take the following two actions: Develop additional rules and data checks in the Recovery Act Data System so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data. Make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. In its response, DOT stated that it implemented measures to further improve data quality in the Recovery Act Data System, including additional data quality checks, as well as providing states with additional training and guidance to improve the quality of data entered into the system. DOT also stated that, as part of its efforts to respond to our draft September 2010 report, in which we made this recommendation on economically distressed areas, it completed a comprehensive review of projects in these areas, which it provided to GAO for that report. DOT recently posted on its Web site an accounting of the extent to which states directed Recovery Act transportation funds to projects located in economically distressed areas, and we are in the process of assessing these data. To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and a determination made about whether these investments produced long-term benefits. Specifically, in the near term, we recommended that the Secretary direct FHWA and FTA to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time; nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and would seek additional data collection authority from Congress if it became apparent that such authority was needed. DOT plans to take some steps to assess its data needs, but it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open.
To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. We continue to believe that Congress should consider changes related to the Single Audit process. To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. We continue to believe that Congress should consider changes related to the Single Audit process. To provide housing finance agencies (HFAs) with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. We continue to believe that Congress should consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. In addition to the individual named above, Susan Iott, Assistant Director; Tom Beall; Jillian Fasching; Sharon Hogan; Thomas James; Yvonne Jones; Jonathan Kucskar; Kirsten Lauber; Carol Patey; Cheryl Peterson; Brenda Rabinowitz; Beverly Ross; Kelly Rubin; Carol Herrnstadt Shulman; Dawn Shorey; Kathryn Smith; Jonathan Stehle; Kiki Theodoropoulos; and Ethan Wozniak made key contributions to this report.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $4 billion for the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund (SRF) and $2 billion for the agency's Drinking Water SRF. The Recovery Act requires GAO to review funds made available under the act and comment on recipients' reports of jobs created and retained. These jobs are reported as full-time equivalent (FTE) positions on www.Recovery.gov, a Web site created for the Recovery Act. GAO examined the (1) status and use of Recovery Act SRF program funds nationwide and in nine states; (2) EPA and state actions to monitor the act's SRF program funds; (3) EPA and selected states' approaches to ensure data quality, including for jobs reported by recipients of the act's funds; and (4) challenges, if any, that states have faced in implementing the act's requirements. For this work, GAO, among other things, obtained and analyzed EPA nationwide data on the status of Recovery Act clean and drinking water funds and projects and information from a nonprobability sample of nine states that represent all but 1 of EPA's 10 regions. GAO also interviewed EPA and state officials on their experiences with the Recovery Act SRF program funds. The 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act, and EPA indicated that all 50 states met the act's requirement that funds be under contract for projects within 1 year of the act's passage. States used the funds to support more than 3,000 water quality projects, and, according to EPA data, the majority of the funds were used for sewage treatment infrastructure and drinking water treatment and distribution systems. Since the act was passed, states have drawn down almost 80 percent of the SRF program funds provided under the act. According to EPA data, states met the act's requirements that at least (1) 20 percent of the funds be used to support "green" projects and (2) 50 percent of the funds be provided as additional subsidies. In the nine states GAO reviewed, the act's funds paid for 419 infrastructure projects that helped address major water quality problems, but state officials said in some cases the act's requirements changed their priorities for ranking projects or the projects selected. In addition, although not required by the act, the nine states used about a quarter of the funds they received to pay for projects in economically disadvantaged communities, mostly in the form of additional subsidies.
Overall, the 50 states reported that the Recovery Act SRF programs funded an increasing number of FTE positions from the quarter ending December 2009 through the quarter ending June 2010—from about 6,000 FTEs to 15,000 FTEs. As projects were completed and funds spent, the number had declined to about 6,000 FTEs for the quarter ending March 2011. Some state officials GAO interviewed identified challenges in implementing the Recovery Act's Clean and Drinking Water SRF requirements for green projects and additional subsidies, both of which were continued, with some variation, in the fiscal year 2010 and 2011 appropriations for the SRF programs. Officials in four states said achieving the green-funding goal was difficult, with one suggesting that the 20 percent target be changed. In addition, officials in two of the four states, as well as in two other states, noted that when monies are not repaid into revolving funds to generate future revenue for these funds, the SRF program purpose changes from primarily providing loans for investments in water infrastructure to providing grants. GAO is making no recommendations in this report, which was provided to EPA for its review and comment. EPA did not comment on the report.
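To put the summary's percentages in dollar terms, a rough worked calculation (ours, for illustration only; the act's 20 percent and 50 percent thresholds are minimums) applies those requirements and the roughly 80 percent drawdown rate to the almost $6 billion provided:

\[
0.20 \times \$6\ \text{billion} = \$1.2\ \text{billion (minimum for green projects)}, \qquad
0.50 \times \$6\ \text{billion} = \$3\ \text{billion (minimum in additional subsidies)}, \qquad
0.80 \times \$6\ \text{billion} \approx \$4.8\ \text{billion (drawn down)}.
\]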
FHWA provides funding to the states for roadway construction and improvement projects through various programs collectively known as the federal-aid highway program. Most highway program funds are distributed to the states through annual apportionments according to statutory formulas; once apportioned, these funds are generally available to each state for eligible projects. The responsibility for choosing projects to fund generally rests with state departments of transportation and local planning organizations. The states have considerable discretion in selecting specific highway projects and in determining how to allocate available federal funds among the various projects they have selected. For example, section 145 of title 23 of the United States Code describes the federal-aid highway program as a federally assisted state program and provides that the federal authorization of funds, as well as the availability of federal funds for expenditure, “shall in no way infringe on the sovereign right of the states to determine which projects shall be federally financed.” While FHWA approves state transportation plans, environmental impact assessments, and the acquisition of property for highway projects, its role in approving the design and construction of projects varies. Relatively few projects are subject to “full” oversight, in which FHWA prescribes design and construction standards, approves design plans and estimates, approves contract awards, inspects construction progress, and renders final acceptance on projects when they are completed. Under TEA-21, FHWA exercises full oversight only of certain high-cost Interstate system projects. For other federally assisted projects, there are two options. First, for a project that is not located on the Interstate system but is part of the National Highway System, a state may assume responsibility for overseeing the project’s design and construction unless the state or FHWA determines that this responsibility is not appropriate for the state. Second, for a project that is not part of the National Highway System, the state is required to assume responsibility for overseeing the project’s design and construction unless the state determines that this responsibility is not appropriate for it. Under both options, TEA-21 requires FHWA and each state to enter into an agreement documenting the types of projects for which the state will assume oversight responsibilities. A major highway or bridge construction or repair project usually has four stages: (1) planning, (2) environmental review, (3) design and property acquisition, and (4) construction. The state’s activities and FHWA’s corresponding approval actions are shown in figure 1. In TEA-21, Congress required states to submit annual finance plans to DOT for highway and bridge projects estimated to cost $1 billion or more. Congress further required each finance plan to be based on detailed estimates of the costs to complete the project and on reasonable assumptions about future increases in such costs. Our work has raised issues concerning the cost and oversight of major highway and bridge projects, including the following: Cost growth has occurred on many major highway and bridge projects. For example, on 23 of 30 projects initially expected to cost over $100 million, our 1997 report identified increases ranging from 2 to 211 percent—costs on about half these projects increased 25 percent or more. 
In addition, the DOT Inspector General has recently identified cost increases on major projects such as the Wilson Bridge, Springfield Interchange, and Central Artery/Tunnel projects. As I testified in 2002, reviews by state audit and evaluation agencies have also highlighted concerns about the cost and management of major highway and bridge programs. For example, in January 2001, Virginia’s Joint Legislative Audit and Review Commission found that final project costs on Virginia Department of Transportation projects were well above their cost estimates and estimated that the state’s 6-year, $9 billion transportation development plan understated the costs of projects by up to $3.5 billion. The commission attributed these problems to several factors, including not adjusting estimates for inflation, expanding the scope of projects, not consistently including amounts for contingencies, and committing design errors. Although cost growth has occurred on many major highway and bridge projects, overall information on the amount of and reasons for cost increases on major projects is generally not available because neither FHWA nor state highway departments track this information over the life of projects. Congressional efforts to obtain such information have met with limited success. For example, in 2000 the former Chairman of this subcommittee asked FHWA to provide information on how many major federal-aid highway projects had experienced large cost overruns. Because FHWA lacked a management information system to track this information, officials manually reviewed records for over 1,500 projects authorized over a 4-year period. FHWA’s information, however, measured only the increases in costs that occurred after the projects were fully designed. Thus, cost increases that occurred during the design of a project—where we have reported that much of the cost growth occurs—were not reflected in FHWA’s data. In contrast, for acquisitions of major capital assets, the Office of Management and Budget requires federal agencies to prepare baseline cost and schedule estimates and to track and report the acquisitions’ cost performance. These requirements apply to programs managed by and acquisitions made by federal agencies, but they do not apply to the federal-aid highway program, a federally assisted state program. While many factors can cause costs to increase, we have found, on projects we have reviewed, that costs increased, in part, because initial cost estimates were not reliable predictors of the total costs or financing needs of projects. Rather, these estimates were generally developed for the environmental review—whose purpose was to compare project alternatives, not to develop reliable cost estimates. In addition, each state used its own methods to develop its estimates, and the estimates included different types of costs, since FHWA had no standard requirements for preparing cost estimates. For example, one state we visited for our 1997 report included the costs of designing projects in its estimates, while two other states did not. We also found that costs increased on projects in the states we visited because (1) initial estimates were modified to reflect more detailed plans and specifications as projects were designed and (2) the projects’ costs were affected by, among other things, inflation and changes in scope to accommodate economic development over time.
In 1997, we reported that cost containment was not an explicit statutory or regulatory goal of FHWA’s full oversight. On projects where FHWA exercised full oversight, it focused primarily on helping to ensure that the applicable safety and quality standards for the design and construction of highway projects were met. According to FHWA officials, controlling costs was not a goal of the agency’s oversight, and FHWA had no mandate in law to encourage or require practices to contain the costs of major highway projects. While FHWA influenced the cost-effectiveness of projects when it reviewed and approved plans for their design and construction, we found it had done little to ensure that cost containment was an integral part of the states’ project management. Finally, we have noted that FHWA’s oversight and project approval process consists of a series of incremental actions that occur over the years required to plan, design, and build a project. In many instances, states construct a major project as a series of smaller projects, and FHWA approves the estimated cost of each smaller project when it is ready for construction, rather than agreeing to the total cost of the major project at the outset. In some instances, by the time FHWA approves the cost of a major project, a public investment decision may, in effect, already have been made because substantial funds have already been spent on designing the project and acquiring property, and many of the increases in the project’s estimated costs have already occurred. Since 1998, FHWA has taken a number of steps to improve the management and oversight of major projects. FHWA implemented TEA-21’s requirement that states develop an annual finance plan for any highway or bridge project estimated to cost $1 billion or more. Specifically, FHWA developed guidance that requires state finance plans to include a total cost estimate for the project, adjusted for inflation and annually updated; estimates about future cost increases; a schedule for completing the project; a description of construction financing sources and revenues; a cash flow analysis; and a discussion of other factors, such as how the project will affect the rest of the state’s highway program. As of May 2003, FHWA had approved finance plans for 10 federal-aid highway projects and expected finance plans to be prepared for 5 additional projects at the conclusion of those projects’ environmental review phase. In addition, FHWA established a major projects team that currently tracks and reports each month on these 15 projects, and has assigned—or has requested funding to assign—a full-time manager to each project to provide oversight. These oversight managers are expected to monitor their project’s cost and schedule, meet periodically with project officials, assist in resolving issues and problems, and help to bring “lessons learned” on their projects to other federally assisted highway projects. As I testified in 2002, there are indications that the finance plan requirement has produced positive results. For example, in Massachusetts, projections of funding shortfalls identified in developing the Central Artery/Tunnel project’s finance plan helped motivate state officials to identify new sources of state financing and implement measures to ensure that funding was adequate to meet expenses for the project. However, some major corridor projects will not be covered by the requirement.
FHWA has identified 22 corridor projects that will be built in “usable segments”—separate projects costing less than $1 billion each—and therefore will not require finance plans. According to FHWA officials, states plan these long-term projects in segments because it is very difficult for them to financially plan for projects extending many years into the future. Nevertheless, these major projects represent a large investment in highway infrastructure. For example, planned corridor projects that will not require finance plans total almost $5 billion in Arkansas, about $12.3 billion in Texas, about $5.3 billion in Virginia, and about $4.2 billion in West Virginia. In addition, the $1 billion threshold does not consider the impact of a major highway and bridge project on a state’s highway program. In Vermont, for instance, a $300 million project would represent a larger portion of the state’s federal highway program funding than a $1 billion project would represent in California. In addition to implementing TEA-21’s requirements, FHWA convened a task force on the stewardship and oversight of federal-aid highway projects and, in June 2001, issued a policy memorandum to improve its oversight. The memorandum directed FHWA’s field offices to conduct risk assessments within their states to identify areas of weakness, set priorities for improvement, and work with the states to meet those priorities. Soon afterward, FHWA convened a review team to examine its field offices’ activities, and in March 2003, it published an internal “best practices” guide to assist the field offices in conducting risk assessments. FHWA also began an effort during 2003 to identify strategies for assessing and managing risks and for allocating resources agencywide. FHWA’s policy memorandum further sought to address the task force’s conclusion that changes in the agency’s oversight role since 1991 had resulted in conflicting interpretations of the agency’s role in overseeing projects. The task force found that because many projects were classified as “exempt” from FHWA’s oversight, some of the field offices were taking a “hands-off” approach to these projects. The policy stipulates that while states have responsibility for the design and construction of many projects, FHWA is ultimately accountable for the efficient and effective management of all projects financed with federal funds and for ensuring compliance with applicable laws, regulations, and policies. While FHWA has been moving forward to incorporate risk-based management into its oversight through the use of risk assessments, it has not yet developed goals or measurable outcomes linking its oversight activities to the business goals in its performance plan, nor has it developed a monitoring plan as its task force recommended in 2001. As I testified in May 2002, until FHWA takes these actions, it will be limited in its ability to judge the success of its efforts or to know whether the conflicting interpretations of its role discussed above have been resolved. Finally, FHWA has taken actions to respond to a DOT task force report on the management and oversight of major projects. In December 2000, this task force concluded that a significant effort was needed to improve the oversight of major transportation projects—including highway and bridge projects.
The task force made 24 recommendations, including recommendations to establish an executive council to oversee major projects, institute regular reporting requirements, and establish a professional cadre of project managers with required core competencies, training, and credentials. The task force’s recommendations were not formally implemented for several reasons, including turnover in key positions, the need to reevaluate policy following the change in administrations in January 2001, and higher priorities brought on by the events of September 11, 2001. However, FHWA believes it has been responsive to the task force’s recommendations by establishing a major projects oversight team, designating an oversight manager for each project, and, most recently, developing and publishing core competencies for managers overseeing major projects. In addition, 7 of the task force’s 24 recommendations would have required legislation. For example, the task force recommended establishing a separate funding category for preliminary engineering and design—those activities that generally accomplish the first 20 to 35 percent of a project’s design. The task force concluded that a separate funding category would allow a new decision point to be established. Initial design work could proceed far enough so that a higher-quality, more reliable cost estimate would be available for decisionmakers to consider before deciding whether to complete the design and construction of a major project—and before a substantial federal investment had already been made. In my testimony of May 2002, I presented options for enhancing FHWA’s role in overseeing the costs of major highway and bridge projects, should Congress, in reauthorizing TEA-21, determine that such action is needed and appropriate. Each of these options would be difficult and possibly costly; each represents a commitment of additional resources that must be weighed against the option’s potential benefits. Adopting any of these options would require Congress to determine the appropriate federal role—balancing the states’ sovereign right to select their projects and their desire for flexibility and more autonomy with the federal government’s interest in ensuring that billions of federal dollars are spent efficiently and effectively. These options include the following: Have FHWA develop and maintain a management information system on the cost performance of selected major highway and bridge projects, including changes in estimated costs over time and the reasons for such changes. While Congress has expressed concern about cost growth on major projects, it has had little success obtaining timely, complete, and accurate information about the extent of and the reasons for this cost growth on projects. Such information could help define the scope of the problem with major projects and provide insights needed to fashion appropriate solutions. Improve the quality of initial cost estimates by having states develop—and having FHWA assist the states in developing—more uniform and reliable total cost estimates at an appropriate time early in the development of major projects. This option could help policymakers understand the extent of the proposed federal, state, and local investment in these projects, serve as a baseline for measuring cost performance over time, and assist program managers in reliably estimating financing requirements. Have states track the progress of projects against their initial baseline cost estimates.
Expanding the federal government’s practice of having its own agencies track the progress of the acquisition of major capital assets against baseline estimates to the federally assisted highway program could enhance accountability and potentially improve the management of major projects by providing managers with real-time information for identifying problems early and for making decisions about project changes that could affect costs. Tracking progress could also help identify common problems and provide a better basis for estimating costs in the future. Establish performance goals for containing costs and implement strategies for doing so as projects move through their design and construction phases. Such performance goals could provide financial or other incentives to the states for meeting agreed-upon goals. Performance provisions such as these have been established in other federally assisted grant programs and have also been proposed for use in the federal-aid highway program. Requiring or encouraging the use of goals and strategies could also improve accountability and make cost containment an integral part of how states manage projects over time. Expand FHWA’s finance plan requirement to other projects. While Congress has decided that enhanced federal oversight of the costs and funding of projects estimated to cost over $1 billion is important, projects of importance for reasons other than cost may not, as discussed earlier, receive such oversight. Should Congress believe such an action would be beneficial, additional criteria for defining projects would need to be incorporated into FHWA’s structure for overseeing the costs and financing of major projects. Clarify FHWA’s role in overseeing and reviewing the costs and management of major projects. Changes in FHWA’s oversight role since 1991 have created conflicting interpretations about FHWA’s role, and our work has found that FHWA questions its authority to encourage or require practices to contain the costs of major highway projects. Should uncertainties about FHWA’s role and authority continue, another option would be to resolve the uncertainties through reauthorization language. Establish a process for the federal approval of major projects. This option, which would require federal approval of a major project at the outset, including its cost estimate and finance plan, would be the most far-reaching and the most difficult option to implement. Potential models for such a process include the full funding grant agreement process that the Federal Transit Administration uses for major transit projects, and the DOT task force’s December 2000 recommendation calling for the establishment of a separate funding category for initial design work and a new decision point for advancing projects. Establishing such a federal approval process could have the potential to improve the reliability of the initial baseline estimates and the cost performance of major projects over time. For further information on this statement, please contact JayEtta Z. Hecker ([email protected]) or Steve Cohen ([email protected]). Alternatively, they may be reached at (202) 512-2834.
Transportation Infrastructure: Cost and Oversight Issues on Major Highway and Bridge Projects. GAO-02-702T. Washington, D.C.: May 1, 2002.
Surface Infrastructure: Costs, Financing, and Schedules for Large-Dollar Transportation Projects. GAO/RCED-98-64. Washington, D.C.: February 12, 1998.
DOT’s Budget: Management and Performance Issues Facing the Department in Fiscal Year 1999. GAO/T-RCED/AIMD-98-76. Washington, D.C.: February 12, 1998.
Transportation Infrastructure: Managing the Costs of Large-Dollar Highway Projects. GAO/RCED-97-27. Washington, D.C.: February 27, 1997.
Transportation Infrastructure: Progress on and Challenges to Central Artery/Tunnel Project’s Costs and Financing. GAO/RCED-97-170. Washington, D.C.: July 17, 1997.
Transportation Infrastructure: Central Artery/Tunnel Project Faces Financial Uncertainties. GAO/RCED-96-131. Washington, D.C.: May 10, 1996.
Central Artery/Tunnel Project. GAO/RCED-95-213R. Washington, D.C.: June 2, 1995.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Improving the oversight and controlling the costs of major highway and bridge projects is important for the federal government, which often pays 80 percent of these projects' costs. Widespread consensus exists on the need to fund such projects, given the doubling of freight traffic and worsening congestion projected over the next 20 years, yet growing competition for limited federal and state funding dictates that major projects be managed efficiently and cost-effectively. The Federal Highway Administration (FHWA) provides funding to the states for highway and bridge projects through the federal-aid highway program. This funding is apportioned to the states, and state departments of transportation choose eligible projects for funding. FHWA provides oversight to varying degrees, and, under the Transportation Equity Act for the 21st Century (TEA-21), FHWA and each state enter into an agreement documenting the types of projects the state will oversee. This statement for the record summarizes cost and oversight issues raised in reports and testimonies GAO has issued since 1995 on major highway and bridge projects and describes options that GAO has identified to enhance federal oversight of these projects, should Congress determine that such action is needed and appropriate. GAO and others have reported that cost growth has occurred on major highway and bridge projects; however, overall information on the amount of and reasons for cost increases is generally not available because neither FHWA nor state highway departments track this information for entire projects. GAO has found that costs grow, in part, because initial cost estimates, which are generally developed to compare project alternatives during a required environmental review phase, are not reliable predictors of projects' total costs. In addition, FHWA approves the estimated costs of major projects in phases, rather than agreeing to the total costs at the outset. By the time FHWA approves the total cost of a major project, a public investment decision might, in effect, already have been made because substantial funds could already have been spent on designing the project and acquiring property. FHWA's implementation of a TEA-21 requirement that states develop annual finance plans for major projects estimated to cost $1 billion or more has improved the oversight of some major projects, and FHWA is incorporating more risk assessment in its day-to-day oversight activities. Should Congress determine that enhancing federal oversight of major highway and bridge projects is needed and appropriate, GAO has identified options, including improving information on the cost performance of selected major projects, improving the quality of initial cost estimates, and enhancing and clarifying FHWA's role in reviewing and approving major projects. Adopting any of these options would require balancing the states' sovereign right to select projects and their desire for flexibility and more autonomy with the federal government's interest in ensuring that billions of federal dollars are spent efficiently and effectively. In addition, the costs of each of these options would need to be weighed against its potential benefits.
The Arms Export Control Act authorizes the President to control the export and import of defense articles and defense services. The President’s statutory authority to promulgate regulations with respect to exports of defense articles and defense services and to designate those items considered defense articles and defense services for export control purposes has been delegated to the Secretary of State. State administers the arms export control system through requirements contained in the International Traffic in Arms Regulations (ITAR) and designates the articles and services deemed to be defense articles and defense services. These designations are made by State, with the concurrence of DOD, and constitute the United States Munitions List (USML), which comprises 21 major categories—for example, Aircraft, Spacecraft, Military Electronics, and Guns and Armament—and more detailed subcategories. The ITAR also designates defense services subject to export controls, including furnishing assistance, technical data, or training to foreign entities. Because defense exports are part of U.S. foreign policy, Congress requires reports to enable its oversight, including annual reports on defense exports under Section 655 of the Foreign Assistance Act of 1961, as amended—commonly referred to as Section 655 reports. U.S. defense articles and services generally can be exported to foreign entities in two ways—by foreign military sales (FMS) or direct commercial sales (DCS). Under FMS, the U.S. government procures defense articles and services on behalf of the foreign entity. Countries approved to participate in this program may obtain defense articles and services by paying with their own funds or with funds provided through U.S. government-sponsored assistance programs. While State has overall regulatory responsibility for the FMS program and approves the export of defense articles and services, DOD’s Defense Security Cooperation Agency (DSCA) directs the execution of the program, and the individual military departments implement the sale and export process. DOD bills foreign entities and tracks the export of articles and services through its financial systems. For FMS, an approved Letter of Offer and Acceptance authorizes the export. Under DCS, U.S. companies obtain permanent export licenses—generally valid for 4 years—from State’s Directorate of Defense Trade Controls (DDTC); these licenses authorize the export of defense articles and services directly to foreign entities. State also licenses defense articles for temporary export—when the article will be exported for a period of less than 4 years and will be returned to the United States without transfer of title. While most defense articles and services require a license for export, the ITAR contains numerous exemptions from licensing requirements that have defined conditions and limitations. For both FMS and DCS, the actual export of defense articles or services may occur years after the authorization—or may not take place at all. In addition to State and DOD, other U.S. government entities are involved with oversight of defense exports and management of export data. U.S. Customs and Border Protection (CBP) oversees exports of defense articles leaving the country for compliance with export control laws and regulations and collects information on those exports through the Automated Export System (AES). AES is jointly managed and operated by CBP and the Census Bureau (Census), and the data it collects are used by State and other federal agencies. It is the central point through which export data required by multiple agencies are filed electronically to CBP.
Foreign Trade Regulations and the ITAR require AES filings for all articles on the USML that are sent, taken, or transported out of the United States, and the exporter must provide either a license number or a citation of the license exemption. The data obtained through AES are maintained by Census's Foreign Trade Division and CBP for the purpose of developing merchandise trade statistics and enforcement of U.S. export control laws, but also are provided to State for reporting purposes.

DSCA information on the FMS program identifies several considerations for foreign entities in choosing between FMS and DCS. Under FMS, DOD procures defense articles and services for the foreign entity under the same acquisition process used for its own military needs, and recipients may benefit from economies of scale achieved through combining FMS purchases with DOD's. In addition, DOD provides contract administration services that may not be available through the private sector. To recover its administration costs, DOD applies a surcharge to each FMS agreement that is a percentage of the value of each sale. Under DCS, foreign entities may have more direct involvement during contract negotiation with U.S. defense companies, may obtain firm-fixed pricing, and may be better able to fulfill nonstandard requirements. However, according to State officials, some types of defense articles, such as certain types of missiles, can only be exported through FMS. In addition, DOD administers other programs through which defense articles can be exported to foreign governments. For example, the fiscal year 2006 National Defense Authorization Act provides funding authorities for DOD to jointly formulate and coordinate with State in the implementation of security assistance programs, which can include the export of U.S. defense articles and services. DOD also may export certain defense articles deemed "excess" to our national security needs to foreign governments or international organizations on a reduced or no-cost basis.

From calendar years 2005 through 2008, the value of U.S. exports of defense articles remained relatively stable, ranging between about $19 billion and $20 billion, with an increase to about $22 billion in 2009. Of the approximately $101 billion total in U.S. defense articles exported from 2005 through 2009, about 60 percent were exported through DCS, as shown in figure 1. This figure also shows that exports through DCS increased from $10.6 billion to $13.3 billion during this period—an increase of about 25 percent—while the value of FMS exports remained relatively stable. Although there are currently no data available on the export of defense services through DCS, we found that the value of defense services exported through FMS was also relatively stable over the last 5 calendar years, ranging from about $3.8 billion to $4.2 billion annually from 2005 through 2009. Overall, services account for about one-third of the value of all FMS exports annually. Over the last 5 years, aircraft and their related parts and equipment accounted for about 44 percent of the value of all defense articles exported. The second largest category was satellites, communications, and electronics equipment and their related parts—accounting for about 20 percent of defense articles. We also found differences in the method of export for defense articles, with values for some types of articles higher through FMS versus DCS and vice versa.
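As a quick arithmetic check of the DCS growth figure above (a sketch using only the rounded values reported here, not a recomputation from the underlying data):

\[
\frac{13.3 - 10.6}{10.6} \approx 0.255,
\]

that is, roughly a 25 percent rise in DCS exports between 2005 and 2009, consistent with the figure cited.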
As shown in figure 2, of the approximately $26 billion in aircraft equipment and parts exported over the 5-year period, about 66 percent (about $17.2 billion) was exported through DCS. A much larger value of other equipment and parts; satellites, communications and electronics equipment, and related parts; and firearms was also exported through DCS. On the other hand, a larger value of missiles, ships, and their related parts was exported through FMS. For two categories—aircraft and vehicles, weapons, and their parts—export values were about evenly divided between DCS and FMS.

Although defense articles and services are exported to hundreds of countries, we found that exports of defense articles were highly concentrated in a few countries. Over the past 5 years, the top three recipient countries—Japan, the United Kingdom, and Israel—accounted for almost one-third of the value of defense article exports. The top seven recipient countries, which include South Korea, Australia, Egypt, and the United Arab Emirates, accounted for about half of the value of all U.S. defense article exports. We also identified differences by the method of export through either FMS or DCS. In general, the value of FMS exports was higher for developing countries, while the value of DCS exports was higher for developed countries. State officials noted that developing countries may benefit from the FMS logistics, infrastructure, and other support that come with the FMS program. As shown in figure 3, of the $13 billion in defense articles that Japan imported, 85 percent ($11.15 billion) was exported through DCS. Similarly, of the $8.3 billion that the United Kingdom imported, 82 percent (about $6.8 billion) was exported through DCS. On the other hand, Israel and Egypt imported a higher value of their U.S. defense articles through the FMS program. Israel and Egypt receive annual U.S. security assistance funding that, according to DOD and State officials, generally is used to purchase U.S. defense articles and services through the FMS program. FMS exports of defense services were also concentrated in relatively few countries, with Saudi Arabia, Japan, and Egypt accounting for over one-third of the value over the last 5 years.

Although Congress requires reporting on various aspects of U.S. defense exports, State's and DOD's annual reports on "military assistance and military exports"—as required by Section 655 of the Foreign Assistance Act of 1961, as amended—do not provide a complete picture of the magnitude and nature of defense exports because the agencies use different reporting methodologies and have information inconsistencies and gaps—in part, because of the separate purposes of their data systems. Although the data we obtained and analyzed were sufficiently reliable to develop high-level, overall information on the magnitude and nature of defense exports, the differences in agencies' data—including the lack of information for defense services exported under DCS licenses, differences in agencies' item and country categorizations, and the inability to separate some permanent and temporary exports—hinder the ability to provide a comprehensive and transparent picture of defense exports. Current export reform discussions acknowledge that the proliferation of individual data systems makes export licensing and enforcement more difficult; however, the FMS system has not been specifically cited in these proposals. Because defense exports are used for furthering U.S.
foreign policy objectives, there are legislatively mandated reporting requirements to enable congressional oversight. State has overall responsibility to report on exports of defense articles and defense services. DOD also reports on defense exports under FMS and other programs. The most comprehensive reporting requirement is contained in Section 655 of the Foreign Assistance Act of 1961, as amended, which requires annual reporting of defense articles and services that were authorized and provided (exported) to each foreign country and international organization for the previous fiscal year under State export license or furnished under FMS, including those furnished with the financial assistance of the U.S. government. Also, for defense articles licensed for export by State, the act requires "a specification of those defense articles that were exported during the fiscal year covered by the report." There is not a parallel provision for a specification of defense services exported under licenses issued by State. In addition, the act requires that unclassified portions of the report be made public on the Internet through State. Although State publishes its Section 655 reports on its Web site, DOD's Section 655 reports are not available either through DOD's or State's Web site. Other reporting requirements are focused on discrete aspects of defense exports and, as such, are not intended to provide a complete picture of such exports. For example, Section 36 of the Arms Export Control Act requires advance notifications to Congress for proposed sales based on certain dollar thresholds, as well as reports on defense exports sold. DOD also noted numerous additional reporting requirements for defense exports that occur under other programs, such as Excess Defense Articles and International Military Education and Training.

While State and DOD each provide annual reports to Congress in response to the Section 655 requirement, we identified differences in the way each agency reports its data—in some cases based on differing interpretations of the same requirement—that lead to an incomplete overall picture of the magnitude and nature of such exports, as shown in table 1. The differences in reporting also occur because the data on defense exports are gathered and maintained by multiple government agencies for a variety of purposes using different data systems. State and DOD officials told us that information reported on defense exports is based on data contained in existing systems that were developed to satisfy the operational requirements of each organization and were not designed to integrate with other agencies' systems. For example, State's system was designed to manage the DCS licensing process, DSCA's system was developed to facilitate the management of the FMS program, and data collected in the AES system are maintained by Census primarily for generating trade statistics. Nonetheless, these systems are the principal sources of information on defense exports. In areas where these systems differ from each other, certain data fields need to be reconciled before data can be aggregated. Even with these adjustments, these and other system differences hinder the ability to perform a more detailed and in-depth analysis of defense exports. For example, one difference between State's and DOD's reporting is the lack of data on defense services exported under DCS licenses. According to State's reporting to Congress, for fiscal year 2005, it licensed over $27 billion in defense services.
By fiscal year 2008, the most recent data available, the value of approved licenses for defense services almost tripled to over $71 billion. However, State does not report on the value of defense services exported under license authorizations because it does not have such information. This is in part because AES does not capture data on the export of services to foreign entities, as it was developed to track information on the export of physical articles. Also, State officials noted that they have no operational requirement to have information on the value of exported defense services, and they do not require such information to be reported to State as it could create an additional burden on exporters. Further, these officials noted that they have not received feedback from congressional committees on the lack of such data in prior reports to Congress and therefore are not planning to obtain these data from exporters. In contrast, because DOD bills FMS customers for the export of defense services—including logistical support, repairs, training, and technical assistance—it tracks data on the value of services exported. As noted earlier in this report, defense services constitute about one-third of annual FMS exports.

Further complicating efforts to combine and compare State and DOD data reported in the Section 655 reports is that agencies involved in the licensing, export, and collection of related data lack a unified item categorization scheme. According to agency officials, these item categorization schemes were developed for their specific purposes and were not designed to integrate with other agencies' data for reporting defense exports. In issuing DCS licenses, State uses the categories for defense articles and services enumerated on the USML and reports license values by USML categories and subcategories. However, when exporters file their export information through AES for these licensed exports, they include the USML category that provides a high-level categorization of articles (e.g., "Aircraft and Associated Equipment") but does not allow for the more detailed breakout of articles by subcategories, which State uses to report license values. Exporters also categorize articles according to the Harmonized Tariff Schedule, based on the international "Harmonized System," which was developed for reporting merchandise trade statistics. The Harmonized System and USML are not directly comparable. For example, while the USML has a category for "tanks and military vehicles" separate from other categories for weapons, the Harmonized System has one combined category that includes both weapons and "weaponized" vehicles such as tanks and armored vehicles. As a result, a more detailed combined analysis of the types of military vehicles is not possible using existing category schemes. Under the FMS program, DOD reports export values based on information used to bill foreign entities using a unique item categorization system that also is not directly comparable to the USML. For example, the USML has separate categories for explosives, bombs, training equipment, and guidance equipment; DOD's single category for "bomb" includes items in all of those USML categories. Further, some of the articles and services exported through the FMS program, such as fuel and construction, are not controlled under the USML. However, since DOD bills foreign entities for these articles and services, they are included in DOD's reports along with defense articles and services.
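To make the categorization problem concrete, the following is a minimal sketch, assuming simplified, hypothetical category tables rather than any agency's actual lists, of how a common crosswalk forces both data sets down to the coarsest shared buckets. The category names are drawn from the examples above; everything else is illustrative.

```python
# Illustrative sketch only: the category names come from the examples above,
# but the tables and code are assumptions, not any agency's actual system.
# Each scheme's finest buckets map many-to-one onto a common scheme, so the
# two data sets can be compared only at the coarse common level.

# USML-style categories (State licensing data)
USML_TO_COMMON = {
    "explosives": "munitions and related equipment",
    "bombs": "munitions and related equipment",
    "training equipment": "munitions and related equipment",
    "guidance equipment": "munitions and related equipment",
    "tanks and military vehicles": "weapons and military vehicles",
    "firearms": "weapons and military vehicles",
}

# DOD FMS billing categories, where a single "bomb" code spans several USML ones
FMS_TO_COMMON = {
    "bomb": "munitions and related equipment",
    "military vehicle": "weapons and military vehicles",
}

def aggregate(records, crosswalk):
    """Sum export values by common category; unmapped items stay visible."""
    totals = {}
    for category, value in records:
        common = crosswalk.get(category, "unmapped")
        totals[common] = totals.get(common, 0.0) + value
    return totals

# Toy usage: the sources become comparable only after this coarsening
state_records = [("bombs", 2.0), ("guidance equipment", 1.5)]  # $ billions
fms_records = [("bomb", 3.0)]
print(aggregate(state_records, USML_TO_COMMON))
print(aggregate(fms_records, FMS_TO_COMMON))
```

The many-to-one mappings are exactly why, as noted above, a detailed combined analysis of, say, types of military vehicles is not possible under the existing schemes.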
DOD officials noted that there is no requirement to report exports by USML categories. Defense export data comparisons also are limited because DOD, Census, and State define some countries and international organizations differently. For example, DOD's FMS data and State export license authorizations include exports to international organizations such as the United Nations. Exports documented through AES are coded for the country of destination and not for international organizations that may be located within those countries. Furthermore, each agency's system uses different codes for some countries, requiring manual analysis to enable combining and comparing of these data. For example, the code used for a country in one database may be used for a different country in another database, and some country names are different. These differences hamper efforts to make comparisons between the systems or to combine the databases to analyze like exports to countries and international organizations.

Another difference between State's and DOD's Section 655 reports is State's inclusion of U.S. government end users in its data. While all exports under FMS are to foreign entities, State reports license authorization values for exports that are used by U.S. government agencies within the recipient country as well as articles exported for use by U.S. and allied forces operating on foreign soil. Because the values reported for exports of defense articles include these U.S. government end users, the value of such articles exported to foreign entities is overstated. In addition, obtaining precise data on DCS exports is further limited for certain types of exports where permanent and temporary exports are grouped together. For example, both temporary and permanent exports of classified items are identified under a single export license type. For 2005 through 2009, this license type included a total of about $7 billion in exports, which can include temporary exports. In addition, the ITAR provides for a license exemption for some defense articles exported to Canada. However, the ITAR provides a single Canadian exemption that includes both permanent and temporary exports. As noted earlier, defense export data for Canada are likely understated since the data do not delineate permanent exports from temporary ones in the approximately $4.1 billion reported under this exemption from 2005 through 2009.

DOD's reporting of total defense exports is also limited by the lack of data on exports of defense articles and services under certain U.S. government-funded programs. For example, until recently DSCA did not have access to centralized data on defense exports authorized under sections 1206 and 1207 of the National Defense Authorization Act for Fiscal Year 2006. Such exports are tracked separately from FMS cases—generally by the appropriation that funded the export. In 2009, the DSCA system identified a cumulative total for these exports that included multiple years with no way to separate the data by the year of export. However, DSCA officials told us that they now receive monthly updates on these exports and are considering options for including these data in future reporting. Furthermore, officials at Census, CBP, and DOD told us that reporting through AES for FMS exports is not complete, although the U.S. Foreign Trade Regulations and the ITAR require AES filings for all USML items exported from the United States, including those exported through FMS.
DOD officials noted that while AES filing is required, not all DOD components fully comply. Census officials stated that they are providing outreach and training for DOD components to encourage compliance with this requirement. CBP officials noted that reporting of FMS exports through AES has improved over the years, and our analysis of AES data showed that the value of FMS exports reported in AES has increased from 2005 to 2009. Under the U.S. export control reform effort currently under way, the administration has noted that the myriad of U.S. government agencies involved in export controls continue to maintain separate information technology systems and databases that are not accessible or easily compatible with each other. According to a recent statement by the U.S. National Security Advisor, this proliferation of individual systems makes export licensing and enforcement more difficult. In our High-Risk Series, we found weaknesses in the effectiveness and efficiency of U.S. government programs that are related to the protection of technologies critical to national security interests, such as FMS and DCS, and recommended that these programs be reexamined to determine how they can collectively achieve their missions. The U.S. government is currently considering consolidating the current export control lists and adopting a single multiagency system for licensing with a single interface for exporters, ultimately leading to a single enterprisewide information technology system that can track an export from the filing of a license application until the item leaves a U.S. port. However, the administration has not announced plans on how defense articles and services authorized and exported under FMS and other government-to-government programs will be incorporated into a reformed U.S. export control system. A complete picture of defense exports—including which method of export is used more often by individual countries or for certain types of items—is not available under current reporting to Congress. Although State has overall responsibility to regulate the export of defense articles and services, it reports separately from DOD on some aspects of defense exports. Information from DOD and State cannot be readily combined to provide a complete picture of defense exports. Gaps and limitations in these data—including the lack of information on defense services exported under DCS, which could be substantial given the high dollar value of such services authorized by State—may inhibit congressional oversight and transparency into the entirety of U.S. defense exports. For example, Congress does not have complete data to determine whether specific U.S. foreign policy objectives are being furthered through the various export programs. While State has noted a potential burden for exporters if they were required to report on exports of defense services under DCS, there may be value to Congress in having such information, especially in light of the large and growing value of license authorizations for defense services. As U.S. export control reform efforts move beyond the initial phase of revising and consolidating control lists, it will be important to consider ways to standardize and integrate data across agencies to mitigate the gaps and limitations noted in this report. 
Recognizing that complete integration and standardization across agencies' data systems is a long-term effort that may require additional resources, State could improve overall reporting of defense exports under the constraints of current data systems by using a methodology similar to ours to enhance congressional oversight and transparency of such exports. Also, as policymakers develop and debate export control reform proposals, it is important to consider whether other programs related to the protection of technologies critical to U.S. national security, such as the FMS program, should be included in the reform efforts.

In order to obtain a more complete picture of defense exports, Congress should consider whether it needs specific data on exported defense services similar to what it currently receives on defense articles and, if so, request that State provide such data as appropriate. To improve transparency and consistency of reporting on defense exports required by the Foreign Assistance Act, we recommend that the Secretary of State direct the Directorate of Defense Trade Controls to coordinate with the Departments of Defense and Commerce to identify and obtain relevant defense export information under existing agency data systems and provide a consolidated report to Congress on DCS and FMS that specifies articles exported using a common category system; separates U.S. government end users from foreign entities; separates permanent and temporary exports; incorporates all defense exports, including U.S. government-funded programs; and is made public through the Internet.

We provided a draft of this report to the Departments of State, Homeland Security, and Defense and to Census under the Department of Commerce for their review and comment. Census and the Department of Homeland Security provided technical comments, which we incorporated as appropriate, and DOD did not comment on our draft. State provided written comments that are reprinted in appendix II. In commenting on the draft, State acknowledged the importance of maintaining and reporting to Congress and the public reliable data on U.S. defense exports through FMS and DCS, and noted that gaps and inconsistencies in current reporting are caused by differences in accounting by agencies for transfers of defense exports. However, State did not agree with our recommendation to report consolidated defense export data on FMS and DCS in a consistent manner. State reiterated that Congress has not requested any change to the substance of its current reporting, and State does not believe that the added resources necessary to change reporting formats are merited. However, based on our work and analysis of defense export data, we believe that congressional oversight and transparency into the entirety of U.S. defense exports could be improved with existing data and systems by utilizing more consistent reporting methodologies similar to those that we developed. State also noted that providing consolidated defense export data to Congress and the public was consistent with the goals of current export control reform efforts and encouraged Congress to provide criteria and the resources to develop appropriate information. We agree that ongoing export control reform efforts may provide opportunities to improve information and reporting, but recognizing that reforms may take years to implement, we believe that congressional oversight and transparency can be improved in the short term by implementing our recommendation.
We are sending copies of this report to interested congressional committees, the Secretary of State, the Secretary of Defense, the Secretary of Commerce, and the Secretary of Homeland Security. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To identify information on the magnitude and nature of defense exports, we obtained data for calendar years 2005 through 2009 on direct commercial sales (DCS) from the U.S. Census Bureau's (Census) Automated Export System (AES) and on the Foreign Military Sales (FMS) program from the Department of Defense's (DOD) Defense Security Cooperation Agency (DSCA). For the purpose of this report, we defined "defense exports" as articles permanently exported under a Department of State (State) license to foreign end users. As such, we did not include temporary exports that return to the United States without transfer of ownership, shipments to U.S. government end users as identified in AES by the export information code, or articles exported under a license exemption. For DCS, we obtained from Census an extract of AES records for this time frame, consisting of electronic information filings designated with a State "license type," a required field for all exports covered by the United States Munitions List (USML). State has several different license types that generally identify the nature of the export or import, including permanent exports, temporary exports, temporary imports, agreements, articles exported with an exemption, and articles exported through the FMS program. For FMS data, although Foreign Trade Regulations and the International Traffic in Arms Regulations require AES filings for all articles on the USML, including those exported via FMS, we were told by both U.S. Customs and Border Protection (CBP) and Census officials that AES filings for DOD exports of FMS articles are not complete. Therefore, we could not use AES as a single data source for exports of defense articles. For this reason, we obtained data from DSCA for FMS exports for the same time frame from DSCA's 1200 Delivery Subsystem. We did not include articles exported under Section 1206 or 1207 programs under the National Defense Authorization Act for Fiscal Year 2006. As noted, DSCA did not obtain export data on Section 1206 and 1207 exports until 2009. We also did not include data on DOD's excess defense article program. Although most of our analysis focuses on exports of defense articles, we obtained data from State on DCS licenses that were in effect during 2005 through 2009, primarily for the purpose of assessing the reliability of AES data for these exports. For each of these three data sets, we also obtained the relevant reference tables and documentation from each agency. These reference tables translate the codes used in the databases—such as those for country name or commodity/item type—into their names or descriptions. We also reviewed relevant laws and regulations regarding the export of defense articles and requirements for reporting export information through AES.
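The working definition above implies a simple set of record filters. The sketch below illustrates them; the field names and code values are hypothetical placeholders, not actual AES license-type or export-information codes.

```python
# A minimal sketch, with hypothetical field names and code values (not actual
# AES codes), of the record filters implied by the definition above: keep
# permanent State-licensed exports to foreign end users, and drop temporary
# exports, license exemptions, and shipments to U.S. government end users.

PERMANENT_LICENSE_TYPES = {"PERMANENT_EXPORT"}   # assumed label for a State license type
US_GOV_END_USER_CODES = {"US_GOVERNMENT"}        # assumed export information code

def is_defense_export(record: dict) -> bool:
    """Apply the report's working definition of a 'defense export' to one record."""
    # Temporary exports, agreements, exemptions, and FMS shipments carry other
    # license types, so they fail the first test.
    return (record.get("license_type") in PERMANENT_LICENSE_TYPES
            and record.get("export_info_code") not in US_GOV_END_USER_CODES)

# Toy usage
assert is_defense_export({"license_type": "PERMANENT_EXPORT",
                          "export_info_code": "FOREIGN_END_USER"})
assert not is_defense_export({"license_type": "EXEMPTION",
                              "export_info_code": "FOREIGN_END_USER"})
```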
In order to combine and compare information from FMS and AES on the types of articles exported, we analyzed the item categorization schemes used in each system to identify areas of commonality. We determined that the broad categories used by DOD for grouping like items together could be adapted to accommodate the lowest level of detail identified between the two systems. This allowed us to develop relatively large categories, but precluded us from further refining the analysis by breaking these out into more detailed categories because some types of items were combined into one category in either of the two systems. To assess overall defense exports by country, we created a cross-reference table to enable us to relate the data for a specific country in one data set to information for that same country in the other data set. We also identified groupings of countries considered developed or developing according to the United Nations' definition.

We did not include data on classified exports for either FMS or DCS. DOD officials stated that classified data on FMS exports could not be used in an unclassified report, even if aggregated with other data. We obtained and reviewed classified data for FMS and determined that excluding the FMS classified data from our analysis would not materially affect the high-level trend analysis and other information we discuss in this report. For classified DCS exports, temporary and permanent exports are grouped together in one license type in AES, with no way to separate permanent from temporary exports. For trend information across the 5-year time frame, we adjusted for the effects of inflation by converting values to 2009 dollars. We assessed the reliability of these data by performing electronic testing; reviewing system documentation, including system edits and validations; comparing our data to published or other available information; and interviewing knowledgeable officials about data quality and reliability. For the purposes of our analyses, we determined that the data were sufficiently reliable.

To assess information reported on U.S. defense exports, we reviewed relevant reporting requirements and reviewed State and DOD reports to Congress on various portions of the export process, including notification of potential sales, authorizations, and exports. Specifically, we reviewed the reporting requirements in the Foreign Assistance Act of 1961, as amended, Section 655, on foreign military assistance that requires an annual report on both defense articles and services authorized and provided/exported to foreign countries and international organizations. We then analyzed and compared the relevant reports that State and DOD annually submit to Congress, identifying differences in reporting methodologies between the reports, and identified where such information is available to the public. We also interviewed agency officials at State's Directorate of Defense Trade Controls (DDTC) and DOD's DSCA responsible for generating these reports to obtain information on methodologies and definitions used in their respective reports. To identify limitations and gaps in available defense export data, we reviewed information and available system documentation for the data systems at DSCA, DDTC, and Census and interviewed knowledgeable officials at these agencies regarding data system purposes and functionality. We also interviewed officials at CBP who manage the AES interface with exporters.
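Two of the mechanical steps described above, relating country codes across data sets through a cross-reference table and restating values in 2009 dollars, can be sketched as follows. The country codes and deflator ratios are hypothetical placeholders, not the actual reference-table or price-index values used in the report.

```python
from collections import defaultdict

# Illustrative sketch of the reconciliation steps described above; all codes
# and deflator values are assumptions for demonstration only.

COUNTRY_XREF = {                      # (system, code) -> common country name
    ("FMS", "JA"): "Japan",
    ("AES", "JP"): "Japan",
    ("FMS", "UK"): "United Kingdom",
    ("AES", "GB"): "United Kingdom",
}

# Hypothetical ratios for restating each calendar year in 2009 dollars
TO_2009_DOLLARS = {2005: 1.10, 2006: 1.07, 2007: 1.05, 2008: 1.02, 2009: 1.00}

def normalize(records, system):
    """Yield (country, year, value restated in 2009 dollars) for one data set."""
    for code, year, value in records:
        country = COUNTRY_XREF.get((system, code))
        if country is None:
            continue  # unmatched codes require the manual review described above
        yield country, year, value * TO_2009_DOLLARS[year]

# Toy usage: combine both data sets into a single country-level total
fms_data = [("JA", 2005, 1.2e9)]
aes_data = [("JP", 2005, 2.3e9)]
totals = defaultdict(float)
for country, _, value in list(normalize(fms_data, "FMS")) + list(normalize(aes_data, "AES")):
    totals[country] += value
print(dict(totals))   # e.g., {'Japan': 3.85e9}
```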
We conducted this performance audit from February 2010 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Department of State Comments on GAO Draft Report Reporting on Exported Articles and Services Needs to Be Improved (GAO-10-952, GAO Code 120862)

Thank you for the opportunity to comment on your draft report entitled "DEFENSE EXPORTS: Reporting on Exported Articles and Services Needs to Be Improved." The Department of State recognizes the importance of maintaining and reporting to the Congress and public reliable data on United States defense exports through direct commercial sales or the Foreign Military Sales program. The draft report identifies gaps and inconsistencies in reports of this nature by the Executive Branch. However, the State Department notes that gaps and inconsistencies in reporting are inherent in accounting for transfers of defense exports across agencies. While Foreign Military Sales may, for example, include items such as tanks and weaponry on the U.S. Munitions List under the jurisdiction of the Department of State, dual-use items under the licensing jurisdiction of Commerce will not be included in State reports. Likewise, the requirements of the Congress for reporting direct commercial sales and Foreign Military Sales are also different. The Department of State faithfully reports to Congress all data pertaining to exported articles and services that are within its jurisdiction to collect. To date, the Congress has expressed no desire to change the substance of our current reporting. The Department does not believe that devising additional reporting formats would merit the commitment or allocation of additional resources and therefore disagrees with the report's recommendations. Providing consolidated defense export data to Congress and the public is consistent with the goals of Export Control Reform and the Executive Branch task force evaluating proposals and recommendations associated with it. As decisions are made on Export Control Reform, the Department of State encourages the Congress to furnish criteria and resources to develop appropriate information technology platforms and reporting criteria of benefit to both the Congress and the public.

In addition to the contact named above, John Neumann, Assistant Director; Marie Ahearn; Richard Brown; Sharron Candon; Julia Kennon; Roxanna Sun; Robert Swierczek; and Bradley Terry made key contributions to this report.
The U.S. government exports billions of dollars of defense articles and services annually to foreign entities, generally through direct commercial sales (DCS) from U.S. companies under licenses issued by the State Department (State) or through the Department of Defense (DOD) Foreign Military Sales (FMS) program. GAO has previously reported on weaknesses in the export control system. As requested, GAO (1) identified the magnitude and nature of defense articles and services exported and (2) assessed information currently reported on defense exports and any gaps and limitations in defense export data. To conduct this work, GAO analyzed export data from DOD for FMS and the Department of Commerce's U.S. Census Bureau (Census) for DCS for 2005 through 2009; reviewed relevant laws and regulations; assessed State and DOD reports on defense exports; reviewed agency data systems documentation; and interviewed officials from State, DOD, Homeland Security, and Census.

U.S. exports of defense articles--such as military aircraft, firearms, and explosives--ranged from about $19 billion to $22 billion annually in calendar years 2005 to 2009. Of these defense articles, about 60 percent were exported by companies to foreign entities through DCS licenses, while the remaining 40 percent were exported under the FMS program. Aircraft and related parts constitute the largest category of such exports--about 44 percent--followed by satellites, communications, and electronics equipment and their related parts. U.S. exports of defense articles were concentrated in a few countries: about half went to Japan, the United Kingdom, Israel, South Korea, Australia, Egypt, and the United Arab Emirates. Although no data are available on the export of defense services--such as technical assistance and training--provided through DCS, exports of defense services through FMS were stable, accounting for about one-third of the value of FMS exports. Congress does not have a complete picture of defense exports under current reporting--including which method of export is used more often by individual countries or for certain types of items. State--which has overall responsibility for regulating defense exports--and DOD report to Congress in response to various requirements. However, their annual reports on DCS and FMS exports have several information gaps and inconsistencies--in part, because of the differing purposes of the agencies' data systems and different reporting methodologies. For example, State does not obtain data from U.S. companies on the export of defense services under DCS licenses, although it authorizes several billion dollars of such exports annually. State officials noted that they do not have an operational requirement to collect such information and doing so could be burdensome on exporters. Other limitations on defense export data include differences in agencies' item and country categorizations and the inability to separate data on some permanent and temporary exports. Further, while State's report is available on its Web site, DOD's is not. These differences and limitations may inhibit congressional oversight and transparency into the entirety of U.S. defense exports. GAO suggests that Congress consider whether it needs specific data on exported defense services and is recommending that State publicly report consolidated defense export data on DCS and FMS in a consistent manner. In the absence of additional direction and resources from Congress, State did not agree.
GAO believes the recommendation remains valid.
OPM contracts with almost 400 health plans, including fee-for-service plans and health maintenance organizations, to operate the Federal Employees Health Benefits Program (FEHBP). The Blue Cross and Blue Shield Association's plan is the largest, covering almost 42 percent of about 4 million FEHBP enrollees in 1994. The Association's contract with PCS for retail prescription drug services began in 1993; its contract with Medco for mail order drug services began in 1987. In operating the retail drug program, PCS contracts with a network of pharmacies to provide the Association's federal employee health plan prescriptions at discounted prices. In 1996, this network included 44,751 pharmacies, about 60 percent of which were chain drug stores; the remaining 40 percent were independently owned. In operating the mail order program, Medco also provides the plan prescriptions at discounted prices. Medco receives and dispenses prescriptions from its pharmacies in Florida, New Jersey, Ohio, and Texas.

Under its FEHBP contract, the Association must submit to OPM any proposal to change its federal employee health plan benefits. OPM reviews such proposals to assess their cost-effectiveness to the program and potential effect on the delivery of benefits to federal enrollees. In addition, the Association oversees the activities of Medco and PCS and must report to OPM any significant problems that could affect the delivery of benefits to enrollees, such as those Medco initially experienced in implementing the benefit change.

The Association submitted its benefit change proposal to OPM on May 31, 1995, citing the need to control the Blue Cross and Blue Shield Service Benefit Plan's rising prescription drug costs while maintaining quality service for enrollees. Between 1988 and 1995, the Association's payments for the plan's prescription drugs increased at an average annual rate of about 21 percent, compared with an average annual rate of about 12 percent for total benefit payments. Moreover, prescription drug payments have constituted an increasingly greater share of total benefit payments, rising from about 13 percent in 1988 to about 23 percent in 1995 (see fig. 1). These payment increases appear to result mainly from increases in the number of prescriptions per enrollee and the price of prescriptions.

Before the benefit change, the approximately 800,000 people insured under the Association's Standard Option Plan who also had Medicare part B coverage did not pay anything for prescription drugs purchased at network retail pharmacies or through the mail order program. These people must now pay 20 percent of the price of prescriptions purchased at network retail pharmacies. Copayments for retail prescriptions were already required of other enrollees and are similar to those required in several other federal employee health plans. Without the benefit change, the Association contended that it would have had to increase monthly premiums for all of its federal enrollees with Standard Option coverage.

To review Medco's strategy for managing the anticipated increase in prescriptions and calls about them, Association staff met with Medco representatives on August 24, 1995. According to Medco officials, they estimated the size and timing of the increase by relying primarily on their own claims experience in managing pharmacy benefits for about 50 million people as well as data from a comparable benefit change made by Massachusetts Blue Cross and Blue Shield.
The resulting Medco forecast estimated a gradual 64-percent growth in 1996 mail order prescriptions. Using these data, Medco planned to gradually increase its capacity to handle prescriptions from about 110,000 a week during the last quarter of 1995 to 180,000 a week during the last quarter of 1996. Medco also planned to handle occasional surges in demand of up to 13 percent more than the forecasted number and increase its telephone capacity to respond to greater demand for customer service. According to our actuarial consultant's review of this forecast, however, more immediate growth in mail order prescriptions could have been expected from this cost-conscious group of enrollees.

OPM notified the Association that the benefit change had been approved in September 1995. Both OPM and Association officials contended that the change would promote more cost-effective use of the prescription drug benefit by encouraging enrollees to use the less expensive mail order program. According to the Association's actuarial analysis, which included Medco savings estimates related to its contract, the benefit change would save the plan about $193 million in 1996. OPM's actuarial analysis supported this estimated level of savings. Although these analyses did not include an audit of Medco's estimates or related supporting documentation, our actuarial consultant's review of the Association and OPM analyses indicated that the overall savings estimates were reasonable, though possibly understated.

The number of prescriptions received by Medco quickly surpassed Medco and Association expectations. During the first week of January 1996, the number of prescriptions rose to 157,000, and during the week ending January 27, 1996, they reached 233,000—an amount about 66 percent greater than expected. From the week ending March 9, 1996, through the week ending April 6, 1996, the number of weekly prescriptions received ranged between 175,000 and 187,000. Enrollees with Medicare part B benefits accounted for most of the increase in prescriptions. About 9 percent of these enrollees' prescriptions were purchased through the mail order program in 1995, a percentage that increased to about 38 percent by February 1996. Figure 2 shows the increase in mail order prescriptions contrasted with the number of forecasted prescriptions.

Medco's processing capacity could not absorb this rapid increase. The number of pharmacists was insufficient to handle prescription orders, and many enrollees did not get their prescriptions filled promptly. For example, although Medco's contract requires that it dispense or return 99 percent of the prescriptions it receives daily within 5 business days, Medco reported that this performance measure was met about 87 percent of the time in January 1996 and about 94 percent of the time in February 1996. In addition, many customer calls were delayed or went unanswered during January and February 1996. Medco's contract specifies that no more than 2 percent of customer calls a week receive a busy signal, known as call blockage. Although the call blockage rate averaged 1.8 percent a week for the 2-month period, about 8 percent, or 11,000 calls, received a busy signal during the week ending January 20, 1996. During the last week of January 1996, OPM informed Association officials of its disappointment with the customer service being provided to enrollees using the mail order program and indicated that corrective measures should be taken.
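Taking the reported figures at face value, a rough reconstruction shows how far demand outran the plan. If the 233,000 prescriptions received in the week ending January 27, 1996, were about 66 percent above the expected level, the implied forecast and the maximum planned surge capacity for that week were approximately

\[
\frac{233{,}000}{1.66} \approx 140{,}000, \qquad 140{,}000 \times 1.13 \approx 158{,}000,
\]

so even the 13-percent surge allowance left capacity well short of the volume actually received.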
Medco responded to the unanticipated demand and associated service problems by moving quickly to increase processing capacity. For example, during the week ending January 20, 1996, Medco officials expanded operations at the company's Florida and New Jersey pharmacies from a 5-1/2-day schedule to 7 days a week, with operating hours expanded from 15 hours to 19 hours daily. Medco also reassigned pharmacists who normally performed other Medco jobs to confirm phone and fax prescription orders. Medco officials also brought pharmacists and support personnel from pharmacies across the country to one Tampa pharmacy to increase processing capacity. OPM and the Association agreed that Medco would send medications by overnight mail to customers who would not otherwise receive their prescriptions within 5 business days. Between the weeks ending January 6, 1996, and April 27, 1996, Medco sent approximately 160,000 prescription packages by overnight mail at a cost of almost $1 million. In February 1996, OPM also indicated that the Association should arrange for mail order customers who needed delayed medications to get up to a 21-day supply from PCS network retail pharmacies without paying the 20-percent copayment. This ad hoc arrangement required PCS to respond quickly to the needs of the Association and over 5,000 enrollees who used this service. The copayments for over 10,000 retail prescriptions dispensed to these enrollees cost the plan approximately $291,000.

Although Medco continued to use extra means to deliver prescriptions to enrollees through the last week of April 1996, Association data show that the mail order program began to meet performance expectations for turning around prescriptions within 5 days in the week ending March 16, 1996. Medco had already begun to consistently meet performance expectations for customer service calls in the week ending February 10, 1996. The difficulties enrollees had with the mail order program during early 1996 were reflected in the Association's customer satisfaction survey of mail order customers. During the first quarter of 1996, about 81 percent of those surveyed indicated that they were satisfied with services. Enrollee responses indicated that they were most concerned about the time it took to fill prescriptions. About 75 percent responded that their prescriptions were filled promptly, down from quarterly averages of 94 percent in 1994 and 92 percent in 1995.

The National Association of Chain Drug Stores (NACDS) and many chain and independent pharmacies foresee the benefit change shifting millions of dollars in prescription drug sales to the mail order program. Because the benefit change is recent, we could not determine how many federal enrollees affected by the change will continue to shift prescriptions to the mail order program. Therefore, determining the benefit change's effect on retail pharmacies' sales is difficult. Nevertheless, payments to retail pharmacies for prescriptions dispensed to enrollees affected by the benefit change decreased substantially from 1995 to 1996, according to our analysis of PCS payments to retail pharmacies. (See fig. 3.) Figure 3 shows that between January and May 1995, total prescription payments to retail pharmacies for prescriptions dispensed to enrollees affected by the benefit change were about $259.6 million, compared with about $164.9 million between January and May 1996—a decrease of about 36 percent.
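As a quick check on the decrease cited from figure 3, using the rounded payment totals above:

\[
\frac{259.6 - 164.9}{259.6} \approx 0.365,
\]

or a decline of about 36 percent, as reported.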
Retail pharmacies serving the largest percentages of the federal enrollees affected by the benefit change experienced similar percentage decreases in prescription payments, according to PCS data. Between 1995 and 1996, Walgreens, Rite Aid, CVS, Revco, and Wal-Mart had, on average, a 41-percent decrease in total retail payments for prescriptions dispensed to the enrollees with Medicare part B coverage and a 14-percent decrease in total payments for prescriptions dispensed to all plan enrollees. Total payments to all retail pharmacies for prescriptions dispensed to enrollees in the Association's federal employee health plan also decreased between 1995 and 1996. This total includes payments for prescriptions dispensed to enrollees affected by the benefit change. PCS data indicate that between January and May 1995, total payments were about $473.3 million, compared with about $439.8 million between January and May 1996—a decrease of about 7 percent.

The Blue Cross and Blue Shield Association's contracts with Medco and PCS include annual performance measures that focus on savings and customer service. The contracts provide financial incentives for exceeding certain performance measures and penalties for not meeting them. According to information from Association officials, in 1995, Medco and PCS met most of their savings and customer service measures for the Blue Cross and Blue Shield Service Benefit Plan. The Blue Cross and Blue Shield Association estimated that its two PBMs saved the plan about $505 million in 1995. Association officials indicated that these savings are used to support the pharmacy benefit program, as well as to contain enrollee premiums, deductibles, and copayments. Savings in 1995 resulted from seven categories of PBM services, according to Association estimates. These estimated savings were based on what the Association projected it would have paid for prescription drugs and related services had it not contracted with the PBMs. The Association developed this methodology, which represents one way to determine potential savings from PBM services. We plan to evaluate the soundness of this methodology and compare it with those developed by other federal health plans for our final report. Figure 4 shows the percentage of total savings each of seven service categories represents.

Retail and mail order pharmacy discounts accounted for about $264 million in savings. For retail, the savings represent the discounts PCS achieved from negotiating with individual pharmacies the amount PCS would reimburse them for prescriptions. Mail order savings were derived from discounts that the Association negotiated with Medco. Maximum allowable cost (MAC) savings accounted for approximately $72 million in savings. MAC refers to the maximum price that retail pharmacies in PCS' network may be paid for certain generic drugs. Savings resulted from the difference between drugs' MAC prices and their usual and customary prices. Manufacturer rebates accounted for about $107 million in savings and represent the guaranteed discounts that PCS and Medco negotiated with drug manufacturers. The plan received 90 percent of the total rebates, and the PBMs retained 10 percent as an administrative fee and incentive to increase the amount of discounts. PCS did not meet its rebate guarantee in 1995 and as a result incurred a penalty. Concurrent and retrospective drug utilization review (DUR) accounted for about $10 million in savings that resulted from clinical activities the PBMs performed.
Concurrent DUR is performed before dispensing a drug to prevent problems such as drug interactions and therapeutic duplications. Retrospective DUR is a program PCS conducts to encourage physicians and enrollees to use the most cost-effective drugs and regimens to optimize drug therapies. Medco's intervention program accounted for about $13.5 million in savings. The program encourages patients to use, and physicians to prescribe, less expensive brand-name drugs considered as safe and effective as other, more expensive brand-name drugs. The prior approval program accounted for about $36.5 million in savings. This program covers 13 drugs that require Association approval before dispensing and derived savings from prescriptions denied reimbursement or never filled. The coordination of benefits (COB) program accounted for about $2 million in savings. COB is an industrywide method used to avoid paying duplicate benefits to an individual covered by another insurer.

The Association's contracts with its PBMs also specify performance measures for the quality of customer service provided to the federal plan and its enrollees. For example, as previously discussed, Medco's contract requires dispensing prescriptions and answering customer calls within specific time frames. Medco's contract also requires that its pharmacy dispense all of its prescriptions annually with less than a .005-percent error rate. In addition, PCS' contract has several guarantees for the accuracy and timeliness of prescription claims submitted by enrollees for reimbursement. In two instances, PCS did not meet claims timeliness guarantees and therefore paid the Association minor penalties. PCS' contract also guarantees that it provide plan enrollees convenient access to its network pharmacies. The guarantee states that a network pharmacy be located within 5 miles of 98 percent of the enrollees. PCS data indicate that this guarantee was met in 1995 and as of April 1996.

Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions. For more information on this testimony, please call John Hansen, Assistant Director, at (202) 512-7105. Other major contributors included Joel Hamilton, Jennifer Arns, and Mary Freeman.
GAO discussed the Blue Cross and Blue Shield Association's change in prescription drug benefits covered under its federal employee health plan. GAO noted that: (1) the Association's payments for prescription drugs increased at an average annual rate of about 21 percent between 1988 and 1995; (2) to manage costs, the Association contracted with two pharmacy benefit managers (PBMs) to provide retail prescription and mail order drug services; (3) to offset high prescription costs, the Association began requiring enrollees insured under the Standard Option Plan and covered by Medicare part B to pay 20 percent of their prescription costs at retail pharmacies; (4) the Association also encouraged enrollees to utilize the least expensive mail order option by offering prescriptions free of charge; (5) the demand for mail order prescriptions surpassed contractor and Association expectations, causing delays in prescription orders and customer calls; (6) critics of the benefit change believe that millions of dollars in prescription drug sales will be shifted away from retail drug stores to the mail order program; and (7) the Association estimated that its two PBMs saved $505 million in 1995.
A long-standing problem in DOD space acquisitions is that program and unit costs tend to go up significantly from initial cost estimates, while in some cases the capability that was to be delivered goes down. Figure 1 compares original cost estimates and current cost estimates for the broader portfolio of major space acquisitions for fiscal years 2010 through 2015. The wider the gap between original and current estimates, the fewer dollars DOD has available to invest in new programs. As shown in the figure, cumulative estimated costs for the major space acquisition programs have increased by about $13.9 billion from initial estimates for fiscal years 2010 through 2015, almost a 286 percent increase. The declining investment in the later years is the result of mature programs that have planned lower out-year funding, cancellation of several development efforts, and the exclusion of space acquisition efforts for which total cost data were unavailable (such as new investments). When space system investments other than established acquisition programs of record—such as the Defense Weather Satellite System (DWSS) and Space Fence programs—are also considered, DOD's space acquisition investments remain significant through fiscal year 2016, as shown in figure 2. Although estimated costs for selected space acquisition programs decrease 21 percent between fiscal years 2010 and 2015, they start to increase in fiscal year 2016. And, according to current DOD estimates, costs for two programs—Advanced Extremely High Frequency (AEHF) and Space Based Infrared System (SBIRS) High—are expected to significantly increase in fiscal years 2017 and 2018. The costs are associated with the procurement of additional blocks of satellites and are not included in the figure because they have not yet been reported or quantified. Figures 3 and 4 reflect differences in total program and unit costs for satellites from the time the programs officially began to their most recent cost estimates. As figure 4 shows, in several cases, DOD has increased the number of satellites. The figures reflect total program cost estimates developed in fiscal year 2010.

Several space acquisition programs are years behind schedule. Figure 5 highlights the additional estimated months needed for programs to launch their first satellites. These additional months represent time not anticipated at the programs' start dates. Generally, the further schedules slip, the more DOD is at risk of not sustaining current capabilities. For example, delays in launching the first MUOS satellite have placed DOD's ultra high frequency communications capabilities at risk of falling below the required availability level.

DOD has had long-standing difficulties on nearly every space acquisition program, struggling for years with cost and schedule growth, technical or design problems, as well as oversight and management weaknesses. However, to its credit, it continues to make progress on several of its high-risk space programs, and is expecting to deliver significant advances in capability as a result. The Missile Defense Agency's (MDA) Space Tracking and Surveillance System (STSS) demonstration satellites were launched in September 2009. Additionally, DOD launched its first GPS IIF satellite in May 2010 and plans to launch the second IIF satellite in June 2011—later than planned, partially because of system-level problems identified during testing.
It also launched the first AEHF satellite in August 2010—although it has not yet reached its final planned orbit because of an anomaly with the satellite's propulsion system—and launched the Space Based Space Surveillance (SBSS) Block 10 satellite in September 2010. DOD is scheduled to launch a fourth Wideband Global SATCOM (WGS) satellite—broadening the communications capability available to warfighters—in late 2011, and a fifth WGS satellite in early 2012. The Evolved Expendable Launch Vehicle (EELV) program had its 41st consecutive successful operational launch in May of this year. One program that appears to have recently overcome its remaining technical problems is the SBIRS High satellite program. The first of six geosynchronous earth-orbiting (GEO) satellites (two highly elliptical orbit sensors have already been launched) was launched in May 2011 and is expected to continue the missile warning mission with sensors that are more capable than those on the satellites currently on orbit. Total cost for the SBIRS High program is currently estimated at over $18 billion for six GEO satellites, representing a program unit cost of over $3 billion, about 233 percent more than the original unit cost estimate. Additionally, the launch of the first GEO satellite represents a delay of approximately 9 years. The reasons for the delay include poor government oversight of the contractor, unanticipated technical complexities, and rework. The program office is working to rebaseline the SBIRS High contract cost and schedule estimates for the sixth time. Because of the problems on SBIRS High, in 2007 DOD began a follow-on system effort, known as Third Generation Infrared Surveillance (3GIRS), to run in parallel with the SBIRS High program. DOD canceled the 3GIRS effort in fiscal year 2011 but plans to continue providing funds under the SBIRS High program for one of the 3GIRS infrared demonstrations. While DOD is having success in readying some satellites for launch, other space acquisition programs face challenges that could further increase costs and delay delivery targets. The programs that may be susceptible to cost and schedule challenges include MUOS and the GPS IIIA program. Delays in the MUOS program have resulted in potential gaps in critical capabilities for military and other government users. The GPS IIIA program was planned with an eye toward avoiding the problems that plagued the GPS IIF program, and it incorporated many of the best practices recommended by GAO, but the schedule leaves little room for potential problems, and there is a risk that the ground system needed to operate the satellites will not be ready when the first satellite is launched. Additionally, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was restructured as a result of poor program performance and cost overruns, which caused schedule delays. These delays have resulted in a potential capability gap for weather and environmental monitoring. Furthermore, new space system acquisition efforts getting underway—including the Air Force's Joint Space Operations Center Mission System (JMS) and Space Fence, and MDA's Precision Tracking and Surveillance System (PTSS)—face potential development challenges and risks, but it is too early to tell how significant these may be to meeting cost, schedule, and performance goals. Table 1 describes the status of these efforts in more detail.
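As a rough arithmetic check of the SBIRS High figures cited above (our back-of-the-envelope calculation, not GAO's):

    \[
      \text{unit cost} \approx \frac{\$18\ \text{billion}}{6\ \text{satellites}} \approx \$3\ \text{billion per satellite},
      \qquad
      \text{original unit estimate} \approx \frac{\$3\ \text{billion}}{1 + 2.33} \approx \$0.9\ \text{billion}
    \]

which is consistent with a current unit cost about 233 percent above the original unit cost estimate.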
Over the past year, we have completed reviews of sustaining and upgrading GPS capabilities and of commercializing space technologies under the Small Business Innovation Research (SBIR) program, and we have ongoing reviews of (1) DOD space situational awareness (SSA) acquisition efforts, (2) parts quality for DOD, MDA, and the National Aeronautics and Space Administration (NASA), and (3) a new acquisition strategy being developed for the EELV program. These reviews, discussed further below, underscore the varied challenges that still face the DOD space community as it seeks to complete problematic legacy efforts and deliver modernized capabilities. Our reviews of GPS and space situational awareness, for instance, have highlighted the need for more focused coordination and leadership for space activities that touch a wide range of government, international, and industry stakeholders, while our review of the SBIR program highlighted the substantial barriers and challenges small businesses must overcome to gain entry into the government space arena. GPS. We found that the GPS IIIA schedule remains ambitious and could be affected by risks such as the program's dependence on a ground system that will not be completed until after the first IIIA launch. We found that GPS constellation availability had improved, but in the longer term, a delay in the launch of the GPS IIIA satellites could still reduce the size of the constellation to fewer than 24 operational satellites—the number that the U.S. government commits to—which might not meet the needs of some GPS users. We also found that there were extensive, multiyear delays in the development of GPS ground control systems. Although the Air Force had taken steps to enable quicker procurement of military GPS user equipment, there were significant challenges to its implementation. This has had a significant impact on DOD because all three GPS segments—space, ground control, and user equipment—must be in place to take advantage of new capabilities. Additionally, we found that DOD had taken some steps to better coordinate all GPS segments, including laying out criteria and establishing visibility over a spectrum of procurement efforts, but it did not go as far as we recommended in 2009 in terms of establishing a single authority responsible for ensuring that all GPS segments are synchronized to the maximum extent practicable. Such an authority is warranted given the extent of the delays, the problems with synchronizing all GPS segments, and the importance of new capabilities to military operations. As a result, we reiterated the need to implement our prior recommendation. Small Business Innovation Research (SBIR). In response to a request from this subcommittee, we found that while DOD is working to commercialize space-related technologies under its SBIR program by transitioning these technologies into acquisition programs or the commercial sector, it has limited insight into the program's effectiveness. Specifically, DOD has invested about 11 percent of its fiscal years 2005–2009 research and development funds through its SBIR program to address space-related technology needs. Additionally, DOD is soliciting more space-related research proposals from small businesses. Further, DOD has implemented a variety of programs and initiatives to increase the commercialization of SBIR technologies and has identified instances where it has transitioned space-related technologies into acquisition programs or the commercial sector.
However, DOD lacks complete commercialization data to determine the effectiveness of the program in transitioning space-related technologies into acquisition programs or the commercial sector. Of the nearly 500 space-related contracts awarded in fiscal years 2005 through 2009, DOD officials could not, for various reasons, identify the total number of technologies that transitioned into acquisition programs or the commercial sector. Further, there are challenges to executing the SBIR program that DOD officials acknowledge and are planning to address, such as the lack of overarching guidance for managing the DOD SBIR program. Under this review, most stakeholders we spoke with—DOD, prime contractor, and small business officials—generally agreed that small businesses participating in the DOD SBIR program face difficulties transitioning their space-related technologies into acquisition programs or the commercial sector. Although we did not assess the validity of the concerns cited, the stakeholders we spoke with identified challenges inherent to developing space technologies; challenges related to the SBIR program's administration, timing, and funding; and other challenges related to participating in the DOD space system acquisitions environment. For example, some small business officials said that working in the space community is challenging because the technologies often require more expensive materials and testing than other technologies. They also mentioned that delayed contract awards and slow contract disbursements have caused financial hardships. Additionally, several small businesses cited concerns with safeguarding their intellectual property. Space Situational Awareness (SSA). We have found that while DOD has significantly increased its investment and planned investment in SSA acquisition efforts in recent years to address growing SSA capability shortfalls, most efforts designed to meet these shortfalls have struggled with cost, schedule, and performance challenges that are rooted in the systemic problems most space system acquisition programs have encountered over the past decade. Consequently, in the past 5 fiscal years, DOD has not delivered significant new SSA capabilities as originally expected. The capabilities that were delivered served to sustain or modernize existing systems rather than close capability gaps. To its credit, last fall the Air Force launched a space-based sensor that is expected to appreciably enhance SSA. However, two critical acquisition efforts that are scheduled to begin development within the next 2 years—Space Fence and JMS—face development challenges and risks, such as the use of immature technologies and plans to deliver all capabilities in a single, large increment rather than in smaller, more manageable increments. It is essential that these acquisitions be placed on a solid footing at the start of development to help ensure that their capabilities are delivered to the warfighter as and when promised. DOD plans to begin delivering other new capabilities in the coming 5 years, but it is too early to determine the extent to which these additions will address capability shortfalls. We have also found that there are significant inherent challenges to executing and overseeing the SSA mission, largely because of the sheer number of governmentwide organizations and assets involved in the mission. This finding is similar to what we have reported from other space system acquisition reviews over the years.
Additionally, while the recently issued National Space Policy assigns SSA responsibility to the Secretary of Defense, the Secretary does not necessarily have the corresponding authority to execute this responsibility. However, actions are being taken—such as the development of a national SSA architecture—that could help facilitate management and oversight governmentwide. The National Space Policy, which recognizes the importance of SSA, directs other positive steps, such as the determination of roles, missions, and responsibilities to manage national security space capabilities and the development of options for new measures for improving SSA capabilities. Furthermore, the recently issued National Security Space Strategy could help guide the implementation of the new space policy. We expect our report based on this review to be issued in June 2011. Parts quality for DOD, MDA, and NASA. Quality is paramount to the success of DOD space systems because of their complexity, the environment they operate in, and the high degree of accuracy and precision needed for their operations. Yet in recent years, many programs have encountered difficulties with the quality of workmanship and parts. For example, DOD's AEHF protected communications satellite has yet to reach its intended orbit because of a blockage in a propellant line. Also, MDA's STSS program experienced a 15-month delay in the launch of demonstration satellites because of a faulty manufacturing process for a ground-to-spacecraft communication system part. Furthermore, NASA's Mars Science Laboratory program experienced a 1-year delay in the development of the descent and cruise stage propulsion systems because of a welding process error. In June 2011, we plan to issue a report on the results of a review that focuses specifically on parts quality issues. We are examining the extent to which parts quality problems are affecting DOD, MDA, and NASA space and missile defense programs; the causes of these problems; and initiatives to detect and prevent parts quality problems. EELV acquisition strategy. DOD spends billions of dollars on launch services and infrastructure through two families of commercially owned and operated vehicles under the EELV program. This investment allows the nation to launch the national security satellites that provide the military and intelligence community with advanced space-based capabilities. DOD is preparing to embark on a new acquisition strategy for the EELV program. Given the costs and importance of space launch activities, it is vital that this strategy maximize cost efficiencies while still maintaining a high degree of mission assurance and a healthy industrial base. We are currently reviewing activities leading up to the strategy and plan to issue a report on the results of this review in June 2011. In particular, we are examining whether DOD has the knowledge it needs to develop a new EELV acquisition strategy and the extent to which there are important factors that could affect launch acquisitions. DOD continues to work to ensure that its space programs are more executable and produce a better return on investment. Many of the actions it has been taking address root causes of problems, though it will take time to determine whether these actions are successful, and they need to be complemented by decisions on how best to lead, organize, and support space activities. Our past work has identified a number of causes of the cost growth and related problems, but several consistently stand out.
First, on a broad scale, DOD has tended to start more weapon programs than it can afford, creating a competition for funding that encourages unrealistically low cost estimating, optimistic scheduling, overpromising, suppressing bad news, and, for space programs, forsaking the opportunity to identify and assess potentially more executable alternatives. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD is forced to continually shift funds to and from programs—particularly as programs experience problems that require additional time and money to address. Such shifts, in turn, have had costly, reverberating effects. Second, DOD has tended to start its space programs too early, that is, before it has assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. This tendency is caused largely by the funding process, since acquisition programs attract more dollars than efforts concentrating solely on proving technologies. Nevertheless, when DOD chooses to extend technology invention into acquisition, programs experience technical problems that require large amounts of time and money to fix. Moreover, when this approach is followed, cost estimators are not well positioned to develop accurate cost estimates because there are too many unknowns. Put more simply, there is no way to accurately estimate how long it will take to design, develop, and build a satellite system when critical technologies planned for that system are still in relatively early stages of discovery and invention. Third, programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenges or the maturity of the technologies necessary to achieve the full capability. DOD has preferred to make fewer but heavier, larger, and more complex satellites that perform a multitude of missions rather than larger constellations of smaller, less complex satellites that gradually increase in sophistication. This has stretched technology challenges beyond current capabilities in some cases and has vastly increased the complexities related to software. Programs also seek to maximize capability on individual satellites because launching them is expensive. Figure 6 illustrates the various factors that can break acquisitions. Many of these underlying issues affect the broader weapons portfolio as well, though we have reported that space programs are particularly affected by the wide disparity of users, including DOD, the intelligence community, other federal agencies, and in some cases, other countries, U.S. businesses, and citizens. Moreover, problematic implementation of a 1990s acquisition strategy for space systems, known as Total System Performance Responsibility, resulted in problems on a number of programs because the strategy was implemented in a manner that enabled requirements creep and poor contractor performance—effects that space programs are finally overcoming. We have also reported on shortfalls in resources for testing new technologies, which, coupled with less expertise and fewer contractors available to lead development efforts, have magnified the challenge of developing complex and intricate space systems. Our work—which is largely based on best practices in the commercial sector—has recommended numerous actions that can be taken to address the problems we identified.
Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions about moving to next phases. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space programs. These practices are highlighted in appendix I. Over the past several years, DOD has implemented or has been implementing a number of actions to reform how space and weapon systems are acquired, both through its own initiatives and through those required by statute. Additionally, DOD is evaluating and proposing new actions to increase space system acquisition efficiency and effectiveness. Because many of these actions are relatively new or not yet fully implemented, it is too early to tell whether they will be effective or effectively implemented. For space in particular, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin, that requirements are defined early in the process and are stable throughout, and that system design remains stable. DOD also intends to follow incremental or evolutionary acquisition processes rather than pursuing significant leaps in capabilities involving technology risk, and it has done so with the only new major satellite program undertaken by the Air Force in recent years—GPS IIIA. DOD is also providing more program and contractor oversight and putting in place military standards and specifications in its acquisitions. Additionally, DOD and the Air Force are working to streamline management and oversight of the national security space enterprise. For example, all Air Force space system acquisition responsibility has been aligned to the office that has been responsible for all other Air Force acquisition efforts, and the Defense Space Council—created last year—is reviewing, as one of its first agenda items, options for streamlining the many committees, boards, and councils involved in space issues. These and other actions that have been taken or are being taken that could improve space system acquisition outcomes are described in table 2. At the DOD-wide level, and as we reported last year, Congress and DOD have recently taken major steps toward reforming the defense acquisition system in ways that may increase the likelihood that weapon programs will succeed in meeting planned cost and schedule objectives. In particular, new DOD policy and legislative provisions place greater emphasis on front-end planning and on establishing sound business cases for starting programs. For example, the provisions require programs to invest more time and resources in refining concepts through practices such as early systems engineering, strengthen cost estimating, develop technologies, build prototypes, hold early milestone reviews, and develop preliminary designs before starting system development. These provisions are intended to enable programs to refine a weapon system concept and make cost, schedule, and performance trade-offs before significant commitments are made. In addition, DOD policy requires the establishment of configuration steering boards that meet annually to review program requirements changes and to make recommendations on proposed descoping options that could reduce program costs or moderate requirements.
Fundamentally, these provisions should help programs (1) replace risk with knowledge and (2) become more executable. Key DOD and legislative provisions, compared with factors we identified in programs that have been successful in meeting cost and schedule baselines, are summarized in table 3. Furthermore, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, signed into law on January 7, 2011, contains further direction aimed at improving acquisition outcomes, including, among other things, a requirement for the Secretary of Defense to issue guidance on the use of manufacturing readiness levels (including specific levels that should be achieved at key milestones and decision points), a provision elevating the role of combatant commanders in DOD's requirements-setting process, and provisions for improving the acquisition workforce. While it is too soon to determine whether Congress's and DOD's reform efforts will improve weapon program outcomes, DOD has taken steps to implement the provisions. For example, in December 2009, the department issued a new implementation policy that identifies roles and responsibilities and institutionalizes many of the requirements of the Weapon Systems Acquisition Reform Act of 2009. DOD has also filled several key leadership positions created by the legislation, including the Directors for Cost Assessment and Program Evaluation, Developmental Test and Evaluation, Systems Engineering, and Performance Assessments and Root Cause Analyses. To increase oversight, the department embarked on a 5-year effort to increase the size of the acquisition workforce by up to 20,000 personnel by 2015. Furthermore, the department began applying the acquisition reform provisions to some new programs currently in the planning pipeline. For example, many of the pre-Milestone B programs we reviewed this year as part of our annual assessment of selected weapon programs planned to conduct preliminary design reviews before going to Milestone B, although fewer are taking other actions, such as developing prototypes, that could improve their chances of success. With respect to space system acquisitions, GPS III in particular—DOD's newest major space system acquisition—has embraced the knowledge-based concepts behind our previous recommendations as a means of preventing large cost overruns and schedule delays. Additionally, the Office of the Secretary of Defense and the Air Force are proposing new acquisition strategies for satellites and launch vehicles. In June of last year, as part of the Secretary of Defense's Efficiencies Initiative, the Under Secretary of Defense for Acquisition, Technology and Logistics began an effort to restore affordability and productivity in defense spending. Major thrusts of this effort include targeting affordability and controlling cost growth, incentivizing productivity and innovation in industry, promoting real competition, improving tradecraft in services acquisition, and reducing nonproductive processes and bureaucracy. As part of this effort, the Office of the Secretary of Defense and the Air Force are proposing a new acquisition strategy for procuring satellites, called the Evolutionary Acquisition for Space Efficiency (EASE), to be implemented starting in fiscal year 2012.
Primary elements of this strategy include block buys of two or more satellites (economic order quantities) using a multiyear procurement construct, use of fixed-price contracting, stable research and development investment, evolutionary development, and stable requirements. According to DOD, EASE is intended to help stabilize funding, staffing, and subtier suppliers; help ensure mission continuity; reduce the impacts associated with obsolescence and production breaks; and increase long-term affordability, with cost savings of over 10 percent. DOD anticipates first applying the EASE strategy to the procurement of two AEHF satellites beginning in fiscal year 2012, followed by the procurement of two SBIRS High satellites beginning in fiscal year 2013. According to the Air Force, it will consider applying the EASE strategy—once it is proven—to other space programs, such as GPS III. We have not yet conducted a review of the EASE strategy to assess the potential benefits, challenges, and risks of its implementation. Questions about this approach would include the following: What are the major risks incurred by the government in using the EASE acquisition strategy? What level of risk (known unknowns and unknown unknowns) is being assumed in the estimates of savings to be accrued from the EASE strategy? How are evolutionary upgrades to capabilities to be pursued under EASE? How does the EASE acquisition strategy reconcile with current federal and DOD acquisition policy, acquisition and financial management regulations, and law? The Air Force is also developing a new acquisition strategy for its EELV program. Primarily, under the new strategy, the Air Force and the National Reconnaissance Office are expected to initiate block buys of eight first-stage booster cores—four for each EELV family, Atlas V and Delta IV—per year over 5 years to help stabilize the industrial base, maintain mission assurance, and avoid cost increases. As mentioned earlier, we have initiated a review of the development of the new strategy and plan to issue a report on our findings in June 2011. Given concerns raised through recent studies about visibility into costs and the industrial base supporting EELV, it is important that this strategy be supported with reliable and accurate data. The actions that the Office of the Secretary of Defense and the Air Force have been taking to address the acquisition problems listed in tables 2 and 3 are good steps. However, more changes to processes, policies, and support may be needed—along with sustained leadership and attention—to help ensure that these reforms can take hold, including addressing the diffuse leadership for space programs. Diffuse leadership has had a direct impact on the space system acquisition process, primarily because it has made it difficult to hold any one person or organization accountable for balancing needs against wants, for resolving conflicts among the many organizations involved with space, and for ensuring that resources are directed where they are needed. This has hampered DOD's ability to synchronize delivery of space, ground, and user assets for space programs. For instance, many of the cost and schedule problems we identified on the GPS program were tied in part to diffuse leadership and organizational stovepipes throughout DOD, particularly with respect to DOD's ability to coordinate delivery of space, ground, and user assets.
Additionally, we have recently reported that DOD faces a situation in which satellites with advances in capability will reside in space for years without users being able to take full advantage of them because investments and planning for the ground, user, and space components were not well coordinated. Specifically, we found that the primary reason user terminals are not well synchronized with their associated space systems is that user terminal development programs are typically managed by different military acquisition organizations than those managing the satellites and ground control systems. Recent studies and reviews examining the leadership, organization, and management of national security space have found that there is no single authority responsible below the President and that authorities and responsibilities are spread across the department. In fact, the national security space enterprise comprises a wide range of government and nongovernment organizations responsible for providing and operating space-based capabilities serving both military and intelligence needs. While some changes to the leadership structure have recently been made—including revalidating the role of the Secretary of the Air Force as the DOD Executive Agent for Space, disestablishing the Office of the Assistant Secretary of Defense for Networks and Information Integration and the National Security Space Office, and aligning Air Force space system acquisition responsibility into a single Air Force acquisition office—and others are being studied, it is too early to tell how effective these changes will be in streamlining management and oversight of space system acquisitions. Additionally, while the recently issued National Space Policy assigns responsibilities for governmentwide space capabilities, such as those for SSA, it does not necessarily assign the corresponding authority to execute those responsibilities. Finally, adequate workforce capacity is essential if the front-end planning activities now required by acquisition reform initiatives for new weapon programs are to be successful. However, studies have identified insufficient numbers of experienced space system acquisition personnel and inadequate continuity of personnel in project management positions as problems needing to be addressed in the space community. For example, a recent Broad Area Review of space launch, directed by the Secretary of the Air Force, noted that while the Air Force Space and Missile Systems Center workforce decreased by about 25 percent from 1992 to 2010, the number of acquisition programs increased by about 41 percent over the same period. Additionally, our own studies have identified gaps in key technical positions, which we believe have increased acquisition risks. For instance, in a 2008 review of the EELV program, we found personnel shortages in the EELV program office, particularly in highly specialized areas. According to the EELV program office and the Broad Area Review, this challenge persists. DOD is working to position itself to improve its space system acquisitions. After more than a decade of acquisition difficulties—which have created potential gaps in capability, diminished DOD's ability to invest in new space systems, and lessened DOD's credibility to deliver high-performing systems within budget and on time—DOD is starting to launch new generations of satellites that promise vast enhancements in capability.
In 1 year, DOD has launched or expects to launch newer generations of navigation, communications, SSA, and missile warning satellites. Moreover, given the nation's fiscal challenges, DOD's focus on fixing problems and implementing reforms rather than taking on new, complex, and potentially higher-risk efforts is promising. However, challenges to keeping space system acquisitions on track remain, including pursuing evolutionary acquisitions over revolutionary ones, managing requirements, providing effective coordination across the diverse organizations interested in space-based capabilities, and ensuring that technical and programmatic expertise are in place to support acquisitions. DOD's newest major space system acquisition efforts, such as GPS IIIA, DWSS, JMS, Space Fence, and the follow-on to the SBSS, will be key tests of how well DOD's reforms and reorganizations have positioned it to manage these challenges. We look forward to working with DOD to help ensure that these and other challenges are addressed. Chairman Nelson, Ranking Member Sessions, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Art Gallegos, Assistant Director; Kristine Hassinger; Arturo Holguín; Rich Horiuchi; Roxanna Sun; and Bob Swierczek.
Prioritize investments so that projects can be fully funded and it is clear where projects stand in relation to the overall portfolio.
Follow an evolutionary path toward meeting mission needs rather than attempting to satisfy all needs in a single step.
Match requirements to resources—that is, time, money, technology, and people—before undertaking a new development effort.
Research and define requirements before programs are started and limit changes after they are started.
Ensure that cost estimates are complete, accurate, and updated regularly.
Commit to fully fund projects before they begin.
Ensure that critical technologies are proven to work as intended before programs are started.
Assign more ambitious technology development efforts to research departments until they are ready to be added to future generations (increments) of a product.
Use systems engineering to close gaps between resources and requirements before launching the development process.
Use quantifiable data and demonstrable knowledge to make go/no-go decisions, covering critical facets of the program such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers.
Do not allow development to proceed until certain thresholds are met—for example, a high proportion of engineering drawings completed or production processes under statistical control.
Empower program managers to make decisions on the direction of the program and to resolve problems and implement solutions.
Hold program managers accountable for their choices.
Require program managers to stay with a project to its end.
Hold suppliers accountable to deliver high-quality parts for their products through such activities as regular supplier audits and performance evaluations of quality and delivery, among other things.
Encourage program managers to share bad news, and encourage collaboration and communication. In preparing this testimony, we relied on our body of work in space programs, including previously issued GAO reports on assessments of individual space programs, common problems affecting space system acquisitions, and the Department of Defense’s (DOD) acquisition policies. We relied on our best practices studies, which comment on the persistent problems affecting space system acquisitions, the actions DOD has been taking to address these problems, and what remains to be done, as well as Office of the Secretary of Defense and Air Force documents addressing these problems and actions. We also relied on work performed in support of our annual weapons system assessments, and analyzed DOD funding estimates to assess cost increases and investment trends for selected major space system acquisition programs. The GAO work used in preparing this statement was conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Despite decades of significant investment, most of the Department of Defense's (DOD) large space acquisition programs have collectively experienced billions of dollars in cost increases, stretched schedules, and increased technical risks. Significant schedule delays of as much as 9 years have resulted in potential capability gaps in missile warning, military communications, and weather monitoring. These problems persist, with other space acquisition programs still facing challenges in meeting their targets and aligning the delivery of assets with appropriate ground and user systems. To address cost increases, DOD has reduced the number of satellites it would buy, reduced satellite capabilities, or terminated major space system acquisitions. DOD has also taken broad actions to prevent such problems from recurring in new programs, including better managing the acquisition process, strengthening oversight of its contractors, and resolving technical and other obstacles to its ability to deliver capability. This testimony focuses on (1) the status of space system acquisitions, (2) the results of GAO's space-related reviews over the past year and the challenges they signify, (3) the efforts DOD has taken and is currently undertaking to address the causes of problems and increase credibility and success in its space system acquisitions, and (4) what remains to be done. Over the past two decades, DOD has had difficulties with nearly every space acquisition program, with years of cost and schedule growth, technical and design problems, and oversight and management weaknesses. However, to its credit, DOD continues to make progress on several of its programs--such as the Space Based Infrared System High and Advanced Extremely High Frequency programs--and is expecting to deliver significant advances in capability as a result. But other programs continue to be susceptible to cost and schedule challenges. For example, the Global Positioning System (GPS) IIIA program's total cost has increased by about 10 percent over its original estimate, and delays in the Mobile User Objective System continue to pose the risk of a capability gap in ultra high frequency satellite communications. In 2010, GAO assessed DOD's efforts to (1) upgrade and sustain GPS capabilities and (2) commercialize or incorporate into its space acquisition programs the space technologies developed by small businesses. These reviews underscore the varied challenges that still face the DOD space community as it seeks to complete problematic legacy efforts and deliver modernized capabilities--for instance, the need for more focused coordination and leadership for space activities--and highlight the substantial barriers and challenges that small businesses must overcome to gain entry into the government space arena. DOD continues to work to ensure that its space programs are more executable and produce a better return on investment. Many of the actions it has been taking address root causes of problems, though it will take time to determine whether these actions are successful. For example, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin and that requirements are defined early in the process and are stable throughout. Additionally, DOD and the Air Force are working to streamline management and oversight of the national security space enterprise.
While DOD actions to date have been good, more changes to processes, policies, and support may be needed--along with sustained leadership and attention--to help ensure that these reforms can take hold, including addressing the diffuse leadership for space programs. While some changes to the leadership structure have recently been made and others are being studied, it is too early to tell how effective they will be in streamlining management and oversight of space system acquisitions. Finally, while space system acquisition workforce capacity is essential if new weapon programs are to be successful, DOD continues to face gaps in technical and programmatic expertise for space.
Historically, tribes have been granted federal recognition through treaties, by the Congress, or through administrative decisions within the executive branch—principally by BIA within the Department of the Interior. (See app. I for additional information on how tribes have been recognized.) In a 1977 report to the Congress, the American Indian Policy Review Commission found that the criteria used by the Department to assess a group's status were not very clear and concluded that a large part of its recognition policy depended on which official responded to the group's inquiries. Until the 1960s, the limited number of requests by groups to be federally recognized permitted the Department to assess a group's status on a case-by-case basis without formal guidelines. However, in response to an increase in the number of requests for federal recognition, the Department determined that it needed a uniform approach to evaluate these requests. In 1978, it established a regulatory process for recognizing tribes. In 1994, the Department revised the regulations to clarify what evidence was needed to support the requirements for recognition, although the basic criteria used to evaluate a petition were not changed. In addition, in 1997 BIA updated guidelines on the process, and in February 2000, BIA issued a notice in the Federal Register clarifying internal processing procedures. In summary, a group enters the regulatory process and becomes a petitioner by submitting a letter of intent requesting recognition. A petitioner then must provide documentation that addresses seven criteria and that, in general, demonstrates continuous existence as a political and social community descended from a historic tribe. The technical staff within BIA's Branch of Acknowledgement and Research (BAR) reviews the submitted documentation, provides technical review and assistance, and determines, with the petitioner's concurrence, when the petition is ready for active consideration. Once the petition enters active consideration, the BAR staff reviews the documented petition and makes recommendations on a proposed finding either for or against recognition. Staff recommendations are subject to review by the Department's Office of the Solicitor and senior officials within BIA, culminating with the approval of the Assistant Secretary-Indian Affairs. After a proposed finding is approved by the Assistant Secretary, it is published in the Federal Register, and a period of further comment, document submission, and response is allowed. The BAR staff reviews the comments, documentation, and responses and makes recommendations on a final determination that are subject to the same levels of review as a proposed finding. The process culminates in a final determination by the Assistant Secretary that, depending on the nature of further evidence submitted, may or may not be the same as the proposed finding. Requests for reconsideration may be filed with the Interior Board of Indian Appeals within 90 days after the final determination. This review process can result in affirmation of the Assistant Secretary's decision or direction to the Assistant Secretary to issue a reconsidered determination. BIA has received 250 petitions for recognition under this process. However, many of these petitions consist only of letters of intent to petition or are petitions for which only partial documentation has been submitted.
Others are no longer active because they have been withdrawn or resolved outside the regulatory process or because a petitioner has lost contact with BIA. In fact, of the 250 petitions, only 55 have completed documentation that allows them to be considered by the process. For those completed petitions, BIA has finalized 29 decisions—14 recognizing a tribe and 15 denying recognition. Of the remaining 26 completed petitions, 3 decisions are pending; 13 are under active consideration; and 10 are ready and awaiting active consideration. A complete outline of the process and the status of the 250 petitions are provided in appendix II. With federal recognition, Indian tribes become eligible to participate in billion-dollar federal assistance programs and can be granted significant privileges as sovereign entities—including exemptions from state and local jurisdiction and the ability to establish casino gambling operations. Federally recognized tribes and their members have almost exclusive access to about $4 billion annually in federal funding through direct payments and services unavailable to the general public or to Indians who are not members of recognized tribes. For example, tribal governments can receive direct payments to provide community services, such as health clinics or sewer improvements, and members of tribes may be eligible for housing programs or small business loans. The exemptions from state and local jurisdiction for recognized tribes generally apply to lands that the federal government has taken in trust for a tribe or its members. Currently, about 54 million acres of land are held in trust. The Indian Gaming Regulatory Act of 1988 (IGRA), which regulates Indian gambling operations, permits a tribe to operate casinos on land in trust if the state in which the land lies allows casino-like gambling and the tribe has entered into a compact with the state regulating its gambling businesses. In 1999, tribes collected about $10 billion in gambling revenue. Federal recognition provides a tribe with access to special Indian programs reserved almost exclusively for recognized tribes and their members. The Department of Health and Human Services' Indian Health Service (IHS) and BIA—the two main agencies that provide funding and services to tribes and their members—have a combined annual budget of over $4 billion (see table 1). The combined funding for the two agencies has increased by $1 billion (in real terms) since the regulatory recognition process was established in 1978. Both agencies have established procedures for funding newly recognized tribes. At IHS, newly recognized tribes are assigned funds on a case-by-case basis. At BIA, newly recognized tribes with 1,500 members or fewer are provided with base funding of $160,000; tribes with 1,501 to 3,000 members are provided $300,000; and the base funding for tribes with more than 3,000 members is determined on a case-by-case basis. In addition to the funding and services from IHS and BIA, the Office of Management and Budget estimates that for fiscal year 2000 an additional $3.9 billion was appropriated for other federal programs specifically for Indians or as set-asides for Indians within larger programs (see table 2). Federal recognition is not necessarily an eligibility requirement for these programs. In fact, the eligibility requirements for these programs vary widely, making it difficult to estimate the funding for programs that require federal recognition for eligibility.
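The BIA base-funding tiers described above amount to a simple step function of tribal membership; a minimal sketch, with a function name and case-by-case placeholder that are ours, purely illustrative:

    def bia_base_funding(members: int) -> int | None:
        """Return BIA base funding for a newly recognized tribe, per the
        tiers described above; None signals a case-by-case determination."""
        if members <= 1_500:
            return 160_000
        if members <= 3_000:
            return 300_000
        return None  # tribes with more than 3,000 members: case-by-case

    # Example: a newly recognized tribe of 2,000 members would receive $300,000.
    assert bia_base_funding(2_000) == 300_000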
Tribes may have been eligible for some of these programs, or for similar programs, prior to their federal recognition. For example, the Department of Housing and Urban Development provides grant funding under the Native American Housing Assistance and Self-Determination Act to federally recognized tribes and some nonfederally recognized Indian groups. Additionally, Indians, as U.S. citizens, are eligible to receive assistance from any federal program for which they meet the eligibility requirements. By 1886, Indian lands had been reduced to about 140 million acres, largely on reservations west of the Mississippi River. The federal government's Indian policy encouraging assimilation further reduced Indian land holdings by two-thirds, to about 49 million acres in 1934. However, in 1934, the government's Indian policy changed to encourage tribal self-governance with the Indian Reorganization Act. The act provided the Secretary the authority to take land in trust on behalf of federally recognized tribes or their members. Since 1934, the total acreage held in trust by the federal government for the benefit of tribes and their members has increased from about 49 million to about 54 million acres. Much of the recent controversy over recognition decisions, whether made by the Congress or the Department, stems from events that can occur only after a tribe is recognized. With recognition, the federal government can take land in trust for tribes that may not have a land base or may want to add to their land base. This raises concerns from local communities regarding the loss of local jurisdiction over the land. For example, land taken in trust is no longer subject to local property taxes and zoning ordinances. Additionally, gambling may occur on land held in trust by the federal government for tribes or their members. However, the process of taking land in trust, like gambling, is not governed by the same laws and regulations that govern tribal recognition. Land may be taken in trust through legislation or BIA regulations. The regulations governing the land-in-trust process became effective in October 1980 and set forth criteria, including the impact on the local tax base and jurisdictional problems, that the Secretary should consider in evaluating requests to take land in trust. At that time, the regulations did not require notification of affected state and local communities, nor did they allow for outside comments. Taking land in trust became more controversial with the enactment of IGRA in 1988. In 1995, the land-in-trust regulations were revised to require that affected state and local governments be notified of each land-in-trust request and that they be given 30 days to submit written comments. The revised regulations also distinguished between on- and off-reservation acquisitions. The criteria for off-reservation acquisitions became more stringent, and state and local governments' concerns were given more weight. Indian gambling, a relatively new phenomenon, started in the late 1970s when a number of Indian tribes began to establish bingo operations as a supplemental means of funding tribal operations. However, state governments began to question whether tribes possessed the authority to conduct gambling independently of state regulation. Although many lower courts upheld the tribal position, the matter was not resolved until 1987, when the U.S. Supreme Court issued its decision in California v. Cabazon Band of Mission Indians.
That decision confirmed the tribes' authority to establish gambling operations on their reservations outside state regulation—provided the affected state permitted some type of gambling. In 1988, the Congress passed IGRA, which established a regulatory framework to govern Indian gambling operations. One of the more important features of IGRA is that only federally recognized Indian tribes may engage in gambling. IGRA established three classes of gambling to be regulated by a combination of tribal governments, state governments, BIA, and the National Indian Gaming Commission (NIGC)—an entity created by IGRA to enforce IGRA requirements and to ensure the integrity of Indian gambling operations. Under IGRA, Class I gambling consists of social gambling for minimal prizes or ceremonial gambling. It is regulated solely by the tribe and requires no financial reporting to other authorities. Class II gambling consists of pull-tabs, bingo-like games, and punch boards. A tribe may conduct, license, and regulate Class II gambling if (1) the state in which the tribe is located permits such gambling for any purpose by any person or organization and (2) the tribal governing body adopts a gambling ordinance that is approved by NIGC. Class III gambling consists of all other forms of gambling, including casino games, slot machines, and pari-mutuel betting. It, too, is allowed only in states that permit similar types of gambling. The courts have interpreted this to mean, for example, that even if a state allows only charitable casino nights and state-run lotteries, tribes may operate casinos. Additionally, to balance the interests of both the state and the tribe, IGRA requires that tribes and states negotiate a compact regulating the tribal gambling operations. The Department of the Interior must approve the compact. IGRA also requires a tribe to adopt a gambling ordinance, which must be approved by NIGC. According to the June 1999 final report of the National Gambling Impact Study Commission, gambling revenues have proven to be a critical source of funding for many tribal governments, providing much needed improvements in the health, education, and welfare of Indians living on reservations across the United States. In the 5-year span from fiscal year 1995 through fiscal year 1999, gambling revenues almost doubled, from $5.5 billion to $9.8 billion—surpassing even Nevada, with fiscal year 1999 revenues of $8.5 billion, and Atlantic City, with $4.2 billion. However, of the 561 recognized tribes, only 193, or about 34 percent, actually participate in gambling, and only 27 tribes (or about 5 percent) generate more than $100 million on an annual basis. According to NIGC, during fiscal year 1999, those 27 tribes produced two-thirds of all Indian gambling revenue—$6.4 billion out of total revenues of $9.8 billion. According to the National Gambling Impact Study Commission report, some tribes have rejected Indian gambling in referenda. The report notes that other tribal governments are in the midst of policy debates about whether to permit gambling and related commercial developments on their reservations. Not all gambling facilities achieve the same benefits or success. Some tribes operate their casinos at a loss, and a few have been forced to close money-losing facilities. Appendix III provides more detailed information on Indian gambling operations. We have identified areas in the BIA regulatory process where changes could better ensure more predictable and timely decisions.
First, clearer guidance is needed on the key aspects of the criteria and supporting evidence used in recognition decisions. In particular, guidance is needed for instances when limited evidence is available to demonstrate a petitioner's compliance with the criteria. The Department has continued to struggle with the question of what level of evidence is sufficient to meet the criteria in recognition cases. The lack of guidance in this area creates controversy and uncertainty for all parties about the basis for decisions reached. Second, the process is also hampered by limited resources, a lack of time frames, and ineffective procedures for providing information to interested third parties. As a result, there is a growing number of completed petitions waiting to be considered. BIA officials estimate that it may take up to 15 years before all of the currently completed petitions are resolved, even though active consideration of a completed petition was designed to reach a final decision in about 2 years. BIA regulations lay out seven criteria that must all be met before a group can become a federally recognized tribe. These criteria, if met, identify those Indian groups with inherent sovereignty that have existed continuously and that are entitled to a government-to-government relationship with the United States. In general, a technical staff within BIA, consisting of historians, anthropologists, and genealogists, evaluates the evidence submitted by a petitioner and makes a recommendation on whether or not to recognize the group as a tribe. After being reviewed by Bureau officials and the Department's Office of the Solicitor, the recommendation is presented to the Assistant Secretary-Indian Affairs, who may accept or reject it. The regulations also call for guidelines that explain the criteria, the types of evidence that may be used to demonstrate particular criteria, and other information. However, the guidelines, which were last updated in 1997, do not provide much guidance on the consideration of the criteria and evidence. Rather, the guidelines are generally geared toward providing petitioners with a basic understanding of the process. The following are the seven criteria for recognition under the regulatory process:
(a) The petitioner has been identified as an American Indian entity on a substantially continuous basis since 1900.
(b) A predominant portion of the petitioning group comprises a distinct community and has existed as a community from historical times until the present.
(c) The petitioner has maintained political influence or authority over its members as an autonomous entity from historical times until the present.
(d) The group must provide a copy of its present governing documents and membership criteria.
(e) The petitioner's membership consists of individuals who descend from a historical Indian tribe or tribes, which combined and functioned as a single autonomous political entity.
(f) The membership of the petitioning group is composed principally of persons who are not members of any acknowledged North American Indian tribe.
(g) Neither the petitioner nor its members are the subject of congressional legislation that has expressly terminated or forbidden recognition.
While we found general agreement about the seven criteria that groups must meet to be granted recognition, there is great potential for disagreement when evidence to support the criteria is lacking.
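Because recognition requires that all seven criteria be satisfied, the decision rule itself is a simple conjunction; a purely illustrative sketch (the names are ours, and the hard part in practice, judging whether the evidence satisfies each criterion, is exactly what the guidelines leave open):

    # Short labels for the seven regulatory criteria, (a) through (g).
    CRITERIA = (
        "identified as an American Indian entity continuously since 1900",
        "distinct community from historical times to the present",
        "political influence or authority maintained as an autonomous entity",
        "present governing documents and membership criteria provided",
        "membership descends from a historical Indian tribe or tribes",
        "members not principally members of an acknowledged tribe",
        "no legislation terminating or forbidding recognition",
    )

    def meets_all_criteria(findings: dict[str, bool]) -> bool:
        """A petition succeeds only if every criterion is met; a criterion
        with no supporting evidence counts as unmet (lack of evidence is
        cause for denial under the regulations)."""
        return all(findings.get(c, False) for c in CRITERIA)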
The need for clearer guidance on criteria and evidence used in recognition decisions became evident in a number of recent cases. The BIA technical staff, in conducting a detailed review of the evidence submitted, relies on precedents from past decisions in assessing whether a petitioner meets the criteria in order to ensure consistency in its recommendations. However, the Assistant Secretary has rejected several recent recommendations made by the technical staff, all resulting in either proposed or final decisions to recognize tribes when the staff had recommended against recognition. While the technical staff claims that its recommendations were based on precedent, transparent guidance on past precedents is not readily available to affected parties or the decisionmaker. At the same time, while the Assistant Secretary is charged with making the final decisions, it is not always clear why the Assistant Secretary differed with the technical staff recommendations. Much of the current controversy surrounding the regulatory process stems from these cases. The regulations state that lack of evidence is cause for denial, but they note that historical situations and inherent limitations in the availability of evidence must be considered. At the heart of the recent differences between the staff’s recommendations and the Assistant Secretary’s decisions are different positions on what is required to support two key aspects of the criteria. In particular, there are differences over (1) what is needed to demonstrate continuous existence and (2) the proportion of members of the petitioning group that must demonstrate descent from a historic tribe. Concerns over what constitutes continuous existence have centered on the allowable gap in time during which there is limited or no evidence that a petitioner has met one or more of these criteria. In one case, the technical staff recommended that a petitioner not be recognized because there was a 70-year period for which there was no evidence that the petitioner satisfied the criteria for continuous existence as a distinct community exhibiting political authority. The technical staff concluded that a 70-year evidentiary gap was too long to support a finding of continuous existence. The staff based its conclusion on precedent established through previous decisions where the absence of evidence for shorter periods of time had served as grounds for finding that petitioners did not meet these criteria. However, in this case, the Assistant Secretary issued a proposed finding to recognize the petitioner, concluding that continuous existence could be presumed despite the lack of specific evidence for a 70-year period. The 1997 guidelines generally do not provide any discussion of past precedents in dealing with gaps in evidence when trying to meet the continuous existence criterion. Furthermore, while the regulations allow for the consideration of reasons that might limit available evidence, the Assistant Secretary’s decision did not explain why evidence might be limited. Such an explanation would seem appropriate as part of the report called for in the regulations that summarizes the evidence, reasoning, and analyses that are the basis for proposed findings. The Department has grappled with this issue in the past. 
In updating the recognition regulations in 1994, it noted that the primary question of evidence in recognition cases is usually not how to weigh evidence for and against a position, but whether the level of evidence is high enough, even in the absence of negative evidence, to demonstrate meeting a criterion. For example, the 1994 regulations clarify the standard for demonstrating continuous existence by requiring that a petitioner demonstrate that it meets the criterion of a distinct community with political authority on a "substantially continuous basis" and by explaining that this does not require meeting the criterion at every point in time. However, the regulations specifically decline to define a permissible interval during which a group could be presumed to have continued to exist if the group could demonstrate its existence before and after the interval. BIA stated that establishing a specific interval would be inappropriate because the significance of the interval must be considered in light of the character of the group, its history, and the nature of the available evidence. BIA also noted that its experience has been that historical evidence of tribal existence is often not available in clear, unambiguous packets relating to particular points in time. While the consideration of continuous existence in light of limited evidence in different historical circumstances will always be a difficult issue, the 1997 guidelines, which could provide guidance based on how this issue was handled in previous cases, are largely silent on it.

Another key aspect of the criteria that has stirred controversy and created uncertainty is the proportion of a petitioner's membership that must demonstrate descent from a historic Indian tribe. In one case, the technical staff recommended that a petitioner not be recognized because the petitioner could only demonstrate that 48 percent of its members were descendants. The technical staff concluded that finding that the petitioner had satisfied this criterion would have been a departure from precedent established through previous decisions in which petitioners found to meet this criterion had demonstrated a higher percentage of membership descent from a historic tribe. However, in the proposed finding, the Assistant Secretary found that the petitioner satisfied the criterion. The Assistant Secretary told us that this decision was not consistent with previous decisions by other Assistant Secretaries but that he believed the decision to be fair because the standard used for previous decisions was unfairly high. Clear guidance on this aspect of the criterion is lacking. The 1997 guidelines do not provide any information on past precedents used in assessing a petitioner's ability to demonstrate descent. Further, the Assistant Secretary's written decision did not explain why evidence might be limited and perhaps cause a deviation from past precedent or why past standards were unfairly high in this case. Without such an explanation, the report, which the regulations call for to summarize the evidence, reasoning, and analyses that serve as the basis for proposed findings, is incomplete. When the Department revised the regulations in 1994, it clarified, to a modest extent, what was required of petitioners to meet the criterion of membership descent from historic tribes.
However, the Department stated that it intentionally avoided establishing a specific percentage of members required to demonstrate descent because the significance of the percentage varies with the history and nature of the petitioner and the particular reasons why a portion of the membership may not meet the requirements of the criterion. The current language under the criterion states only that a petitioner's membership must consist of individuals who descend from historic tribes—no minimum percentage or quantifying term such as "most" or "some" is used; the 1997 guidelines note only that descent need not be demonstrated for 100 percent of the membership. Again, the 1997 guidelines provide no discussion of past precedents to provide guidance on how this issue was handled in the past.

While the 1994 revision to the regulations helped clarify what is required of petitioners to be granted federal recognition, the Department intentionally left key aspects of the criteria open to interpretation to accommodate the unique characteristics of individual petitions. However, leaving key aspects open to interpretation increases the risk that the criteria may be applied inconsistently to different petitioners. To mitigate this risk, BIA uses precedents established in past decisions to provide guidance in interpreting key aspects of the criteria. A February 2000 Federal Register notice concerning changes to the internal processing of recognition petitions states that the process will continue to apply the precedents established in past decisions. However, the regulations and accompanying guidelines are silent regarding the role of precedent in making decisions or the circumstances that may cause deviation from precedent. Thus, it becomes difficult for petitioners, third parties, and future decisionmakers—who may want to consider precedents in past decisions—to understand the basis for some decisions reached. If there are precedents regarding aspects of the criteria like continuous existence and the proportion of membership demonstrating descent, it is not clear what they are or how that information is made available to petitioners, third parties, and decisionmakers.

Ultimately, BIA and the Assistant Secretary will still have to make difficult decisions about petitions when it is unclear whether a precedent applies or even exists. Because these circumstances require the judgment of the decisionmaker, acceptance of BIA and the Assistant Secretary as key decisionmakers is extremely important. A lack of clear and transparent explanations of the decisions reached may cast doubt on the objectivity of decisionmakers, making it difficult for parties on all sides to understand and accept those decisions, regardless of their merit or direction.

Because of limited resources, a lack of time frames, and ineffective procedures for providing information to interested third parties, the length of time involved in reaching final decisions is substantial. The workload of the BIA staff assigned to evaluate recognition petitions has increased while resources have declined. BIA, working in conjunction with a petitioner to ensure that all documentation is provided, determines when a petition is complete and thus ready for active consideration (ready status). Once a petition is deemed ready for active consideration, petitioners and other interested parties must wait until BIA has staff available to begin active consideration.
BIA begins active consideration of complete petitions (active status) based on the order in which petitioners entered ready status. There was a large influx of petitions placed into ready status in the mid-1990s. Of the 55 petitions that BIA has placed in ready status since the inception of the regulatory process in 1978, 23 (42 percent) were placed there between 1993 and 1997 (see fig. 1). There are currently 10 petitions in ready status—and 6 of these have been waiting at least 5 years. In addition, BIA staff is fully committed to the active consideration of another 13 petitions. According to BIA staff, the petitions under active consideration and those awaiting review are becoming more complex and detailed as both petitioners and third parties, with increasing interests at stake, commit significant resources to their petitions and comments. The chief of the branch responsible for evaluating petitions told us that, based solely on the historic rate at which BIA has issued final determinations, it could take 15 years to resolve all the petitions currently awaiting active consideration. In contrast, the regulations outline a process for active consideration of a completed petition that should take about 2 years.

Compounding the backlog of petitions awaiting evaluation, the increased number of related administrative responsibilities that the technical staff must assume further limits the proportion of its time spent on evaluating petitions. Although it could not provide precise data, the BIA technical staff estimated that it spends up to 40 percent of its time on administrative responsibilities. In particular, there are substantial numbers of Freedom of Information Act (FOIA) requests for information related to petitions. Also, petitioners and third parties frequently file requests for reconsideration of recognition decisions that are reviewed by the Interior Board of Indian Appeals, requiring the staff to prepare the record and the response to issues referred to the Board. Finally, the regulatory process has been subject to an increasing number of lawsuits from dissatisfied parties. These lawsuits include petitioners who have completed the process and been denied recognition as well as current petitioners who are dissatisfied with the amount of time it is taking to process their petitions. BIA is currently involved in 17 cases before federal circuit and district courts concerning the recognition process. Eight of these cases are inactive for a variety of reasons, such as the courts awaiting BIA action on pending petitions. However, depending on circumstances, these inactive cases may be reactivated at any time.

While the workload associated with evaluating petitions for recognition has increased, the available resources have decreased. Staff represents the vast majority of the resources BIA uses to evaluate petitions and perform related administrative duties. The number of BIA staff assigned to evaluate petitions peaked in 1993 at 17. However, in the last 5 years, the number of staff has averaged less than 11, a decrease of more than 35 percent. BIA, responsible for a wide variety of programs for recognized tribes, faced overall funding cutbacks in the mid-1990s. Given the need for funding to provide services to currently recognized tribes, funding for staffing the recognition process was not as high a priority. As a result, BIA made no request for additional staff for fiscal years 1995 through 2000 and requested only one additional staff person for fiscal years 2001 and 2002.
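The 15-year figure can be roughly reproduced from the counts reported in this testimony. The sketch below is our back-of-envelope reconstruction, not BIA's method: it assumes final determinations were issued at a constant rate over the life of the process, which the report does not state.

```python
# Back-of-envelope reconstruction of the roughly 15-year estimate for
# resolving the backlog. The constant-rate assumption is ours; the
# report does not say how the branch chief derived the figure.
years_elapsed = 2001 - 1978        # the regulatory process began in 1978
determinations_issued = 32         # petitions that have completed active consideration
pending_petitions = 13 + 10        # 13 under active consideration + 10 in ready status

historic_rate = determinations_issued / years_elapsed      # ~1.4 per year
years_to_clear = pending_petitions / historic_rate

print(f"historic rate: {historic_rate:.1f} determinations per year")
print(f"years to clear backlog: {years_to_clear:.0f}")     # ~17, in line with the estimate
```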
Notably, less funding has been provided within BIA to process petitions than has been provided in federal grants to petitioning groups through a program administered by the Department of Health and Human Services' Administration for Native Americans (ANA). In fiscal year 2000, estimated funding for BIA staff evaluating petitions and related costs was about $900,000, while funding for the ANA grants has averaged about $1.8 million a year for the last 9 years.

While resources have not kept pace with workload, the process also lacks effective procedures for addressing the workload in a timely manner. The process lacks any real timelines that impose a sense of urgency. There are no time frames established for petitioners to submit documentation with their letters of intent to petition. While BIA has received 250 petitions for recognition, many of these are only letters of intent, and in some instances, BIA has received nothing else in over 20 years. Even when documentation is submitted, BIA has no time frames for reviewing it in order to provide technical assistance, nor is there any schedule for the initiation of active consideration. As a result, only 55 petitions have reached the stage where they are complete and ready for active consideration. Once active consideration begins, the regulations do establish timelines that, if met, would result in a final decision in approximately 2 years. However, these timelines for processing petitions are routinely extended because of BIA resource constraints and at the request of petitioners and third parties. BIA has completed active consideration for only 32 of the 55 petitions—and only 12 of those 32 were completed in 2 years or less. Of the remaining 23 petitions, 13 are currently under active consideration and 10 are waiting. All but 2 of the 13 currently active petitions have already been active for more than 2 years—2 of them longer than 10 years. Of the 10 petitions awaiting active consideration, more than half have been waiting for over 5 years.

Without an effective schedule for the process from beginning to end, it will become increasingly difficult for BIA to complete its assigned duties in evaluating petitions in a timely manner. While timelines have been extended for many reasons, including BIA resource constraints and requests by petitioners and third parties (upon showing good cause), BIA has no mechanism to balance the need for a thorough review of a petition with the need to complete the decision process. The decision process lacks effective timelines that create a sense of urgency to offset the desire to consider all information from all interested parties. BIA has argued that it cannot guarantee timelines because it cannot predict future workload or the behavior of petitioners and third parties. However, these decisions may be taken out of BIA's hands as petitioners, frustrated by the length of time it takes to process petitions, successfully gain court intervention that establishes scheduled timelines. At least one petitioner filed a lawsuit in federal court just to maintain its place in line. While each petition differs, BIA may look to the model offered in one lawsuit in which all parties—petitioner, third parties, and BIA—agreed to a compromise schedule encouraged and endorsed by the court. On a broader level, BIA recently dropped one mechanism for creating a sense of urgency.
In fiscal year 2000, BIA dropped from its annual performance plan its long-term goal of reducing the number of petitions actively being considered because the addition of new petitions would make this goal impossible to achieve. The Bureau did not replace it with another, more realistic goal, such as reducing the number of petitions in ready status or reducing the average time needed to process a petition once it is placed in active status.

As third parties become more active in the recognition process, procedures for responding to their increased interest have not kept pace. Once BIA provides interested third parties the report summarizing the evidence, reasoning, and analysis behind a proposed finding, the parties have 180 days to submit arguments and evidence to rebut or support the proposed finding. However, based on the number of FOIA requests that BIA has received regarding recognition petitions, it appears that many parties believe this amount of time is insufficient. Third parties told us they wanted more detailed information earlier in the process so that they could fully understand a petition and effectively comment on its merits. However, there are no procedures for regularly providing third parties more detailed information. For example, while third parties are allowed to comment on the merits of a petition prior to a proposed finding, there is no mechanism to provide any information to them before the proposed finding. In contrast, petitioners are provided an opportunity to respond to any substantive comment received prior to the proposed finding. As a result, third parties are making FOIA requests for information on petitions much earlier in the process and often more than once in an attempt to obtain the latest documentation submitted. BIA has no procedures for efficiently responding to FOIA requests. Staff members hired as historians, genealogists, and anthropologists are pressed into service to copy the voluminous records of petitions in order to respond to FOIA requests. In addition, much of the information, particularly the information related to membership lists and the demonstration of descent, involves sensitive information subject to the protections of the Privacy Act. Therefore, all information must be reviewed and redacted, as appropriate, to ensure that sensitive information is not released. While additional resources to handle FOIA requests may help, improved procedures that address the elevated interest of third parties could reduce the multiple FOIA requests that third parties view as their only means of meaningful participation in the process.

Although the regulation-based recognition process was never intended to be the only way groups could receive federal recognition, it was intended to provide a clear, uniform, and objective approach for the Department of the Interior that established specific criteria and a process for evaluating groups seeking federal recognition. It is also the only avenue to federal recognition that has established criteria and a public process for determining whether groups meet the criteria. However, weaknesses in the process create uncertainty about the basis for recognition decisions, and the amount of time it takes to reach those decisions impedes the process from fulfilling its promise as a uniform approach to tribal recognition. Questions about the level of evidence required to meet the criteria and the basis for decisions reached will continue without more transparent guidance.
In addition, the increasing amount of time involved in the process will continue to frustrate petitioners and third parties who have a great deal at stake in resolving tribal recognition cases. Without improvements that focus on fixing these problems, confidence in the regulatory process as an objective and efficient approach will erode. As a result, parties involved in tribal recognition may look outside the regulatory process to the Congress or the courts to resolve recognition issues, which has the potential to undermine the entire regulatory process. The end result could be that the resolution of tribal recognition cases will have less to do with the attributes and qualities of a group as an independent political entity deserving of a government-to-government relationship with the United States and more to do with the resources that petitioners and third parties can marshal to develop a successful political and legal strategy.

To ensure more predictable and timely tribal recognition decisions, we recommend that the Secretary of the Interior direct BIA to (1) provide a clearer understanding of the basis used in recognition decisions by developing and using transparent guidelines that help interpret key aspects of the criteria and supporting evidence used in federal recognition decisions and (2) develop a strategy that identifies how to improve the responsiveness of the process for federal recognition. This strategy should include a systematic assessment of the resources available and needed that leads to the development of a budget commensurate with the workload.

We provided the Department of the Interior with a draft of this report. The Department generally agreed with our findings and recommendations and provided a plan for implementing our recommendations. These comments and the plan are reprinted in appendix IV. The Department also provided us with technical comments on the draft, and we made corrections where appropriate. We conducted our work from October 2000 through September 2001 in accordance with generally accepted government auditing standards. Appendix V explains our methodology in detail.

We are sending copies of this report to the Secretary of the Interior, the Assistant Secretary-Indian Affairs, and interested congressional committees. We will make copies available to others on request. If you or your staff have any questions about this report, please call me or Mark Gaffigan at (202) 512-3841. Key contributors are listed in appendix VI.

The United States has recognized Indian tribes under a variety of circumstances. There are 556 tribes on the Bureau of Indian Affairs' (BIA) most recent list of recognized tribes, published in March 2000. Since then, another five tribes have been recognized, for a total of 561 federally recognized tribes. Although BIA published its first list of recognized tribes only in 1979, the federal government has "recognized" tribes since colonial times, though the term came into use much later. In early American history, the government acknowledged such relationships through treaties and agreements with tribal governments. Recognition means that a tribe is formally acknowledged as a sovereign entity with a government-to-government relationship with the United States. The basic concept underlying Indian sovereignty is that it is not granted by the Congress but rather is an inherent status of the tribe that has never been lost or extinguished.
Although all recognized tribes have the same sovereignty and political relationship with the United States regardless of the means by which they were recognized, why and how individual tribes came to be on the list varies significantly. About 92 percent of the 561 currently recognized tribes either were part of the federal effort to reorganize and strengthen tribal governments in the 1930s or were part of a group of Alaskan tribes that were determined to have existing governmental relations with the United States when BIA's first list of recognized tribes appeared in 1979. The remaining 8 percent—47 tribes—were individually recognized between 1960 and the present by the Congress or the Department of the Interior. Of these, the Congress recognized 16 tribes and the Department of the Interior recognized 31 tribes. Of the 31 tribes that the Department of the Interior recognized, 14 were recognized through the BIA regulatory process established in 1978, 10 through administrative decisions before the regulatory process was established, and 7 through administrative decisions after the regulatory process was established and outside of the process.

There are 292 tribes on the current list of recognized tribes that can trace their federal recognition at least back to the era of the Indian Reorganization Act of 1934 (IRA) and related laws. These laws helped define and create the tribal governments that exist today. Tribal governments had been severely weakened by earlier federal Indian policy. In 1830, the federal government formally established the removal policy of exchanging federal lands west of the Mississippi for lands held by Indian tribes in the east and eventually developed a system of reservations to house them. In the ensuing dispersion, many tribes were splintered among two or more reservations or placed with other tribes on a single reservation. Then, beginning in the 1880s, federal Indian policy shifted to emphasize the assimilation of Indians into mainstream cultures by dividing reservation land into individual allotments, terminating historical tribal governments, and suppressing Indian customs and tribal laws. In the 1920s, federal Indian policy shifted once more, this time away from isolationism and assimilation and toward tribal self-governance, culminating in IRA.

IRA established a process to form stronger tribal governments and terminated the federal policy of breaking up reservations. Tribes on reservations were granted authority to reorganize their governments and adopt a constitution, and groups of tribes residing on the same reservation could reorganize into a single tribe by adopting a constitution. The act, however, does not apply to any reservation where the majority of adult Indians, in a special election called by the Secretary of the Interior, voted against it. In calling these elections, the Secretary of the Interior made determinations that, in effect, recognized a particular group of Indians as a tribe. In making these determinations, the Secretary considered whether the group had existing treaty relations with the United States or had been designated a tribe by an act of the Congress or an executive order; had been treated as having collective rights in tribal lands or funds, even though not expressly designated as a tribe; had been treated as a tribe by other tribes; and had exercised political authority over its members by a tribal council or other form of government.
The Secretary also considered factors of lesser importance, such as the existence of special appropriation items for the group and the social solidarity of the group. In addition to these tribes, known as historic tribes, IRA allowed Indians without a common tribal affiliation to organize into tribes. Indian residents of a reservation at the time the act was passed could organize as a tribe by adopting a constitution. Also, groups of Indians who were not residents of a reservation yet whose members were one-half or more Indian blood were permitted to organize under the act if the Secretary of the Interior established a reservation for them.

During the 1950s and 1960s, federal Indian policy briefly reverted to assimilation. As a result of legislation during this time, the political relationship with some tribes was terminated. Termination by the Congress, however, did not end the tribes' existence, but only the U.S. government's relationship with them. While the Congress and federal courts restored federal recognition to 37 of these terminated tribes—the most recent in December 2000—relations with many other terminated tribes were not restored. Because the Congress terminated these tribes, the tribes are not eligible to be recognized through the regulatory process.

The names of 222 Alaskan tribes now appear on BIA's current list of recognized tribes. These tribes were determined to have governmental relations with the United States at the time the first list was published in 1979. However, they were not included in the first list because they had not been completely identified, and their status remained uncertain until 1993. According to one Department official involved in developing the first list, Alaskan tribes were omitted because of errors in the list and confusion over the political status of Alaskan tribes created by provisions of a 1936 amendment to IRA, which provided for most Alaskan tribes to be brought under the act. In 1993, the Department of the Interior's Office of the Solicitor issued a comprehensive opinion analyzing the status of Alaskan tribes and determined that they were tribes in the same sense as tribes in the contiguous 48 states. BIA then identified 222 Alaskan tribes and included them on the list of recognized tribes published in October 1993.

The remaining 47 tribes have been individually recognized since 1960 (see table 3 at the end of this appendix). The Congress has recognized 16 of these tribes through legislation. Although the Congress's power to recognize a group as a tribe is not unlimited, it is loosely defined. The Supreme Court ruled in United States v. Sandoval that the Congress may not arbitrarily recognize a group or a community as a tribe. However, the only practical limitations upon congressional decisions as to tribal existence are the broad requirements that (1) the group have some ancestors who lived in what is now the United States before discovery by Europeans and (2) the group be a "people distinct from others." In some instances, the Congress recognized tribes as part of land claim settlements in New England. In other instances, groups that had been previously considered part of an already recognized tribe were recognized as a separate tribe. In still other cases, the Congress simply granted recognition.
According to Department officials, the underlying position of the administration has always been that the executive branch can correct mistakes and oversights regarding which groups the federal government recognizes as Indian tribes but cannot create new tribes. The essential prerequisite for recognition is the tribe's continuous existence as a political entity since a time when the federal government broadly acknowledged a political relationship with all Indian tribes. The regulatory process was established to recognize tribes whose relationship with the United States had either lapsed or never been established. Tribes recognized through the regulatory process had to provide evidence that they satisfied the seven criteria, including that the tribe had continually existed from historical times to the present and that its members descended from a historic tribe.

The Department of the Interior has individually recognized a total of 31 tribes. Of these, 14 tribes were recognized through the BIA regulatory process and 17 outside of the regulatory process through administrative decisions—10 before the regulatory process was established and 7 after it was established. Of the seven tribes recognized outside the regulatory process established in 1978, one had its continuous existence as a federally recognized tribe substantiated just months after the regulatory process was established; one was established as a "half-blood community" as defined under provisions of IRA; one was reclassified as an independent tribe after previously having been dealt with as part of another recognized tribe; and one was recognized because land had been taken in trust on its behalf, indicating that it had a political relationship with the United States. In the three other instances, the Assistant Secretary recently "reaffirmed" the tribes' federal recognition, ruling that their historical political relationship with the United States had not lapsed and citing a BIA administrative error that caused the names of the tribes not to be placed on the list of recognized tribes.

Members of the BIA staff responsible for implementing the regulatory process for recognizing tribes took issue with the Assistant Secretary's three recent "reaffirmations" because of factual concerns about the groups that were to be recognized and because the decisions were recognitions outside of the regulatory process. In particular, they thought that the groups should have gone through the regulatory process because the regulations provided for a review of groups that had previously been unambiguously recognized but whose present status was uncertain.

The regulatory process that BIA uses to determine a group's eligibility for tribal recognition is published in the Federal Register. The process, which is based on regulations originally promulgated in 1978 and revised in 1994, is summarized in table 4. BIA has received 250 petitions through its regulatory process. Forty of these petitions were requests for recognition made before the inception of the process in October 1978. As shown in figure 2, the number of petitions received per year has generally increased since the passage of the Indian Gaming Regulatory Act, which regulates Indian gambling, in 1988. BIA classifies petitions for tribal recognition in three categories: not ready for evaluation (because of incomplete documentation), ready for evaluation, and resolved.
Of the 250 petitions that BIA has received, 175 are not ready to be evaluated, and of these, at least 60 are more than 10 years old. Another 20 have been resolved outside the regulatory process, either through congressional or Department of the Interior action or through the action of the petitioner—such as withdrawing from the process or merging with another petitioner. Of the remaining 55 petitions, 23 are ready to be evaluated or are actively being evaluated, and 32 have completed the process, although the final outcome of 3 of these petitions is pending. The Interior Board of Indian Appeals has sent two petitions back to the Secretary to determine whether they should be reconsidered, and a final determination is pending for the third. The status of all petitions is summarized in table 5.

The Indian gambling industry, a relatively new phenomenon, traces its genesis back to the late 1970s, when a number of Indian tribes established bingo operations as a supplemental means of funding tribal operations. At about the same time, a number of state governments also began exploring the potential for increasing state revenues through state-sponsored gambling. By the mid-1980s, a number of states had authorized charitable gambling and some sponsored state-run lotteries. However, tribal and state governments soon found themselves at odds over whether tribal governments had the authority to conduct gambling independently of state regulation. Although many lower courts upheld the tribal position, the matter was not resolved until 1987, when the U.S. Supreme Court issued its decision in California v. Cabazon Band of Mission Indians. That decision confirmed the authority of tribes to establish gambling operations on their reservations outside state regulation—provided the affected state permitted some type of gambling. At about the same time the Cabazon case was being litigated, there was a widespread increase in Indian bingo halls in many parts of the country. In response to state concerns that Indian gambling would present an attractive target for organized crime, the Congress took up the issue and, in 1988, passed the Indian Gaming Regulatory Act (IGRA) as a compromise between Indian and state interests. Since IGRA, Indian gambling has grown to include 193 tribes with over 300 facilities that generated close to $10 billion in revenue.

With the passage of IGRA in 1988, the Congress established the jurisdictional framework that would govern Indian gambling. IGRA established a comprehensive system for regulating gambling activities on Indian lands, defining the following three classes of gambling to be regulated by a combination of tribal governments, state governments, BIA, and the National Indian Gaming Commission (NIGC). Class I gambling consists of social gambling for minimal prizes or ceremonial gambling. It is regulated solely by the tribe, and no financial reporting to other authorities is required. Class II gambling consists of bingo and bingo-like games, pull-tabs, and punch boards. A tribe may conduct, license, and regulate Class II gambling if (1) the state in which the tribe is located permits such gambling for any purpose by a person or organization and (2) the tribal governing body adopts a gambling ordinance that is approved by NIGC. Class III gambling consists of all other forms of gambling, including casino games, slot machines, and pari-mutuel betting, and is often referred to as full-scale casino-style gambling.
Class III games are regulated as described below. Class III gambling is allowed only in states that permit similar types of gambling. However, Class III gambling has been broadly defined under IGRA. For example, a state's allowance of charitable Las Vegas nights and state-run lotteries has sufficed for tribes to operate casinos. IGRA also requires that states and tribes negotiate a tribal-state compact to balance the interests of both the state and the tribe. The tribal-state compact is an agreement that may include provisions concerning standards for the operation and maintenance of the gambling facility, the application of laws and regulations of the tribe or state that are related to the licensing and regulation of the gambling activity, and the assessment by the state of amounts necessary to defray the costs of regulating the gambling activity. The Secretary of the Interior must approve any tribal-state compact and has delegated this authority to the Assistant Secretary-Indian Affairs. As of July 6, 2000, 24 states had negotiated 267 compacts with 212 Indian tribes. Tribes may have compacts with more than one state, and they may also have more than one compact for different types of games. Thirty-seven tribes had compacts without any operating gambling facilities.

IGRA also authorizes NIGC to oversee and regulate Indian gambling activities. NIGC's mission is to provide fair and consistent enforcement of IGRA requirements to ensure the integrity of Indian gambling operations. Among its responsibilities, NIGC reviews tribal investigations of key gambling employees and management officials and approves tribal gambling ordinances. Additionally, all Class II and Class III gambling operations are required to submit copies of their annual financial statement audits to NIGC. Although the Congress intended regulatory issues to be addressed in tribal-state compacts, it left a number of key functions in federal hands, including approval authority over compacts, management contracts, and tribal ordinances.

IGRA specifies that the tribal ordinance concerning the conduct of Class II or Class III gambling on Indian lands within the tribe's jurisdiction must provide that the net revenues from any tribal gambling are not to be used for purposes other than to (1) fund tribal government operations or programs, (2) provide for the general welfare of the Indian tribe and its members, (3) promote tribal economic development, (4) donate to charitable organizations, or (5) help fund operations of local government agencies. A tribe may distribute a portion of its net revenues directly to tribal members, provided that the tribe has a revenue allocation plan approved by BIA. This plan should describe how the tribe intends to allocate net revenues among various governmental, educational, and charitable projects, including direct payments to tribal members.

Gambling revenues generated by federally recognized tribes and their federally chartered corporations are not subject to federal income tax. The Internal Revenue Service (IRS) has determined that tribes are political agencies that the Congress did not intend to include within the meaning of the income tax provisions of the Internal Revenue Code. Any income earned by a tribe is not subject to federal income tax, regardless of whether the business activity takes place inside or outside of Indian-owned lands. On the other hand, IRS has found that individual tribal members, like all U.S.
citizens, must pay federal income tax unless a specific exemption can be found in a treaty or statute. In some cases, an individual tribal member may receive general welfare payments from the tribe. Although amounts paid for general welfare may not be taxable, payments made pro rata to all tribal members are evidence that the payments are not based on need and, thus, probably will not qualify for the general welfare exclusion, according to IRS. IGRA provides that net revenues from gambling may be used to make per capita payments to members of the Indian tribe, but only if the tribe has prepared a revenue allocation plan to distribute revenues to uses authorized by IGRA. The Secretary of the Interior must approve the plan as adequate, particularly with respect to funding tribal government operations and promoting tribal economic development. IGRA also requires the protection and preservation of the interests of minors who are entitled to receive any of the payments. Because the payments are per capita distributions of gambling proceeds, they are generally subject to taxation.

Since the passage of IGRA in 1988, Indian gambling revenues have grown nearly 60-fold—from $171 million in 1988 to $9.8 billion in 1999 (see fig. 3). However, a few tribes generated most of the revenues. Although 193 tribes have Class II or Class III gambling facilities, NIGC reports that just 27 tribes generated more than $6.4 billion, or more than 65 percent, of the total $9.8 billion in revenues that tribes reported in 1999. Although Indian gambling is a relatively new phenomenon, most of the 193 tribes with Class II or Class III gambling facilities can trace their existence back to the era of the Indian Reorganization Act of 1934. (See app. I for additional information on how tribes were recognized.) Almost all of the remaining tribes with Class II or Class III facilities were individually recognized after 1960. Two tribes were recognized as part of a large group of Alaskan tribes fully identified in 1993 (see table 6).

As of May 15, 2001, there were 313 Indian gambling facilities in operation. Of this number, 234 facilities conducted some form of Class III gambling, often in conjunction with Class II gambling. The remaining 79 facilities conducted only Class II gambling. Figure 4 shows the distribution of facilities with Class III gambling by state. As shown in figure 5, Indian gambling has become a nationwide business, with operations in 23 states and the heaviest concentration in the West and Midwest. In 1999, the Indian gambling industry generated $9.8 billion, while Nevada's and Atlantic City's casinos reported revenues of about $9 billion and $4.2 billion, respectively, for the same period.

IGRA requires states to negotiate in good faith with Indian tribes when forming gambling compacts. In cases where a tribe believes that the state has not negotiated in good faith, IGRA authorizes the tribe to bring suit in federal district court. If the court finds that the state has indeed failed to negotiate in good faith, it may order the state to conclude a compact within 60 days. However, in a case decided by the U.S. Supreme Court in March 1996, Seminole Tribe of Florida v. Florida, the Court held that the Congress did not have the constitutional authority to make the state subject to suit in federal court and that a state could assert an Eleventh Amendment immunity defense to avoid a lawsuit brought by the tribe.
The Seminole Tribe decision did not address the issue of whether a state could effectively prevent casino-type gambling within its borders by refusing to negotiate in good faith and asserting sovereign immunity if the tribe sues. Also, the Supreme Court expressed no opinion on a substitute remedy for a tribe bringing suit. To prevent stalemates, the Department of the Interior issued a regulation on April 12, 1999, for dealing with tribal-state compacts when states and tribes cannot reach an agreement. The regulation prescribes alternative procedures to establish Class III gambling when a state does not waive its Eleventh Amendment immunity from a lawsuit. The regulation authorizes the tribe to submit a proposal to the Department to establish gambling procedures. The Department must notify the state of the tribe's request and solicit the state's comments on the tribe's proposed procedures, including any comments on the proposed scope of gambling. The state is invited to submit alternative proposed procedures. Based on its review of the proposed submissions, the Assistant Secretary-Indian Affairs may approve the tribe's proposal or convene an informal conference with the state and the tribe to resolve any areas of disagreement. The states of Alabama, Florida, and Kansas have filed suit challenging the new regulation. As of September 2001, these cases were pending in federal court.

In this report, we describe the significance of federal tribal recognition, including information on Indian gambling; evaluate the BIA regulatory recognition process; and provide a historical overview of how tribes have been recognized. In describing the significance of federally recognizing Indian tribes, we spoke with and obtained documents from BIA, the Department of Health and Human Services' Indian Health Service, and the National Indian Gaming Commission. We also analyzed pertinent legislation and other documents. Because the revenue that Indian tribes collect from gambling is proprietary information, NIGC did not provide us with any tribe-specific information. Instead, it summarized the revenue information before providing it to us.

In evaluating the BIA regulatory process, we spoke with BIA and other Department of the Interior officials familiar with the process, including the former Assistant Secretary-Indian Affairs, the former Deputy Assistant Secretary-Indian Affairs, representatives of the Department's Office of the Solicitor, and officials from BIA's Branch of Acknowledgment and Research, who are responsible for implementing the regulatory process. We also analyzed BIA records on how it processes petitions for recognition. We did not, however, evaluate the merits of individual tribes' petitions or the decisions regarding those petitions. We also spoke with tribal leaders who are current petitioners or who have completed the process, experts in Indian law and the recognition process, and representatives of state and local governments affected by tribal recognition to obtain their views of the recognition process.

In determining how tribes became federally recognized, we analyzed BIA and Department of the Interior records regarding the implementation of the Indian Reorganization Act of 1934 to identify tribes recognized at that point in time or created by that act during the early years of its implementation. We also analyzed other BIA and Department of the Interior records, as well as legislation and related documentation, to determine how other tribes became recognized.
In some instances, we spoke with BIA and Department officials who played a direct role in a tribe's recognition. We performed our work from October 2000 through September 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Charles T. Egan, Robert Crystal, Jeffery Malcolm, and John Yakaitis made key contributions to this report.
The Indian gambling industry has flourished since the enactment of the Indian Gaming Regulatory Act in 1988. Nearly 200 tribes generated about $10 billion in annual revenues in 1999 from their gambling operations. Because of weaknesses in the federal recognition process, the basis for tribal recognition decisions by the Bureau of Indian Affairs (BIA) is not always clear, and the length of time involved can be substantial. Despite an increasing workload, the number of BIA staff assigned to evaluate the petitions has fallen by about 35 percent since 1993. Just as important, the process lacks effective procedures for promptly addressing the increased workload. In particular, the process does not impose effective deadlines that create a sense of urgency, and procedures for providing information to interested third parties are ineffective. GAO summarized this report in testimony before Congress; see Indian Issues: More Consistent and Timely Tribal Recognition Process Needed, by Barry T. Hill, Director for Natural Resources and Environment, before the Subcommittee on Energy Policy, Natural Resources and Regulatory Affairs, House Committee on Government Reform. GAO-01-415T, Feb. 7 (nine pages).
As part of its ongoing business systems modernization program, and consistent with our past recommendation, DOD has created an inventory of its existing and new business system investments. As of October 2002, DOD reported that this inventory consisted of 1,731 systems and system acquisition projects across DOD's functional areas. In particular, DOD reported that it had 374 separate systems to support its civilian and military personnel function, 335 systems to perform finance and accounting functions, and 221 systems that support inventory management. Table 1 presents the composition of DOD business systems by functional area.

As we have previously reported, this systems environment is not the result of a systematic and coordinated departmentwide strategy, but rather is the product of unrelated, stovepiped initiatives to support a set of business operations that are nonstandard and duplicative across DOD components. Consequently, DOD's amalgamation of systems is characterized by (1) multiple systems performing the same tasks; (2) the same data stored in multiple systems; (3) manual data entry and reentry into multiple systems; and (4) extensive data translations and interfaces, each of which increases costs and limits data integrity. Further, as we have reported, these systems do not produce reliable financial data to support managerial decisionmaking and ensure accountability.

To the department's credit, it recognizes the need to eliminate as many systems as possible and to integrate and standardize those that remain. In fact, three of the four Defense Finance and Accounting Service (DFAS) projects that are the subject of the report being released today were collectively intended to reduce or eliminate all or part of 17 different systems that perform similar functions. For example, the Defense Procurement Payment System (DPPS) was intended to consolidate eight contract and vendor pay systems; the Defense Departmental Reporting System (DDRS) is intended to reduce the number of departmental financial reporting systems from seven to one; and the Defense Standard Disbursing System (DSDS) is intended to eliminate four different disbursing systems. The fourth project, the DFAS Corporate Database/Corporate Warehouse (DCD/DCW), is intended to serve as the single DFAS data store, meaning that it would contain all DOD financial information required by DFAS and be the central point for all shared data within DFAS.

For fiscal year 2003, DOD has requested approximately $26 billion in IT funding to support a wide range of military operations and business functions. This $26 billion is spread across the military services and defense agencies—each receiving its own allocation of IT funding. The $26 billion supports three categories of IT—business systems, business systems infrastructure, and national security systems—the first two of which comprise the 1,731 new and existing business systems projects cited earlier. At last year's hearing, DOD was asked about the makeup of its $26 billion in IT funding, including what amounts relate to business systems and related infrastructure; at that time, answers were unavailable. As we note in the report being released today and as shown in figure 1, approximately $18 billion—about $5.2 billion for business systems and $12.8 billion for business systems infrastructure—relates to the operation, maintenance, and modernization of the 1,731 business systems that DOD reported having in October 2002.
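The arithmetic behind the budget breakdown is straightforward; the short sketch below simply recomputes the figures cited above. The variable names are ours, and treating the remainder as national security systems follows from the three-category breakdown described in this testimony.

```python
# Composition of DOD's fiscal year 2003 IT budget request, as cited above
# (amounts in billions of dollars).
total_it_request = 26.0
business_systems = 5.2
business_systems_infrastructure = 12.8

business_related = business_systems + business_systems_infrastructure
national_security_systems = total_it_request - business_related

print(f"business-related IT: ${business_related:.1f} billion")            # ~$18 billion
print(f"share of total request: {business_related / total_it_request:.0%}")  # ~69%
print(f"remainder (national security systems): ${national_security_systems:.1f} billion")
```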
Figure 2 provides the allocation of DOD's business systems modernization budget for fiscal year 2003 by component. However, recognizing the need to modernize and making funds available are not sufficient for improving DOD's current systems environment. Our research of successful modernization programs in public and private-sector organizations, as well as our reviews of these programs in various federal agencies, has identified a number of IT disciplines that are necessary for successful modernization. These disciplines include having and implementing (1) an enterprise architecture to guide and constrain systems investments; (2) an investment management process to ensure that systems are invested in incrementally, are aligned with the enterprise architecture, and are justified on the basis of costs, benefits, and risks; and (3) a project oversight process to ensure that project commitments are being met and that needed corrective action is taken. These institutionalized disciplines have long been missing at DOD, and their absence is a primary reason for the systems environment described above.

The future of DOD's business systems modernization is fraught with risk, in part because of longstanding and pervasive modernization management weaknesses. As we have reported, these weaknesses include (1) the lack of an enterprise architecture; (2) inadequate institutional and project-level investment management processes; and (3) limited oversight of projects' delivery of promised system capabilities and benefits on time and within budget. To DOD's credit, it recognizes the need to address each of these weaknesses and has committed to doing so.

Effectively managing a large and complex endeavor requires, among other things, a well-defined and enforced blueprint for operational and technological change, commonly referred to as an enterprise architecture. Developing, maintaining, and using architectures is a leading practice in engineering both individual systems and entire enterprises. Governmentwide requirements for having and using architectures to guide and constrain IT investment decisionmaking are also addressed in federal law and guidance. Our experience has shown that attempting a major systems modernization program without a complete and enforceable enterprise architecture results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, do not ensure basic financial accountability, and do not effectively optimize mission performance.

In May 2001, we reported that DOD had neither an enterprise architecture for its financial and financial-related business operations nor the management structure, processes, and controls in place to effectively develop and implement one. Further, we stated that DOD's plans to continue spending billions of dollars on new and modified systems independently of one another, and outside the context of a departmental modernization blueprint, would result in more systems that are duplicative, noninteroperable, and unnecessarily costly to maintain and interface; moreover, they would not address longstanding financial management problems. To assist the department, we provided a set of recommendations on how DOD should approach developing its enterprise architecture. In September 2002, the Secretary of Defense designated improving financial management operations (including such business areas as logistics, acquisition, and personnel management) as one of the department's top 10 priorities.
In addition, the Secretary established a program to develop an enterprise architecture, and DOD plans to have the architecture developed by May 2003. Subsequently, the National Defense Authorization Act for Fiscal Year 2003 directed DOD to develop, by May 1, 2003, an enterprise architecture, including a transition plan for its implementation. The act also defined the scope and content of the enterprise architecture and directed us to submit to congressional defense committees an assessment of DOD’s actions to develop the architecture and transition plan no later than 60 days after their approval. Finally, the act prohibited DOD from obligating more than $1 million on any financial systems improvement until the DOD comptroller makes a determination regarding the necessity or suitability of such an investment. In our February 2003 report on DOD enterprise architecture efforts, we stated our support for the Secretary’s decision to develop the architecture and recognized that DOD’s architecture plans were challenging and ambitious. However, we also stated that despite taking a number of positive steps toward its architecture goals, such as establishing a program office responsible for managing the enterprise architecture, the department had yet to implement several key recommendations and certain leading practices for developing and implementing architectures. For example, DOD had yet to (1) establish the requisite architecture development governance structure needed to ensure that ownership of and accountability for the architecture is vested with senior leaders across the department; (2) develop and implement a strategy to effectively communicate the purpose and scope, approach to, and roles and responsibilities of stakeholders in developing the enterprise architecture; and (3) fully define and implement an independent quality assurance process. We concluded that not implementing these recommendations and practices increased DOD’s risk of developing an architecture that would be limited in scope, would be resisted by those responsible for implementing it, and would not support effective systems modernization. To assist the department, we made additional recommendations with which DOD agreed. We plan to continue reviewing DOD’s efforts to develop and implement this architecture pursuant to our mandate under the fiscal year 2003 defense authorization act. The Clinger-Cohen Act, federal guidance, and recognized best practices provide a framework for organizations to follow to effectively manage their IT investments. Collectively, this framework addresses IT investment management at the institutional or corporate level, as well as the individual project or system level. The former involves having a single, corporate approach governing how the organization’s portfolio of IT investments is selected, controlled, and evaluated across its various components, including assuring that each investment is aligned with the organization’s enterprise architecture. The latter involves having a system/project-specific investment approach that provides for making investment decisions incrementally and ensuring that these decisions are economically justified on the basis of current and credible analyses. Corporate investment management approach: DOD has yet to establish and implement an effective departmentwide approach to managing its business systems investment portfolio. 
In May 2001, we reported that DOD did not have a departmentwide IT investment management process through which to assure that its enterprise architecture, once developed, could be effectively implemented. We therefore recommended that DOD establish a system investment selection and control process that treats compliance with the architecture as an explicit condition to meet at key decision points in the system’s life cycle and that can be waived only if justified by compelling written analysis. Subsequently, in February 2003, we reported that DOD had not yet established the necessary departmental investment management structure and process controls needed to adequately align ongoing investments with its architectural goals and direction. Instead, the department continued to allow its component organizations to make their own parochial investment decisions, following different approaches and criteria. In particular, DOD had not established and applied common investment criteria to its ongoing IT system projects using a hierarchy of investment review and funding decisionmaking bodies, each composed of representatives from across the department. DOD also had not yet conducted a comprehensive review of its ongoing IT investments to ensure that they were consistent with its architecture development efforts. We concluded that until it takes these steps, DOD will likely continue to lack effective control over the billions of dollars it is currently spending on IT projects. To address this, we recommended that DOD create a departmentwide investment review board with the responsibility and authority to (1) select and control all DOD financial management investments and (2) ensure that its investment decisions treat compliance with the financial management enterprise architecture as an explicit condition for investment approval that can be waived only if justified by a compelling written analysis. DOD concurred with our recommendations and is taking steps to address them. Project/system-specific investment management: DOD has yet to ensure that its investments in all individual systems or projects are economically justified and that it is investing in each incrementally. In particular, none of the four DFAS projects addressed in the report being issued today had current and reliable economic justifications to demonstrate that they would produce value commensurate with the costs and risks being incurred. For example, we found that although DCD was initiated to contain all DOD financial data required by DFAS systems, planned DCD capabilities had since been drastically reduced. Despite this, DFAS planned to continue investing in DCD/DCW without having an economic justification showing whether its revised plans were cost effective. Moreover, DOD planned to continue investing in the three other projects even though none had current economic analyses that reflected material changes to costs, schedules, and/or expected benefits since the projects’ inception. For example, the economic analysis for DSDS had not been updated to reflect material changes in the project, such as changing the date for full operational capability from February 2003 to December 2005—a schedule change of almost 3 years that affected delivery of promised benefits. Similarly, the DPPS economic analysis had not been updated to recognize an estimated cost increase of $274 million and schedule slip of almost 4 years. 
After recently reviewing this project’s change in circumstances, the DOD Comptroller terminated DPPS after 7 years of effort and an investment of over $126 million, citing poor program performance and increasing costs. Table 2 highlights the four projects’ estimated cost increases and schedule delays. Our work on other DOD projects has shown a similar absence of current and reliable economic justification for further system investment. For example, we reported that DOD’s ongoing and planned investment in its Standard Procurement System (SPS) was based on an outdated and unreliable economic analysis, and even this flawed analysis did not show that the system was cost beneficial, as defined. As a result, we recommended that investment in future releases or major enhancements to the system be made conditional on the department’s first demonstrating that the system was producing benefits that exceeded costs and that future investment decisions be made on the basis of complete and reliable economic justifications. DOD is currently in the process of addressing this recommendation. Beyond not having current and reliable economic analyses for its projects, DOD has yet to adopt an incremental approach to economically justifying and investing in all system projects. For example, we have reported that although DOD had divided its multiyear, billion-dollar SPS project into a series of incremental releases, it had not treated each of these increments as a separate investment decision. Such an incremental approach to system investment helps to prevent discovering too late that a given project is not cost beneficial. However, rather than adopt an incremental approach to SPS investment management, the department chose to treat investment in SPS as one, monolithic investment decision, justified by a single, all-or-nothing economic analysis. This approach to investing in large systems, like SPS, has proven ineffective in other federal agencies, resulting in huge sums being invested in systems that do not provide commensurate value, and thus has been abandoned by successful organizations. We also recently reported that while DOD’s Composite Health Care System II had been structured into a series of seven increments (releases), the department had not treated the releases to date as separate investment decisions supported by incremental economic justification. In response to our recommendations, DOD committed to changing its strategy for future releases to include economically justifying each release before investing in it and verifying each release’s benefits and costs after deployment. The Clinger-Cohen Act of 1996 and federal guidance emphasize the need to ensure that IT projects are being implemented at acceptable costs and within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance (that is, that projects are meeting the cost, schedule, and performance commitments upon which their approval was justified). They also emphasize the need to regularly determine each project’s progress toward expectations and commitments and to take appropriate action to address deviations. Our work on specific DOD projects has shown that such oversight does not always occur, a case in point being the four DFAS accounting system projects that are the subject of our report being released today. For these four projects, oversight responsibility was shared by the DOD Comptroller, DFAS, and the DOD chief information officer (CIO).
However, these oversight authorities have not ensured, in each case, that the requisite analytical basis for making informed investment decisions was prepared. Moreover, they have not regularly monitored system progress toward expectations so that timely action could have been taken to correct deviations, even though each case had experienced significant cost increases and schedule delays (see table 2). Their respective oversight activities are summarized below:

DOD Comptroller—Oversight responsibility for DFAS activities, including system investments, rests with the DOD Comptroller. However, DOD Comptroller officials were not only unaware of cost increases and schedule delays on these four projects but also told us that they do not review DFAS system investments to ensure that they are meeting cost, schedule, and performance commitments because this is DFAS’s responsibility.

DFAS—This DOD agency has established an investment committee to, among other things, oversee its system investments. However, the committee could not provide us with any evidence demonstrating meaningful oversight of these four projects, nor could it provide us with any guidance describing the committee’s role, responsibilities, and authorities, and how it oversees projects.

DOD CIO—Oversight of the department’s “major” IT projects, a category that includes two of the four DFAS projects (DCD/DCW and DPPS), is the responsibility of DOD’s CIO. However, this organization did not adequately fulfill this responsibility on either project because, according to DOD CIO officials, they have little practical authority in influencing component agency-funded IT projects.

Thus, the bad news is that these three oversight authorities have jointly permitted approximately $316 million to be spent on the four accounting system projects without knowing if material changes to the projects’ scopes, costs, benefits, and risks warranted continued investment. The good news is that the DOD Comptroller recently terminated one of the four (DPPS), thereby avoiding throwing good money after bad, and DOD has agreed to implement the recommendations contained in our report released today, which calls for DOD to demonstrate that the remaining three projects will produce benefits that exceed costs before further investing in each. Our work on other DOD projects has shown similar voids in oversight. For example, we reported that SPS’s full implementation date slipped by 3.5 years, with further delays expected, and the system’s life-cycle costs grew by 23 percent, from $3 billion to $3.7 billion. However, none of the oversight authorities responsible for this project, including the DOD CIO, had required that the economic analysis be updated to reflect these changes and thereby provide a basis for informed decisionmaking on the project’s future. To address this issue, we recommended, among other things, that the lines of oversight responsibility and accountability of the project be clarified and that further investment in SPS be limited until such investment could be justified. DOD has taken steps to address some of our recommendations. For example, it has clarified organizational accountability and responsibility for the program. However, much remains to be done before the department will be able to make informed, data-driven decisions about whether further investment in the system is justified.
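The oversight gaps described above come down, at bottom, to investment decisions made without a current comparison of expected benefits against costs. The sketch below is a generic illustration of the kind of incremental decision gate our recommendations call for; it does not depict any actual DOD or DFAS process, and the release names and dollar figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IncrementAnalysis:
    """Economic analysis for one increment of a system project (hypothetical)."""
    name: str
    estimated_cost: float      # millions of dollars
    estimated_benefit: float   # millions of dollars
    analysis_is_current: bool  # has the analysis been updated for material changes?

def approve_increment(analysis: IncrementAnalysis) -> bool:
    """Approve further investment only on a current, favorable economic analysis."""
    if not analysis.analysis_is_current:
        return False  # a stale analysis is no basis for an informed decision
    return analysis.estimated_benefit > analysis.estimated_cost

# Invented increments echoing the patterns described above: benefits that no
# longer exceed costs, and analyses never updated for material changes.
increments = [
    IncrementAnalysis("release 1", 40.0, 65.0, analysis_is_current=True),
    IncrementAnalysis("release 2", 90.0, 70.0, analysis_is_current=True),
    IncrementAnalysis("release 3", 55.0, 80.0, analysis_is_current=False),
]

for inc in increments:
    decision = "approve" if approve_increment(inc) else "withhold"
    print(f"{inc.name}: {decision}")
```

Under such a gate, an increment whose cost growth had never been reflected in its economic analysis would be held, not funded, until the analysis was brought current.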
We have made numerous recommendations to DOD that collectively provide a valuable roadmap for improvement as the department attempts to create the management infrastructure needed to effectively undertake a massive business systems modernization program. This collection of recommendations is not without precedent, as we have provided similar ones to other federal agencies, such as the Federal Aviation Administration, the Internal Revenue Service, and the former U.S. Customs Service, to aid them in building their respective capacities for managing modernization programs. In cases where these recommendations have been implemented properly, we have observed improved modernization management and accountability. Our framework for DOD provides for developing a well-defined and enforceable DOD-wide enterprise architecture to guide and constrain the department’s business system investments. It includes specific recommendations for accomplishing this successfully, such as creating an enterprise architecture executive committee whose members are singularly and collectively responsible and accountable for delivery and approval of the architecture, and establishing a proactive enterprise architecture marketing and communication program to facilitate stakeholder understanding, buy-in, and commitment to the architecture. Our recommendations also provide for establishing a DOD-wide investment decisionmaking structure that consists of a hierarchy of investment boards that are responsible for ensuring that projects meet defined threshold criteria and for reviewing and deciding on projects’ futures on the basis of a standard set of investment criteria, two of which are alignment with the enterprise architecture and return on investment. In addition, our recommendations include ensuring that return on investment is analytically supported by current and reliable economic analyses showing that benefits are commensurate with costs and risks, and that these analyses and associated investment decisions cover incremental parts of each system investment, rather than treating the system as one, all-or-nothing, monolithic pursuit. Further, our recommendations provide for clear and explicit lines of accountability for project oversight and for continuous monitoring and reporting of progress against commitments to ensure that promised system capabilities and benefits are being delivered on time and within budget.
The Department of Defense's (DOD) management of its business systems modernization program has been an area of longstanding concern to Congress and one that GAO has designated as high risk since 1995. Because of this concern, GAO was requested to testify on (1) DOD's current inventory of existing and new business systems and the amount of funding devoted to this inventory; (2) DOD's modernization management capabilities, including weaknesses and DOD's efforts to address them; and (3) GAO's collective recommendations for correcting these weaknesses and minimizing DOD's exposure to risk until they are corrected. In developing this testimony, GAO drew from its previously issued reports on DOD's business systems modernization efforts, including one released today on four key Defense Finance and Accounting Service (DFAS) projects. As of October 2002, DOD reported that its business systems environment consisted of 1,731 systems and system acquisition projects spanning about 18 functional areas. This environment is the product of unrelated, stovepiped initiatives supporting nonstandard, duplicative business operations across DOD components. For fiscal year 2003, about $18 billion of DOD's IT funding relates to operating, maintaining, and modernizing these nonintegrated systems. To DOD's credit, it recognizes the need to modernize and to eliminate as many of these systems as possible. The future of DOD's business systems modernization is fraught with risk because of longstanding and pervasive modernization weaknesses, three of which are discussed below. GAO's report on four DFAS systems highlights some of these weaknesses, and GAO's prior reports have identified the others. DOD has stated its commitment to addressing each and has efforts under way that are intended to do so. Lack of departmentwide enterprise architecture: DOD does not yet have an architecture, or blueprint, to guide and constrain its business system investments across the department. Nevertheless, DOD continues to spend billions of dollars on new and modified systems based on the parochial needs and strategic direction of its component organizations. This will continue to result in systems that are duplicative, are not integrated, are unnecessarily costly to maintain and interface, and will not adequately address longstanding financial management problems. Lack of effective investment management: DOD does not yet have an effective approach to consistently selecting and controlling its investments as a portfolio of competing departmental options and within the context of an enterprise architecture. DOD is also not ensuring that it invests in each system incrementally and on the basis of reliable economic justification. For example, for the four DFAS projects, DOD spent millions of dollars without knowing whether the projects would produce value commensurate with costs and risks. Thus far, this has resulted in the termination of one of the projects after about $126 million and 7 years of effort had been spent. Lack of effective oversight: DOD has not consistently overseen its system projects to ensure that they are delivering promised system capabilities and benefits on time and within budget. For example, for the four DFAS projects, oversight responsibility is shared by the DOD Comptroller, DFAS, and the DOD chief information officer.
However, these oversight authorities have largely allowed the four projects to proceed unabated, even though each was experiencing significant cost increases, schedule delays, and/or capability and scope reductions, and none was supported by adequate economic justification. As a result, DOD invested approximately $316 million in four projects that may not resolve the very financial management weaknesses that they were initiated to address.
With the passage of the Aviation and Transportation Security Act (ATSA) in November 2001, TSA assumed responsibility for civil aviation security from the Federal Aviation Administration and for passenger and checked baggage screening from air carriers. As part of this responsibility, TSA oversees security operations at the nation’s more than 400 commercial airports, including establishing requirements for passenger and checked baggage screening and ensuring the security of air cargo transported to, from, and within the United States. In addition, TSA has operational responsibility for conducting passenger and checked baggage screening at most airports, and has regulatory, or oversight, responsibility for air carriers that conduct air cargo screening. While TSA took over responsibility for passenger checkpoint and baggage screening, air carriers have continued to conduct passenger watch-list matching in accordance with TSA requirements, which includes the process of matching passenger information against federal watch-list data before flights depart. TSA is currently developing a program to take over this responsibility from air carriers for passengers on domestic flights, and plans to assume from U.S. Customs and Border Protection (CBP) the pre-departure name-matching function for passengers on international flights traveling to or from the United States. One of the most significant changes mandated by ATSA was the shift from the use of private-sector screeners to perform airport screening operations to the use of federal screeners (now referred to as TSOs). Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA established TSA and required it to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal screener workforce was put into place, as required, by November 2002. Passenger screening is a process by which personnel authorized by TSA inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item into a sterile area or onboard an aircraft. Passenger screening personnel must inspect individuals for prohibited items at designated screening locations. The four passenger screening functions are X-ray screening of property, walk-through metal detector screening of individuals, hand-wand or pat-down screening of individuals, and physical search of property and trace detection for explosives. Typically, passengers are only subjected to X-ray screening of their carry-on items and screening by the walk-through metal detector. Passengers whose carry-on baggage alarms the X-ray machine, who alarm the walk-through metal detector, or who are designated as selectees—that is, passengers selected by the Computer Assisted Passenger Pre-Screening System (CAPPS) or other TSA-approved processes for additional screening—are screened by hand-wand or pat-down and have their carry-on items either screened for explosives traces or physically searched. Checked baggage screening is a process by which authorized security screening personnel inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft.
Checked baggage screening is accomplished through the use of explosive detection systems or explosive trace detection systems, and through the use of approved alternative means, such as manual searches and canine teams, when the explosive detection or explosive trace detection systems are unavailable. The passenger and checked baggage screening systems are composed of three elements: the people (TSOs) responsible for conducting the screening of airline passengers and their carry-on items and checked baggage, the technology used during the screening process, and the procedures TSOs are to follow to conduct screening. Collectively, these elements help to determine the effectiveness and efficiency of passenger and checked baggage screening operations. Air cargo ranges in size from one pound to several tons, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, other dry goods, fresh cut flowers, fresh seafood, fresh produce, tropical fish, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices that allow many packages to be consolidated into one container that can be loaded onto an aircraft, wooden crates, assembled pallets, or individually wrapped/boxed pieces, known as break bulk cargo. Participants in the air cargo shipping process include shippers, such as individuals and manufacturers; indirect air carriers, also referred to as freight forwarders or regulated agents; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and passenger and all-cargo carriers that store, load, and transport air cargo. A shipper may take its packages to a freight forwarder, or regulated agent, which consolidates cargo from many shippers and delivers it to air carriers. A shipper may also send freight by directly packaging and delivering it to an air carrier’s ticket counter or sorting center, where either the air carrier or a cargo handling agent will sort and load the cargo onto the aircraft. The shipper may also have cargo picked up and delivered by an all-cargo carrier, or choose to take cargo directly to a carrier’s retail facility for delivery. TSA’s responsibilities for securing air cargo include, among other things, establishing security rules and regulations governing domestic and foreign passenger air carriers that transport cargo, domestic and foreign all-cargo carriers that transport cargo, and domestic indirect air carriers. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and indirect air carriers through compliance inspections, and, in coordination with DHS’s Science and Technology (S&T) Directorate, for conducting research and development of air cargo security technologies. Air carriers (passenger and all-cargo) are responsible for implementing TSA security requirements, predominantly through a TSA-approved security program that describes the security policies, procedures, and systems the air carrier will implement and maintain in order to comply with TSA security requirements. Air carriers must also abide by security requirements issued by TSA through security directives or emergency amendments to air carrier security programs. Air carriers use several methods and technologies to screen domestic and inbound air cargo.
These include manual physical searches and comparisons between airway bills and cargo contents to ensure that the contents of the cargo shipment match the cargo identified in documents filed by the shipper, as well as using approved technology, such as X-ray systems, explosive trace detection systems, decompression chambers, explosive detection systems, and certified explosive detection canine teams. Under TSA’s security requirements for domestic and inbound air cargo, passenger air carriers are currently required to randomly screen a specific percentage of nonexempt air cargo pieces listed on each airway bill. All-cargo carriers are required to screen 100 percent of air cargo that exceeds a specific weight threshold. As of October 2006, domestic indirect air carriers are also required, under certain conditions, to screen a certain percentage of air cargo prior to its consolidation. TSA, however, does not regulate foreign freight forwarders, or individuals or businesses that have their cargo shipped by air to the United States. Under the Implementing Recommendations of the 9/11 Commission Act of 2007, DHS is required to implement a system to screen 50 percent of air cargo transported on passenger aircraft by February 2009, and 100 percent of such cargo by August 2010. The prescreening of airline passengers who may pose a security risk before they board an aircraft is one of many layers of security intended to strengthen commercial aviation. One component of prescreening is passenger watch-list matching—the process of matching passenger information against the No-Fly and Selectee lists to identify passengers who should be denied boarding or who should undergo additional security scrutiny. Aircraft operators are currently responsible for checking passenger information against the No-Fly and Selectee lists to identify such passengers. To further enhance commercial aviation security and in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA), TSA is developing a program to assume from air carriers the function of matching passenger information against government-supplied terrorist watch lists for domestic flights. Secure Flight is the program through which TSA plans to meet this requirement. Following domestic implementation, TSA, through Secure Flight, plans to assume responsibility from CBP for watch-list matching of passengers on international flights bound to and from the United States. Secure Flight’s mission is to enhance the security of commercial air travel by eliminating inconsistencies in current air carrier watch-list matching; reducing the number of individuals who are misidentified as being on the No-Fly or Selectee list; reducing the risk of unauthorized disclosure of sensitive watch-list information; and integrating the redress process so that individuals are less likely to be improperly or unfairly delayed or prohibited from boarding an aircraft.
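Why name-based matching misidentifies travelers can be seen in a small sketch. The following is a generic illustration of the underlying string-matching problem, not TSA's or the air carriers' actual algorithm; the list entries and passenger names are invented. Even after normalization, an innocent namesake of a listed person still matches, while a simple spelling variant slips through, which is why the program looks to additional data fields and an integrated redress process.

```python
import unicodedata

# Invented watch-list entries; not actual data.
WATCH_LIST = ["JOHN DOE", "JANE Q ROE"]

def normalize(name: str) -> str:
    """Uppercase, strip accents, replace punctuation with spaces, collapse whitespace."""
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    cleaned = "".join(c if c.isalnum() else " " for c in stripped.upper())
    return " ".join(cleaned.split())

NORMALIZED_LIST = {normalize(entry) for entry in WATCH_LIST}

def is_potential_match(passenger_name: str) -> bool:
    """Flag a passenger whose normalized name exactly matches a list entry."""
    return normalize(passenger_name) in NORMALIZED_LIST

print(is_potential_match("John  Doe."))  # True: normalization absorbs formatting noise
print(is_potential_match("Jon Doe"))     # False: a simple spelling variant is missed
# Note: any innocent passenger actually named John Doe is also flagged (True),
# which is the misidentification problem described above.
```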
TSA plans to implement Secure Flight in three releases. During Release One, which is currently ongoing and is scheduled to be completed in March 2008, TSA is developing and testing the Secure Flight system. During Release Two, scheduled to be conducted from April 2008 through August 2008, TSA plans to begin parallel testing with air carriers, during which both Secure Flight and air carriers will perform watch-list matching. Finally, during Release Three, TSA is to develop the capability for “airline cutovers,” during which Secure Flight plans to begin conducting all watch-list matching for domestic air passengers. Release Three is scheduled to begin in September 2008. After Release Three, domestic cutovers are expected to begin in January 2009 and be completed in July 2009. TSA plans to assume from CBP watch-list matching for flights departing from and to the United States sometime after domestic cutovers are completed. Over the last 4 years, we have reported that the Secure Flight program (and its predecessor CAPPS II) had not met key milestones or finalized its goals, objectives, and requirements, and faced significant development and implementation challenges. Acknowledging the challenges it faced with the program, TSA suspended the development of Secure Flight and initiated a reassessment, or re-baselining, of the program in February 2006, which was completed in January 2007. Since our last testimony on Secure Flight in February 2007, we were mandated by the Implementing Recommendations of the 9/11 Commission Act of 2007 to assess various aspects of Secure Flight’s development and implementation. In accordance with the act, we reviewed (1) TSA’s efforts to develop reliable cost and schedule estimates for Secure Flight; (2) progress made by TSA in developing and implementing the Secure Flight system, including the implementation of security controls; (3) TSA’s efforts to coordinate with CBP to integrate Secure Flight with CBP’s watch-list matching function for international flights; (4) TSA’s plans to protect private passenger information under Secure Flight; and (5) DHS’s efforts to assess the effectiveness of the current redress process for passengers misidentified as being on or wrongly assigned to the No-Fly or Selectee list. TSA’s available funding for the Secure Flight program during fiscal year 2007 was $32.5 million. In fiscal year 2008, TSA received $50 million along with statutory authority to transfer up to $24 million to the program, making as much as $74 million available for the program in fiscal year 2008, if necessary. For fiscal year 2009, TSA has requested $82 million in funding to allow the agency to continue development and implementation of the Secure Flight program and the full assumption of the watch-list matching function in fiscal year 2010. According to DHS’s budget execution reports and TSA’s congressional budget justifications, TSA has received appropriations for aviation security totaling about $26 billion since fiscal year 2004. During fiscal year 2004—the first year for which data were available—TSA received about $3.9 billion for aviation security programs, and during fiscal year 2008, it received about $6.1 billion. The President’s budget request for fiscal year 2009 includes about $6.0 billion to continue TSA’s aviation security activities. This total includes about $5.3 billion specifically designated for aviation security and about $0.76 billion for aviation security-related programs, such as Secure Flight, and mandatory fee accounts, such as the Aviation Security Capital Fund. Figure 1 identifies reported aviation security funding for fiscal years 2004 through 2008. TSA has taken significant steps to strengthen the three key elements of the screening system—people (TSOs and private screeners), screening procedures, and technology—but has faced management, planning, and funding challenges.
For example, TSA developed a Staffing Allocation Model to determine TSO staffing levels at airports that reflect current operating conditions, and implemented several initiatives intended to enhance the detection of threat objects, particularly improvised explosives. TSA also proposed modifications to passenger checkpoint screening procedures based on risk (threat and vulnerability information), among other factors, but, as we previously reported, it could do more to evaluate proposed procedures before they are implemented to help ensure that they achieve their intended results. Finally, TSA is exploring new technologies to enhance the detection of explosives and other threats, but continues to face management and funding challenges in developing and fielding technologies at airport checkpoints. Of the approximately $6.0 billion requested for aviation security in the President’s fiscal year 2009 budget request, about $4.0 billion, or approximately 66 percent, is for passenger and checked baggage screening. This includes approximately $3.9 billion to support passenger and checked baggage screening operations, such as TSO salaries and training, and about $154 million for the procurement and installation of checked baggage explosive detection systems. TSA has implemented several efforts intended to strengthen the allocation of its TSO workforce. We reported in February 2004 that staffing shortages and TSA’s hiring process had hindered the ability of some Federal Security Directors (FSD)—the ranking TSA authorities responsible for leading and coordinating security activities at airports—to provide sufficient resources to staff screening checkpoints and oversee screening operations at their checkpoints without using additional measures such as overtime. Since that time, TSA has developed a Staffing Allocation Model to determine TSO staffing levels at airports. In determining staffing allocations, the model takes into account the workload demands unique to each airport based on an estimate of each airport’s peak passenger volume. This input is then processed against certain TSA assumptions about screening passengers and checked baggage—including expected processing rates, required staffing for passenger lanes and baggage equipment based on standard operating procedures, and historical equipment alarm rates. In August 2005, TSA determined that the Staffing Allocation Model contained complete and accurate information on each airport from which to estimate staffing needs, and the agency used the model to identify TSO allocations for each airport. At that time, the staffing model identified a total TSO full-time equivalent allocation need of 42,303 TSOs.
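The following sketch illustrates the general kind of calculation such a staffing model performs: converting an airport's estimated peak passenger volume into a full-time-equivalent requirement through throughput and staffing assumptions. It is a simplified, hypothetical illustration only; the parameter names and values are invented, and TSA's actual model incorporates many more inputs, such as checked baggage operations and historical equipment alarm rates.

```python
# Illustrative-only staffing estimate for a single checkpoint; all parameters hypothetical.

PEAK_PASSENGERS_PER_HOUR = 1_200   # estimated peak passenger volume for the airport
PASSENGERS_PER_LANE_HOUR = 150     # assumed throughput per open screening lane
TSOS_PER_OPEN_LANE = 5             # assumed staffing required per lane under SOPs
OPERATING_HOURS_PER_DAY = 16
HOURS_PER_FTE_DAY = 8

# Lanes needed to absorb peak volume (ceiling division).
lanes_needed = -(-PEAK_PASSENGERS_PER_HOUR // PASSENGERS_PER_LANE_HOUR)

# Staff-hours per day if peak staffing were held across all operating hours
# (a real model varies staffing across the day; this is the simplification).
staff_hours = lanes_needed * TSOS_PER_OPEN_LANE * OPERATING_HOURS_PER_DAY

# Inflate for paid time not worked (leave, training), echoing the kind of
# allowance discussed below (e.g., a roughly 14 percent nonproductive-time factor).
NONPRODUCTIVE_TIME = 0.14
fte_needed = staff_hours / HOURS_PER_FTE_DAY / (1 - NONPRODUCTIVE_TIME)

print(f"Lanes at peak: {lanes_needed}, estimated TSO FTEs: {fte_needed:.1f}")
```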
In addition to the staffing levels identified by the model, TSA sets aside TSO full-time equivalents for needs outside of those considered by the model in its annual allocation run for airports. For example, during the course of the year, certain airports may experience significant changes to their screening operations, such as the arrival of a new airline or the opening of a new terminal. According to TSA officials, the agency established a reserve of 413 TSO full-time equivalents during fiscal year 2007 that can be used to augment the existing force, and began fiscal year 2008 with a reserve of 170 TSO full-time equivalents. TSA plans to continue its use of a reserve force during fiscal year 2009 due to the dynamic nature of airport operations and the need to make staffing adjustments to meet changing operational requirements. Additionally, in order to handle short-term extraordinary needs at airports, TSA established a National Deployment Force—formerly known as the National Screening Force—composed of TSOs and other TSA security staff who can be sent to airports to augment local TSO staff during periods of unusually high passenger volume, such as the Super Bowl. According to TSA, as of February 13, 2008, there were 451 TSOs in the National Deployment Force. TSA’s fiscal year 2009 budget justification states that TSA analyzes each request for support from the National Deployment Force from a cost, benefit, and risk perspective to ensure the optimal use of resources. The budget justification requests $34.3 million for operational expenses for the National Deployment Office—the office responsible for, among other things, deploying the National Deployment Force to those airports experiencing significant staffing shortfalls. FSDs we interviewed during 2006 as part of our review of TSA’s staffing model generally reported that the model is a more accurate predictor of staffing needs than TSA’s prior staffing model, which took into account fewer factors that affect screening operations. However, FSDs identified that some assumptions used in the fiscal year 2006 staffing model did not reflect actual operating conditions. For example, FSDs noted that the staffing model’s assumption of a 20 percent part-time workforce—measured in terms of full-time equivalents—had been difficult to achieve, particularly at larger (category X and I) airports, because of, among other things, economic conditions leading to competition for part-time workers, remote airport locations coupled with a lack of mass transit, TSO base pay that had not changed since fiscal year 2002, and part-time workers’ desire to convert to full-time status. We reported in February 2007 that TSA data showed that for fiscal years 2005 and 2006, the nation’s category X airports had a TSO workforce composed of about 9 percent part-time equivalents, and the part-time TSO attrition rate nationwide remained considerably higher than the rate for full-time personnel (approximately 46 percent versus 16 percent for full-time TSOs during fiscal year 2006). According to TSA’s fiscal year 2009 congressional budget justification, full-time TSO attrition nationwide decreased to 11.6 percent during 2007, and part-time attrition decreased to 37.2 percent. FSDs also expressed concern that the model did not specifically account for the requirement that TSOs receive 3 hours of recurrent training per week, averaged over a fiscal year quarter. FSDs further identified that the model for fiscal year 2006 did not account for TSOs’ time away from screening to perform operational support duties, such as payroll processing, scheduling, distribution and maintenance of uniforms, data entry, and workers’ compensation processing. To help ensure that TSOs are effectively utilized, we recommended that TSA establish a policy for when TSOs can be used to provide operational support. Consistent with our recommendation, in March 2007, TSA issued a management directive that provides guidance on assigning TSOs, through detail or permanent promotion, to duties of another position for a specified period of time. In response to FSDs’ input and the various mechanisms TSA had implemented to monitor the sufficiency of the model’s allocation outputs, TSA made changes to some assumptions in the model for fiscal year 2007.
For example, TSA recognized that some airports cannot likely achieve a 20 percent part-time equivalent level and others, most likely smaller airports, may operate more efficiently with other levels of part-time TSO staff. As a result, for fiscal year 2007, TSA modified the assumption in its Staffing Allocation Model to include a variable part-time goal based on each airport’s historic part-time to full-time TSO ratio. TSA also included an allowance in the model for fiscal year 2007 to provide additional assurance that TSOs complete required training on detecting improvised explosive devices, as well as an allowance for operational support duties to account for the current need for TSOs to perform these duties. In our February 2007 report on the Staffing Allocation Model, we recommended that TSA establish a formal, documented plan for reviewing all of the model assumptions on a periodic basis to ensure that the assumptions result in TSO staffing allocations that accurately reflect operating conditions that may change over time. TSA agreed with our recommendation and, in December 2007, developed a Staffing Allocation Model Rates and Assumptions Validation Plan. The plan identifies the process TSA will use to review and validate the model’s assumptions on a periodic basis. Although we did not independently review TSA’s staffing allocation for fiscal year 2008, TSA’s fiscal year 2009 budget justification stated that the agency achieved operational and efficiency gains that enabled it to implement or expand several workforce initiatives involving TSOs, which are summarized in table 2. For example, TSA reported making several changes to the fiscal year 2008 Staffing Allocation Model, such as decreasing the allocation for time paid not worked (annual, sick, and military leave; compensatory time; and injury time off) from 14.5 percent to 14 percent based on past performance data. TSA also reported revising exit lane staffing so that it is based on each checkpoint’s unique operating hours rather than staffing all exit lanes based on the maximum open hours for any checkpoint at an airport. TSA’s fiscal year 2009 budget justification includes $2.7 billion for the federal TSO workforce, which represents an increase of about $80 million over fiscal year 2008. Of the $80 million increase, about $38 million is for cost of living adjustments, and about $42 million is for the annualization of the full-year cost of the Behavior Detection Officer and Aviation Direct Access Screening Program positions. According to the budget justification, the $2.7 billion includes funding for compensation and benefits of 45,643 full-time equivalent personnel—approximately 46,909 TSOs and about 1,100 screening managers. Table 3 identifies the total TSO and screening manager full-time equivalents and the funding levels for fiscal years 2005 through 2008, as reported by TSA. In addition to TSA’s efforts to deploy a federal TSO workforce, TSA has taken steps to strengthen passenger checkpoint screening procedures to enhance the detection of prohibited items. However, we have identified areas where TSA could improve its evaluation and documentation of proposed procedures. In April 2007, we reported that TSA officials considered modifications to the agency’s standard operating procedure (SOP) based on risk information (threat and vulnerability information), daily experiences of staff working at airports, and complaints and concerns raised by the traveling public.
In addition to these factors, consistent with its mission, TSA senior leadership made efforts to balance the impact that proposed SOP modifications would have on security, efficiency, and customer service when deciding whether proposed SOP modifications should be implemented. For example, in August 2006, TSA sought to increase security by banning liquids and gels from being carried onboard aircraft in response to the alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. In September 2006, after obtaining more information about the alleged terrorist plot—including information from the United Kingdom and U.S. intelligence communities, discussions with explosives experts, and testing of explosives—TSA officials decided to lift the total ban on liquids and gels to allow passengers to carry small amounts of liquids and gels onboard aircraft. TSA officials also lifted the total ban because banning liquids and gels as carry-on items was shown to affect both efficiency and customer service. In an effort to harmonize its liquid screening procedures with those of other countries, in November 2006, TSA revised its procedures to allow 3.4 fluid ounces of liquids, gels, and aerosols onboard aircraft. We further reported that for more significant SOP modifications, TSA first tested the proposed modifications at selected airports to help determine whether the changes would achieve their intended purpose, as well as to assess their impact on screening operations. TSA’s efforts to collect quantitative data by testing proposed procedures before deciding whether to implement or reject them are consistent with our past work, which has shown the importance of data collection and analyses to support agency decision making. However, we reported that TSA’s data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose. Specifically, we found that for tests of proposed screening procedures TSA conducted from April 2005 through December 2005, including the removal of small scissors and small tools from the prohibited items list, although TSA collected some data on the efficiency of and customer response to the procedures at selected airports, the agency generally did not collect the type of data or conduct the necessary analysis that would yield information on whether the proposed procedures would achieve their intended purpose. We also found that TSA’s documentation on proposed modifications to screening procedures was not complete. We recommended that TSA develop sound evaluation methods, when possible, to assess whether proposed screening changes would achieve their intended purpose, and that it generate and maintain documentation on proposed screening changes that are deemed significant. DHS generally agreed with our recommendations, and TSA has taken steps to implement them. For example, for several proposed SOP changes considered during the fall of 2007, TSA provided documentation that identified the sources of the proposed changes and the reasons why the agency decided to accept or reject the proposed changes.
With regard to our recommendation to develop sound evaluation methods when assessing proposed SOP modifications, when possible, TSA reported that it is working with subject matter experts to ensure that the agency’s operational tests related to proposed changes to screening procedures are well designed and executed, and produce results that are scientifically valid and reliable. These actions, when fully implemented, should enable TSA to better justify its passenger screening procedure modifications to Congress and the traveling public. Once proposed SOP changes have been implemented, it is important that TSA have a mechanism in place to ensure that TSOs are complying with established procedures. In our April 2007 report, we reported that TSA monitors TSO compliance with passenger checkpoint screening SOPs through its performance accountability and standards system and through local and national covert testing. According to TSA officials, the performance accountability and standards system was developed in response to our 2003 report, which recommended that TSA establish a performance management system that makes meaningful distinctions in employee performance, and in response to input from TSA airport staff on how to improve passenger and checked baggage screening measures. This system is used by TSA to assess agency personnel at all levels on various competencies, including, among other things, technical proficiency. During fiscal year 2007, the technical proficiency component of the performance accountability and standards system for TSOs focused on TSO knowledge of screening procedures; image recognition; proper screening techniques; and the ability to identify, detect, and locate prohibited items. In addition to implementing the performance accountability and standards system, TSA also conducts local and national covert tests to evaluate, in part, the extent to which TSOs’ noncompliance with SOPs affects their ability to detect simulated threat items hidden in accessible property or concealed on a person. In our April 2007 report, we also noted that some TSA airport officials had experienced resource challenges in implementing these compliance monitoring efforts. TSA headquarters officials stated that they were taking steps, such as automating the performance accountability and standards system data entry functions, to address this challenge. Since then, TSA has also implemented a new local covert testing program nationwide, known as the Aviation Screening Assessment Program. This program is intended to measure TSO performance using realistic and standardized test scenarios and to establish a national baseline measurement. According to TSA’s fiscal year 2009 congressional budget justification, this national baseline will be established by conducting a total of 48,000 tests annually. TSA plans to use the test results to identify vulnerabilities across screening operations and to provide recommendations for addressing the vulnerabilities to various stakeholders within TSA. We reported in February 2007 that DHS S&T and TSA were exploring new passenger checkpoint screening technologies to enhance the detection of explosives and other threats. However, we found that limited progress had been made in fielding explosives detection technology at passenger screening checkpoints, in part due to challenges DHS S&T and TSA faced in coordinating research and development efforts.
TSA requested $103.2 million in its fiscal year 2009 budget request for checkpoint technology and checkpoint reconfiguration. Specifically, the request includes $91.7 million to, among other things, procure and deploy Advanced Technology Systems to further extend explosives and prohibited item detection coverage at category X and I checkpoints. The budget request states that equipment purchases may also include the Whole Body Imager, Bottled Liquids Scanner, Cast and Prosthesis Imager, shoe scanner systems, technology integration solutions, additional units or upgrades to legacy equipment, and other technologies. TSA further requested $11.5 million to support the optimization and reconfiguration of additional checkpoint lanes to accommodate anticipated airport growth and maintain throughput at the busiest airport checkpoints. Of the various emerging checkpoint screening projects funded by TSA and DHS S&T, the explosive trace portal and the bottled liquids scanning device have been deployed to airport checkpoints, and a number of additional projects have initiated procurements or are being researched and developed. Projects that have initiated procurements include the cast and prosthesis scanner and advanced technology systems. Projects currently in research and development include the checkpoint explosives detection system and the whole body imager. Table 4 provides a description of passenger checkpoint screening technologies that have been deployed as well as technologies that have initiated procurements or are in research and development. This list of technologies is limited to those for which TSA could provide documentation. TSA is planning to develop and deploy additional technologies. We are continuing to assess TSA’s deployment of new checkpoint screening technologies in our ongoing work and expect to report on the results of this work later this year. Despite TSA’s efforts to develop passenger checkpoint screening technologies, we reported that limited progress has been made in fielding explosives detection technology at airport checkpoints. For example, we reported that TSA had anticipated that the explosives trace portals would be in operation throughout the country during fiscal year 2007. However, due to performance and maintenance issues, TSA halted the acquisition and deployment of the portals in June 2006. As a result, TSA has fielded less than 25 percent of the 434 portals it projected it would deploy by fiscal year 2007. TSA officials are considering what to do with the portals that were procured and are currently in storage. In addition to the portals, TSA has fallen behind in its projected acquisition of other emerging screening technologies. For example, we reported that the acquisition of 91 Whole Body Imagers was previously delayed in part because TSA needed to develop a means to protect the privacy of passengers screened by this technology. TSA also reduced the initial number of cast and prosthesis scanner units to be procured during fiscal year 2007 due to unexpected maintenance cost increases. Furthermore, fiscal year 2008 funding to procure additional cast and prosthesis scanners was shifted to procure more Whole Body Imagers and Advanced Technology Systems due to a change in priorities. While TSA and DHS have taken steps to coordinate the research, development, and deployment of checkpoint technologies, we reported in February 2007 that challenges remained.
For example, TSA and DHS S&T officials stated that they encountered difficulties in coordinating research and development efforts due to reorganizations within TSA and S&T. A senior TSA official further stated at the time that, while TSA and DHS S&T had executed a memorandum of understanding to establish the services that the Transportation Security Laboratory is to provide to TSA, coordination with S&T remained a challenge because the organizations had not fully implemented the terms of the agreement. Since our February 2007 testimony, according to TSA and S&T officials, coordination between the two organizations has improved. We also reported that TSA did not have a strategic plan to guide its efforts to acquire and deploy screening technologies, and that the lack of a strategic plan or approach could limit TSA’s ability to deploy emerging technologies at those airport locations deemed at highest risk. The Consolidated Appropriations Act, 2008, provides that, of TSA’s appropriated funds for Transportation Security Support, $10,000,000 may not be obligated until the Secretary of Homeland Security submits to the House and Senate Committees on Appropriations detailed expenditure plans for checkpoint support and explosive detection systems refurbishment, procurement, and installation on an airport-by-airport basis for fiscal year 2008, along with the strategic plan for checkpoint technologies previously requested by the committees. The act further requires that the expenditure and strategic plans be submitted no later than 60 days after the date of enactment of the act (enacted December 26, 2007). According to TSA officials, they currently plan to submit the strategic plan to Congress by June 2008. We will continue to evaluate DHS S&T’s and TSA’s efforts to research, develop, and deploy checkpoint screening technologies as part of our ongoing review. TSA has taken steps to enhance domestic and inbound air cargo security, but more work remains to strengthen this area of aviation security. For example, TSA has issued an Air Cargo Strategic Plan that focused on securing the domestic air cargo supply chain. However, in April 2007, we reported that this plan did not include goals and objectives for addressing the security of air cargo transported into the United States from another country, which presents different security challenges than cargo transported domestically. We also reported that TSA had not conducted vulnerability assessments to identify the range of security weaknesses that could be exploited by terrorists related to air cargo operations, and recommended that TSA develop a methodology and schedule for completing these assessments. In response, in part, to our recommendation, TSA implemented an Air Cargo Vulnerability Assessment program and plans to complete assessments of all Category X airports by 2009. In addition, we reported that TSA had established requirements for air carriers to randomly screen air cargo but had exempted some domestic and inbound cargo from screening. To address these exemptions, TSA issued a security directive and emergency amendment in October 2006 to domestic and foreign air carriers operating within and from the United States that limited the screening exemptions.
Moreover, based on our recommendation to systematically analyze compliance inspection results and use the results to target future inspections, TSA recently reported that the agency has increased the number of inspectors dedicated to conducting domestic air cargo compliance inspections, and has begun analyzing the results of these inspections to focus future inspections on those entities that have the highest rates of noncompliance, as well as on newly approved entities that have yet to be inspected. With respect to inbound air cargo, we reported that TSA lacked an inspection plan with performance goals and measures for its inspection efforts, and recommended that TSA develop such a plan. In response to our recommendation, TSA officials stated that the agency formed an International Cargo Working Group to develop inspection prompts to guide inspectors in their examinations of foreign and U.S. air cargo operators departing from foreign locations to the United States. In addition to taking steps to strengthen inspections of air cargo, TSA is working to enhance air cargo screening technologies. Specifically, we reported in October 2005 and again in April 2007 that TSA, working with DHS’s S&T, was developing and pilot testing a number of technologies to assess their applicability to screening and securing air cargo. According to TSA officials, the agency will determine whether it will require the use of any of these technologies once it has completed its assessments and analyzed the results. Finally, TSA is taking steps to compile and analyze information on air cargo security practices used abroad to identify those that may strengthen DHS’s overall air cargo security program, as we recommended. According to TSA officials, the design of the Certified Cargo Screening Program is based on the agency’s review of foreign countries’ models for using government-certified shippers and freight forwarders to screen air cargo earlier in the supply chain. TSA officials believe that this program will assist the agency in meeting the requirement to screen 100 percent of air cargo transported on passenger aircraft by August 2010, as mandated by the Implementing Recommendations of the 9/11 Commission Act of 2007. We have not independently reviewed the Certified Cargo Screening Program. DHS has taken steps towards applying a risk-based management approach to addressing air cargo security, including conducting assessments of the threats posed to air cargo operations. However, we have reported that opportunities exist to strengthen these efforts. Applying a risk management framework to decision making is one tool to help provide assurance that programs designed to combat terrorism are properly prioritized and focused. As part of TSA’s risk-based approach, TSA issued an Air Cargo Strategic Plan in November 2003 that focused on securing the domestic air cargo supply chain. However, in April 2007, we reported that this plan did not include goals and objectives for addressing inbound air cargo security, that is, cargo transported into the United States from another country, which presents different security challenges than cargo transported domestically. To ensure that a comprehensive strategy for securing inbound air cargo exists, we recommended that DHS develop a risk-based strategy to address inbound air cargo security that defines TSA’s and CBP’s responsibilities for ensuring the security of inbound air cargo.
In response to our recommendation, CBP issued its International Air Cargo Security Strategic Plan in June 2007. While this plan identifies how CBP will partner with TSA, it does not specifically address TSA’s responsibilities in securing inbound air cargo. According to TSA officials, the agency plans to revise its Air Cargo Strategic Plan during the third quarter of fiscal year 2008, and will incorporate a strategy for addressing inbound air cargo security, including how the agency will partner with CBP. TSA reported that the updated strategic plan will also incorporate the requirement that TSA develop a system to screen 100 percent of air cargo prior to its transport on passenger aircraft, as required by the Implementing Recommendations of the 9/11 Commission Act of 2007. In addition to a strategic plan, a risk management framework in the homeland security context should include risk assessments, which typically involve three key elements—threats, vulnerabilities, and criticality or consequence. Information from these three assessments provides input for setting priorities, evaluating alternatives, allocating resources, and monitoring security initiatives. In September 2005, TSA’s Office of Intelligence completed an overall threat assessment for air cargo, which identified general and specific threats to both domestic and inbound air cargo. However, in October 2005 and again in April 2007, we reported that TSA had not conducted vulnerability assessments to identify the range of security weaknesses that could be exploited by terrorists related to air cargo operations, and recommended that TSA develop a methodology and schedule for completing these assessments. In response, in part, to our recommendation, TSA implemented an Air Cargo Vulnerability Assessment program in November 2006. TSA officials reported that, to date, the agency has completed vulnerability assessments at six domestic airports and plans to complete vulnerability assessments at all domestic Category X airports by 2009. Officials further stated that the results of these assessments will assist the agency with its efforts to collaborate with foreign governments to conduct joint assessments at foreign airports that will include a review of air cargo vulnerabilities. In October 2005 and April 2007, we also reported that TSA had established requirements for air carriers to randomly screen air cargo, but had exempted some domestic and inbound cargo from screening. We recommended that TSA examine the rationale for existing domestic and inbound air cargo screening exemptions and determine whether such exemptions left the air cargo system unacceptably vulnerable. TSA established a working group to examine the rationale for these exemptions, and in October 2006, issued a security directive and emergency amendment to domestic and foreign passenger air carriers operating within and from the United States that limited the screening exemptions. The security directive and emergency amendment, however, did not apply to inbound air cargo. The Implementing Recommendations of the 9/11 Commission Act of 2007 requires DHS to conduct an assessment of screening exemptions granted under 49 U.S.C. § 44901(i)(1) for cargo transported on passenger aircraft and an analysis to assess the risk of maintaining such exemptions.
According to TSA, the agency will propose a number of revisions to certain alternate means of screening for particular cargo types transported on passenger aircraft departing from both domestic and foreign locations in its assessment of current screening exemptions. Although this report was due to Congress by December 3, 2007, it has yet to be submitted. We also reported that TSA conducted compliance inspections of air carriers to ensure that they are meeting existing air cargo security requirements. However, in October 2005, we found that TSA had not developed measures to assess the adequacy of air carrier compliance with air cargo security requirements, or assessed the results of its domestic compliance inspections to target higher-risk air carriers or indirect air carriers for future reviews. TSA has since reported that the agency has increased the number of inspectors dedicated to conducting domestic air cargo inspections, and has begun analyzing the results of the compliance inspections to focus future inspections on those entities that have the highest rates of noncompliance, as well as on newly approved entities that have yet to be inspected. With respect to inbound air cargo, we reported in April 2007 that TSA lacked an inspection plan with performance goals and measures for its inspection efforts, and recommended that TSA develop such a plan. In February 2008, TSA officials stated that the agency formed an International Cargo Working Group to develop inspection prompts to guide International Cargo Transportation Security Inspectors in their inspections of the various air cargo operations. According to TSA, using these prompts will allow the agency to evaluate both foreign and U.S. air cargo operators departing from foreign locations to the United States. In addition to taking steps to strengthen inspections of air cargo, TSA is working to enhance air cargo screening technologies. Specifically, we reported in October 2005 and again in April 2007 that TSA, working with S&T, was developing and pilot testing a number of technologies to assess their applicability to screening and securing air cargo. These efforts included an air cargo explosives detection pilot program implemented at three airports; an EDS pilot program; an air cargo security seals pilot; the use of hardened unit-loading devices; and the use of pulsed fast neutron analysis. According to TSA officials, the agency will determine whether it will require the use of any of these technologies once it has completed its assessments and analyzed the results. As of February 2008, TSA had provided time frames for completing only one of these assessments, the EDS cargo pilot program. DHS officials added that once the department has determined which technologies it will approve for use for domestic air cargo, they will consider the use of these technologies for enhancing the security of inbound air cargo shipments. According to TSA officials, the federal government and the air cargo industry face several challenges that must be overcome to effectively implement any of these technologies to screen or secure air cargo.
These challenges include factors such as the nature, type, and size of cargo to be screened; environmental and climatic conditions that could affect the functionality of screening equipment; slow screening throughput rates; staffing and training issues for individuals who screen air cargo; the location of air cargo facilities; the cost and availability of screening technologies; and employee health and safety concerns, such as worker exposure to radiation. According to TSA officials, there is no single technology capable of efficiently and effectively screening all types of air cargo for the full range of potential terrorist threats, including explosives and weapons of mass destruction. Our review of inbound air cargo security also identified some security practices that are currently not used by TSA but that could help strengthen the security of inbound and domestic air cargo supply chains. In April 2007, we recommended that TSA, in collaboration with foreign governments and the U.S. air cargo industry, systematically compile and analyze information on air cargo security practices used abroad to identify those that may strengthen the department’s overall air cargo security program. TSA agreed with this recommendation and, since the issuance of our report, proposed a new program, the Certified Cargo Screening Program, to assist the agency in meeting the requirement to screen 100 percent of air cargo transported on passenger aircraft by August 2010, as mandated by the Implementing Recommendations of the 9/11 Commission Act of 2007. According to TSA officials, in designing its Certified Cargo Screening Program, the agency reviewed two foreign countries’ models for using government-certified screeners to screen air cargo earlier in the supply chain. TSA officials stated that the Certified Cargo Screening Program is intended to allow large shippers and manufacturers certified by TSA, referred to as TSA-Certified Cargo Screening Facilities, to screen air cargo before it leaves the factory. According to TSA officials, employees performing the screening at these certified facilities would need to undergo a security threat assessment and be trained in screening and inspection procedures. The facilities would also have to purchase the necessary screening equipment. After screening, the cargo would be secured with a tamper-resistant seal and transported to the airport for shipment. The air carriers will be responsible for ensuring that 100 percent of the cargo they accept for transport has been screened by TSA-Certified Cargo Screening Facilities. In January 2008, TSA began phase one of its pilot testing at one airport and plans to expand this pilot program to five other airports within three months. According to TSA, as part of its plans to screen 100 percent of air cargo on passenger aircraft, the agency also plans to pilot test a proposed system for targeting specific domestic air cargo shipments, referred to as Freight Assessment. Specifically, the Freight Assessment System will identify elevated-risk cargo at various points in the supply chain for additional scrutiny, which could include secondary screening. TSA, however, did not provide us with information on the duration of the pilot test or when the Freight Assessment System would be fully operational.
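TSA has not made public how the Freight Assessment System will score shipments, so the sketch below is purely illustrative: it shows the general shape of a rule-based targeting scheme in which weighted shipment attributes are summed and compared against a threshold. Every factor, weight, and threshold here is a hypothetical assumption, not TSA’s methodology.

```python
# Hypothetical sketch of rule-based cargo risk scoring; none of these
# factors, weights, or thresholds come from TSA.
from dataclasses import dataclass

@dataclass
class Shipment:
    shipper_known: bool      # shipper has an established security history
    origin_flagged: bool     # origin facility flagged in prior inspections
    paperwork_anomaly: bool  # discrepancies in the shipping documentation
    cargo_type: str          # e.g., "general", "perishable", "unknown"

def risk_score(s: Shipment) -> int:
    """Sum weighted risk factors; a higher score means more scrutiny."""
    score = 0
    if not s.shipper_known:
        score += 3
    if s.origin_flagged:
        score += 2
    if s.paperwork_anomaly:
        score += 2
    if s.cargo_type == "unknown":
        score += 1
    return score

SECONDARY_SCREENING_THRESHOLD = 4  # hypothetical cut-off

def needs_secondary_screening(s: Shipment) -> bool:
    return risk_score(s) >= SECONDARY_SCREENING_THRESHOLD

# An unknown shipper with documentation discrepancies exceeds the threshold.
suspect = Shipment(shipper_known=False, origin_flagged=False,
                   paperwork_anomaly=True, cargo_type="unknown")
print(needs_secondary_screening(suspect))  # True (score 6 >= 4)
```

In an operational system, the factors and weights would presumably be derived from threat intelligence and inspection history rather than fixed by hand, and the threshold would be tuned to balance screening workload against risk.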
For fiscal year 2009, the President’s budget includes a request of about $100 million for TSA’s air cargo security program. Specifically, TSA is requesting $51.9 million for 450 air cargo inspectors, $26.5 million for 170 canine teams, and $15.9 million for the Certified Cargo Screening Program. TSA has made substantial progress in instilling more discipline and rigor into Secure Flight’s development and implementation since we last reported on the program in February 2007, but challenges remain that may hinder the program’s progress moving forward. TSA developed a detailed concept of operations, established a cost and schedule baseline, and drafted key management and systems development documents, among other systems development efforts. TSA also has plans to integrate DHS’s domestic and international watch-list matching functions, and has strengthened efforts to protect passenger information, including publishing a proposed rulemaking for the Secure Flight Program and privacy notices that address key privacy protection principles, consistent with our past recommendations. However, despite these successes, TSA continues to face some program management challenges in developing the program. Specifically, while TSA developed a life-cycle cost estimate and an integrated master schedule for Secure Flight, the program has not fully followed best practices that would help to ensure reliable and valid cost and schedule estimates, and the program schedule has experienced slippages. We also found that TSA can strengthen its systems development efforts by demonstrating that it has fully implemented its risk management plan, incorporated end-to-end testing as part of the program’s testing strategy, and more fully addressed system security requirements and vulnerabilities. We also found that DHS and TSA can strengthen their assessment of the current redress process for passengers who believe they were inappropriately inconvenienced during the watch-list matching process. TSA officials stated that they have considerably strengthened Secure Flight’s systems development efforts, and have already taken or plan to take action to address the issues we identified. TSA has taken numerous steps to address previous GAO recommendations related to strengthening Secure Flight’s development and implementation, as well as additional steps designed to strengthen the program. TSA has, among other things, developed a detailed, conceptual description of how the system is to operate, commonly referred to as a concept of operations; established a cost and schedule baseline; developed security requirements; developed test plans; conducted outreach with key stakeholders; published a notice of proposed rulemaking on how Secure Flight is to operate; and issued a guide to key stakeholders (e.g., air carriers and CBP) that defines, among other things, system data requirements. Collectively, these efforts have enabled TSA to more effectively manage the program’s development and implementation. TSA has also taken steps to integrate the domestic watch-list matching function with the international watch-list matching function currently operated by CBP. We previously reported that TSA was developing Secure Flight to conduct watch-list matching for passengers on domestic flights while, separately, CBP was revising its process for conducting watch-list matching for passengers on flights bound to and from the United States, with limited coordination in their efforts.
We reported that this lack of coordination could result in a duplication of effort and conflicting results from domestic and international watch-list matching, as well as create burdens for air carriers, which may have been required to operate two separate systems to conduct the domestic and international watch-list matching functions. We recommended that DHS take additional steps and make key policy and technical decisions that were necessary to more fully coordinate these programs. TSA and CBP have since worked with DHS to develop a strategy called the One DHS Solution, which is to align the two agencies’ domestic and international watch-list matching processes, information technology systems, and regulatory procedures to provide a seamless interface between DHS and the airline industry. In line with this strategy, the agencies have agreed that TSA will take over international watch-list matching from CBP, with CBP continuing to perform, among other things, its border-related functions. Further, TSA and CBP have coordinated their efforts to facilitate consistency across their programs. For example, in August 2007, they jointly developed and issued a user’s guide to the airlines and other stakeholders specifying the data that agencies will need to request from passengers in the future to minimize the impact on systems programming due to the integration of the two programs. TSA and CBP officials plan to pursue further integration as they progress towards developing and implementing the watch-list matching function for international flights. TSA has also strengthened its efforts to protect passenger information. For example, TSA issued the Secure Flight Notice of Proposed Rulemaking (NPRM), which identifies DHS’ plans to assume watch-list matching responsibilities from air carriers for domestic flights (72 Fed. Reg. 48,356 (Aug. 23, 2007)), and published privacy notices that address key privacy protection principles. These notices describe the information that will be collected from passengers and air carriers, as well as the purpose and planned uses of the data to be collected. TSA also developed a Program Privacy Architecture describing key aspects of TSA’s plans to protect private passenger information, such as embedding privacy experts into program teams, developing privacy requirements documentation, and implementing technical controls to protect privacy such as network security controls. We will continue to monitor these efforts as part of our ongoing work to ensure that privacy protections continue to be appropriately considered. Although TSA has developed a life-cycle cost estimate and maintains an integrated master schedule for Secure Flight, the program has not fully followed best practices for developing reliable and valid cost and schedule estimates, and several program milestones have been missed or have slipped. The Office of Management and Budget (OMB) endorsed the use of GAO’s Cost Assessment Guide in the development of life-cycle cost and program schedule estimates. The ability to generate reliable cost and schedule estimates is a critical function necessary to support OMB’s capital programming process. Without adhering to these best practices in the development of its cost and schedule estimates, the Secure Flight program is at increased risk of cost overruns, missed deadlines, and performance shortfalls. Life-cycle cost estimate. We found that TSA has not fully followed best practices for developing a reliable and valid life-cycle cost estimate.
We assessed the Secure Flight cost estimate against our Cost Assessment Guide’s 12-step process for creating cost estimates, which, if followed correctly, should result in high-quality, reliable, and valid cost estimates. DHS’s Cost-Benefit Analysis Guidebook, which TSA program officials stated the agency used to develop the life-cycle cost estimate for Secure Flight, contains most of the best practices outlined in our Guide. TSA followed some of these practices in developing its cost estimate, including defining the purpose of the program and of the estimate; identifying many program cost elements, including expenditures for facilities, hardware, and software; and identifying the numbers of staff, their pay, and associated travel and training costs, among other elements. However, for other best practices, it is unclear whether TSA followed them or did not address them in developing its estimate. For example, it is unclear whether the cost estimate had been updated to reflect the current program, because the detailed support for the estimate was produced between 2004 and 2006 and does not reflect the current program plan. In addition, the cost estimate does not capture all key costs. For example, the estimate does not capture costs beyond 2012 even though the system is expected to be operational beyond that date; Secure Flight’s Acquisition Program Baseline states that life-cycle costs will run from fiscal year 2002 through fiscal year 2020 and assumes operation of the program through 2020. The cost estimate documentation also did not provide a step-by-step description of the cost estimating process, data sources, and methods used to develop the underlying cost elements, consistent with best practices. Finally, TSA did not analyze the level of certainty it had in its estimate, and an independent cost estimate was not developed to assess the reasonableness of the estimate, consistent with best practices. TSA officials stated that the program’s cost figures were updated in 2007 and continue to be updated as changes warrant. Officials further stated that their estimates were prepared in accordance with DHS and OMB guidance and were reviewed and approved by DHS and OMB. However, without adhering to the best practices discussed above, as recommended by OMB, TSA’s cost estimate may not provide a meaningful baseline from which to track progress and effectively support investment decision making. Schedule estimate. We found that TSA also did not fully follow best practices for developing a reliable and valid schedule estimate. GAO’s Cost Assessment Guide includes nine best practices, which, if followed correctly, should result in high-quality, reliable, and valid schedule estimates. Without a reliable schedule baseline and careful monitoring of its status, a program may not be able to determine when forecasted completion dates differ from planned dates. TSA has made progress in developing a reliable and valid schedule estimate, including capturing key activities and accounting for the development of program requirements and testing. However, TSA officials could not provide evidence that their scheduling software can produce a critical path (i.e., the longest path of sequential activities in a schedule) driven by discrete lower-level tasks. Best practices call for the critical path to be generated using scheduling software.
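As a concrete illustration of what such software does, a critical path can be computed by treating tasks and their dependencies as a directed acyclic graph and finding the longest-duration chain through it; any slip along that chain delays the program’s finish date. The tasks and durations in this minimal sketch are hypothetical, not Secure Flight’s actual schedule.

```python
# Minimal sketch of how scheduling software derives a critical path:
# the longest-duration chain through the task dependency graph.
# Tasks and durations here are hypothetical.
from functools import lru_cache

durations = {"requirements": 30, "design": 45, "build": 60,
             "security_test": 25, "end_to_end_test": 40}   # working days
depends_on = {"design": ["requirements"], "build": ["design"],
              "security_test": ["build"], "end_to_end_test": ["build"]}

@lru_cache(maxsize=None)
def finish(task: str) -> int:
    """Earliest finish: own duration plus the latest predecessor finish."""
    preds = depends_on.get(task, [])
    return durations[task] + max((finish(p) for p in preds), default=0)

def critical_path() -> list:
    """Walk back from the latest-finishing task through whichever
    predecessor determines its start; that chain is the critical path."""
    task = max(durations, key=finish)
    path = [task]
    while depends_on.get(task):
        task = max(depends_on[task], key=finish)
        path.append(task)
    return list(reversed(path))

print(critical_path())                     # ['requirements', 'design',
                                           #  'build', 'end_to_end_test']
print(finish(max(durations, key=finish)))  # 175 days to completion
```

Because the walk-back always follows the predecessor with the latest finish, the result is exactly the chain of discrete lower-level tasks that drives the end date, which is what the best practice asks the scheduling tool to demonstrate.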
We also found that the schedule is not fully integrated because several lower-level activities were not connected in a logical manner, as called for by best practices. As a result, the Secure Flight schedule estimate may not provide a meaningful benchmark from which to gauge progress, identify and address potential problems, and make informed decisions. For example, the inability to institute a reliable schedule could affect TSA’s ability to effectively measure contractor performance in meeting deliverables. TSA officials stated that their scheduling software can create a critical path, and that lower-level tasks in their schedule were logically linked together; however, they did not provide evidence that supported this. Since TSA completed a re-baselining of the Secure Flight program and began using its current schedule, the program has missed milestones and experienced schedule slippages. For example, while TSA reports that it has met most of its March 2007 schedule milestones to date, the August 2007 milestone for developing memoranda of understanding and other written agreements (e.g., service-level agreements) with key Secure Flight stakeholders (e.g., CBP) was missed and has not yet been met. TSA officials attributed schedule slippages in part to an extension in the Secure Flight rulemaking comment period and underestimating the time needed to complete key activities. In addition, TSA has not conducted a schedule risk analysis to determine the level of confidence it has in meeting the system’s completion date, and has not conducted a cost and schedule risk assessment, consistent with best practices. The cost and schedule risk assessment recognizes the interrelationship between schedule and cost and captures the risk that schedule durations and cost estimates may vary due to, among other things, limited data, optimistic estimating, technical challenges, lack of qualified personnel, and too few staff to do the work. Without these assessments, TSA has less assurance that it is effectively managing risk associated with Secure Flight’s cost and schedule. We will continue to assess TSA’s life-cycle cost and schedule estimates as part of our ongoing review of the Secure Flight Program. While TSA has taken numerous steps to strengthen the development of Secure Flight, additional challenges remain. These challenges include (1) implementing the program’s risk management plan, (2) planning and conducting end-to-end testing as part of its overall parallel testing strategy, and (3) addressing information security requirements and vulnerabilities. Risk management. In October 2006, TSA issued a risk management plan for identifying, managing, and mitigating Secure Flight program risks that was consistent with relevant guidance and best practices. TSA also acquired an electronic tool to guide its risk management efforts. However, TSA has not yet provided us with evidence that it has implemented all aspects of the plan, including developing an inventory of risks and related information to demonstrate that its risk management tool has been populated and is being used to identify, prioritize, mitigate, and monitor risk. Federal guidance and related best practices recognize the importance of proactively managing risks during systems development and implementation, and advocate a program’s use of a risk management plan. However, although TSA developed a risk management plan, the agency only recently, in December 2007, established a risk management board to manage program risks as called for by the plan.
TSA officials stated that the risk management board has met three times since December 2007, and, in January 2008, compiled an updated and consolidated inventory of all program risks, including rankings and mitigation strategies. However, TSA officials have not provided us with documentation identifying the board’s activities and resulting risk inventory. Prior to December 2007, in lieu of a formal risk management board, program officials stated that each project team addressed risks as part of biweekly project management meetings. However, we found these efforts to be limited in that the risks discussed did not include priority rankings such as probability and impact, and many did not have mitigation strategies, as required by the program’s risk management plan. In November 2007, TSA hired a risk management coordinator, a position that had been vacant since June 2007. According to program officials, the coordinator has been tasked with supporting the risk management board in implementing the risk management plan and has provided related training for its members. Secure Flight officials stated that although they have not fully implemented their risk management plan, they believe that they are effectively managing program risks through the methods previously discussed, and that, over the past few months, they have enhanced their risk management efforts. However, until the risk management plan is appropriately implemented, there is an increased chance that program risks will not be proactively mitigated and may result in program cost overruns and schedule and performance shortfalls. We will continue to assess TSA’s efforts to manage risk as part of our ongoing review of Secure Flight. End-to-end test planning. Secure Flight does not fully outline plans for end-to-end testing in its overall test and evaluation plan or in other test plans. Federal guidance and related best practices recommend end-to-end testing to verify that the systems that collectively support a program like Secure Flight will interoperate as intended in an operational environment, either actual or simulated. We reported in March 2005 on the importance of Secure Flight end-to-end testing and recommended that TSA perform such testing. TSA agreed with this recommendation. However, Secure Flight’s current test and evaluation master plan only outlines plans for partner organizational entities (e.g., CBP for integration of international watch-list functions) to test their respective parts of the system on their own—rather than a coordinated end-to-end test involving all parties. TSA developed a preliminary working draft of an end-to-end testing strategy, called the parallel testing strategy. However, the plan does not contain provisions for (1) testing that ensures that supporting systems will operate as intended in an operational environment, (2) definitions and dates for key milestone activities and parties responsible for completing them, or (3) the revision of other test plans, such as the test and evaluation master plan, to reflect the performance of end-to-end tests. Secure Flight officials stated that they plan to conduct full end-to-end testing of the program, beginning in the spring of 2008, and that they will reflect this testing in test plans that are still under development. While we commend TSA’s plans to conduct end-to-end testing, the draft of TSA’s test plan that discusses end-to-end testing does not define a scope that extends to all aspects of the program.
Until TSA has well-defined and approved end-to-end test plans and procedures, it will be challenged in its ability to demonstrate that Secure Flight will perform in a way that will allow it to achieve intended program outcomes and results. We will continue to assess TSA’s testing strategy, to include end-to-end testing, as part of our ongoing review of the program. Information security. While the Secure Flight program office has completed important steps to incorporate security into the system’s development, it has not fully completed other steps to ensure security is effectively addressed. Federal standards and guidance identify the need to address information security throughout the life cycle of information systems, and specify a minimum set of security steps needed to effectively incorporate security into a system during its development. The Secure Flight program has performed several steps that incorporate security into the system’s development, including performing a security risk assessment, identifying and documenting recommended security control requirements, and testing and evaluating security controls for the system and incorporating identified weaknesses in remedial action plans. However, other steps pertaining to ensuring that security requirements are tested, preparing security documentation, and conducting certification and accreditation activities were not adequately completed. For example, security requirements planned for Release One did not always trace to test activities for this release. Program officials stated that some security requirements were deferred until future releases because of delays in funding for acquiring specific hardware, and that other requirements required coordination with the information system security official to verify whether they were tested as part of security test and evaluation. In addition, security documentation contained incorrect or incomplete information. To illustrate, the systems security plan did not identify all interconnecting systems with which Secure Flight will interface, such as those operated by the DHS Watch-List Service, the organization that will transmit the watch-list to Secure Flight. Program officials stated that security documentation was outdated or incorrect because there was insufficient time to update the documentation for changes in the computing environment and security requirements. Furthermore, program officials granted an authorization to operate—one of three possible accreditation decisions made in the certification and accreditation process—although the system had 46 known vulnerabilities, including 11 high-risk and 27 moderate-risk vulnerabilities, for which controls had not yet been implemented. Federal guidance and DHS policy provide for an interim authority to operate accreditation when significant restrictions or limitations exist and certain deficiencies and corrective actions need to be addressed within a specified period. Although security officials identified plans of action and milestones for addressing the vulnerabilities within 60 and 90 days for the high and moderate risks, respectively, given their significance, an interim authorization to operate would be the more appropriate determination. In addition, hardware components used to implement controls over user identity and account management (i.e., authentication, logins and passwords, and user roles and privileges), as well as the alternate processing site, had not yet been implemented.
Once implemented, the security controls over these components could have an impact on information security and, therefore, may require a re-accreditation. Program officials chose the authority to operate accreditation because they asserted that the DHS Chief Information Security Officer does not allow interim authorizations. If these security activities are not completed, there is an increased risk that key security controls and requirements may not be fully developed, tested, implemented, or documented. DHS and TSA have not developed a complete set of performance measures to assess the effectiveness of the redress process for passengers inconvenienced as a result of watch-list matching. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. DHS and TSA are developing additional measures for the redress process that they plan to implement when Secure Flight becomes operational. TSA, supported by the Terrorist Screening Center, provides opportunities for airline passengers to seek redress in cases where they experienced inconveniences during the check-in and screening processes because they may have been misidentified as being on, or wrongly assigned to, the terrorist watch-list. The redress process enables these individuals to file an inquiry to have erroneous information corrected in DHS systems that may prevent future delays and inconveniences at the airport. In February 2007, DHS established the Traveler Redress Inquiry Program (TRIP) to serve as the central processing point within the department for redress inquiries. TSA’s Office of Transportation Security Redress (OTSR) is responsible for reviewing redress inquiries submitted by air passengers through TRIP. According to a DHS official, in addition to handling redress applications, TRIP officials review, attempt to address, and respond to written complaint letters received from individuals who have gone through the redress process but are still experiencing screening issues. TRIP and OTSR’s redress program goals are to process redress applications as quickly and as accurately as possible. However, to measure program performance against these goals, TRIP and OTSR currently track only one redress measure, related to the timeliness of case completion, and do not track any performance measures related to program accuracy. Previous GAO work has identified that agencies successful in evaluating performance had measures that demonstrated results, covered multiple priorities, provided useful information for decision making, and addressed important and varied aspects of program performance. TRIP and OTSR officials stated that they do not plan to develop additional performance measures, such as measures related to accuracy of the redress process, but rather are awaiting the implementation of Secure Flight to determine the program’s impact on the redress process before creating additional measures. Secure Flight is intended to reduce the inconveniences experienced by air passengers by taking over from air carriers the responsibility for prescreening passengers in order to ensure consistent and effective use of the cleared list, which should affect the effectiveness of the redress process.
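For illustration only, the sketch below shows how a fuller set of redress measures might be computed from case records: a timeliness measure of the kind TRIP and OTSR already track, plus a simple accuracy measure. The case fields, the 60-day target, and the accuracy definition are all assumptions made for the example; neither agency has published such a formula.

```python
# Hypothetical redress performance measures computed from case records.
# Field names, the 60-day target, and the accuracy definition are
# assumptions for this sketch, not a published DHS or TSA formula.
from statistics import median

# (days_to_complete, resolved_correctly, reopened)
cases = [
    (38, True, False), (55, True, False), (72, False, True),
    (41, True, False), (90, True, True),
]

days = [c[0] for c in cases]
timeliness = sum(d <= 60 for d in days) / len(cases)        # timeliness goal
accuracy = sum(ok and not reopened
               for _, ok, reopened in cases) / len(cases)   # accuracy goal

print(f"median days to complete: {median(days)}")    # 55
print(f"closed within 60 days:   {timeliness:.0%}")  # 60%
print(f"accurate resolutions:    {accuracy:.0%}")    # 60%
```

Tracking an accuracy measure alongside timeliness would cover both of the program’s stated goals of processing applications quickly and accurately.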
In addition to TRIP and OTSR’s performance measures for the redress process, the Secure Flight program office is working with OTSR to develop redress performance measures for the Secure Flight Program. As we reported in February 2007, Secure Flight will use the TSA redress process that is currently available for individuals affected by the air carrier identity-matching processes. Secure Flight is coordinating with OTSR to determine how this process will be integrated with other Secure Flight requirements. Secure Flight and OTSR are jointly developing a set of performance measures and targets covering multiple priorities for redress that are to be implemented when Secure Flight becomes operational, and officials told us that they will follow best practices in the development of these measures. While we commend TSA for developing redress performance measures for the Secure Flight Program, since the program is not scheduled to be implemented until January 2009, DHS and TSA’s current redress process lacks a complete set of measures with which they can assess performance and make program improvements. Since measures are often the key motivators of performance and goal achievement, the program’s overall success is at risk if all priorities are not addressed and information is not obtained to make future adjustments and improvements to the program. By developing and implementing measures that address all program goals now, to include measures related to program accuracy, DHS and TSA would have performance data that would allow them to better manage the redress process in place today, identify and correct any weaknesses, and help ensure accountability to the traveling public that the process is effective. Moreover, such performance data would provide a baseline against which to benchmark Secure Flight’s progress and planned improvements to the redress process. DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation’s aviation system, and should be commended for these efforts. More specifically, TSA developed processes to more efficiently allocate and deploy the TSO workforce, strengthened screening procedures, is working to develop and deploy more effective screening technologies, strengthened the security of air cargo, and improved the development of a program to prescreen passengers against the terrorist watch-list. However, opportunities exist to further strengthen these efforts, in particular in the areas of risk management and program planning and monitoring. Our work has shown—in homeland security and in other areas—that a comprehensive risk management approach can help inform decision makers in the allocation of finite resources to the areas of greatest need. We are encouraged that risk management has been a cornerstone of DHS and TSA policy, and that TSA has incorporated risk-based decision making into a number of its efforts. Despite this commitment, however, TSA will continue to face difficult decisions and trade-offs—particularly as threats to commercial aviation evolve—regarding acceptable levels of risk and the need to balance security with efficiency and customer service. We recognize that doing so will not be easy. In implementing a risk-based approach, DHS and TSA must also address the challenges we identified in our work related to program planning and monitoring.
Without rigorous planning and monitoring, and without knowledge of the effectiveness of the aviation security programs they have implemented, DHS and TSA cannot be sure that they are focusing their finite resources on the areas of greatest need, or that the security programs they have implemented are achieving their desired purpose. One area in which TSA has made considerable progress is in the development and implementation of the Secure Flight Program. Since we last reported on the program in February 2007, TSA has instilled more discipline and rigor into the systems development, and has completed key development and privacy protection activities. Despite this progress, however, it is important that TSA continue to work to strengthen the management of the program. TSA needs to take immediate and strong actions to keep the program on track and increase the likelihood that it will successfully implement Secure Flight on time and within budget while meeting all performance expectations. We found that TSA did not fully follow best practices for developing Secure Flight’s life-cycle cost and schedule estimates. The ability to generate reliable cost and schedule estimates is a critical function necessary to support the Office of Management and Budget capital programming process. Without adhering to these best practices in the development of its cost and schedule estimates, the Secure Flight Program is at increased risk of cost overruns, missed deadlines, and performance shortfalls. In order to help inform management’s decisions regarding the program and to support effective program oversight, it is also important that TSA fully implement the provisions in the program’s risk management plan, to include developing an inventory of risks and reporting the status of risks to management. TSA should also work to plan for complete end-to-end testing of the system to ensure that all interrelated components operate as intended, and strengthen key security controls and activities for the program, including ensuring that security requirements are tested and implemented, and that security documentation is maintained and updated. It is also important that TSA ensure that security risks are addressed in action plans, and that security risks are appropriately monitored so that the system is protected from unauthorized users and abuse. Finally, with respect to passenger redress, DHS and TSA should more thoroughly assess the effectiveness of the current redress process, to include the development of additional performance measures that assess program accuracy, a key goal of the program. To assist TSA in further strengthening the development and implementation of the Secure Flight program, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of the Transportation Security Administration to take the following three actions: Fully incorporate best practices into the development of Secure Flight life-cycle cost and schedule estimates, to include: updating life-cycle cost and schedule estimates; demonstrating that the Secure Flight schedule has the logic in place to identify the critical path, integrates lower-level activities in a logical manner, and identifies the level of confidence in meeting the desired end date; and developing and implementing a plan for managing and mitigating cost and schedule risks, including performing a schedule risk analysis and a cost and schedule risk assessment.
Fully implement the provisions in the program’s risk management plan, to include developing an inventory of risks with prioritization and mitigation strategies, reporting the status of risks and progress to management, and maintaining documentation of these efforts. Finalize and approve Secure Flight’s end-to-end testing strategy, and incorporate end-to-end testing requirements in other relevant test plans, to include the test and evaluation master plan. The strategy and plans should contain provisions for: testing that ensures that the interrelated systems that collectively support Secure Flight will interoperate as intended in an operational environment; and defining and setting dates for key milestone activities and identifying who is responsible for completing each of those milestones and by when. We further recommend that the Secretary of Homeland Security direct the TSA Chief Information Officer to take the following three actions regarding information security for the Secure Flight Program: coordinate with Secure Flight program officials to ensure security requirements are tested and implemented; maintain and update security documentation to align with the current or planned Secure Flight computing environment, including interconnection agreements, in support of certification and accreditation activities; and correct identified high- and moderate-risk vulnerabilities, as addressed in remedial action plans, and assess changes to the computing environment to determine whether re-accreditation of the system is warranted. Finally, to ensure that DHS is able to fully assess the effectiveness of the current redress process for passengers who may have been misidentified during the watch-list matching process, we recommend that the Secretary of Homeland Security and the Assistant Secretary of the Transportation Security Administration re-evaluate redress performance measures and consider creating and implementing additional measures that, consistent with best practices, demonstrate results, cover multiple priorities, and provide useful information for decision making. These measures should further address all program goals, to include the accuracy of the redress process. We provided a draft of the information included in this statement related to our recently completed work on Secure Flight to DHS and TSA for review and comment. We incorporated technical changes to this statement based on TSA’s comments. In commenting on this information, DHS and TSA generally agreed with our recommendations. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected], or Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, Don Adams, Idris Adjerid, Kristy Brown, Chris Currie, Katherine Davis, John DeFerrari, Joe Dewechter, Jennifer Echard, Eric Erdman, Randolph Hite, James Houtz, Anne Laffoon, Thomas Lombardi, Gary Malavenda, Steve Morris, Sara Margraf, Vernetta Marquis, Vickie Miller, Gary Mountjoy, David Plocher, Jamie Pressman, Karen Richey, Karl Seifert, Maria Strudwick, Meg Ullengren, Margaret Vo, and Jenniffer Wilson made contributions to this testimony.
Transportation Security Administration (TSA) funding for aviation security has totaled about $26 billion since fiscal year 2004. This testimony focuses on TSA’s efforts to secure the commercial aviation system through passenger screening, air cargo, and watch-list matching programs, and challenges remaining in these areas. GAO’s comments are based on GAO products issued between February 2004 and April 2007, including selected updates in February 2008. This testimony also addresses TSA’s progress in developing the Secure Flight program, based on work conducted from August 2007 to January 2008. To conduct this work, GAO reviewed systems development, privacy, and other documentation, and interviewed Department of Homeland Security (DHS), TSA, and contractor officials. DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation’s commercial aviation system, including actions to address many recommendations made by GAO. TSA has focused its efforts on, among other things, more efficiently allocating, deploying, and managing the Transportation Security Officer (TSO) workforce—formerly known as screeners; strengthening screening procedures; developing and deploying more effective and efficient screening technologies; strengthening domestic air cargo security; and developing a government-operated watch-list matching program, known as Secure Flight. Specifically, TSA developed and implemented a Staffing Allocation Model to determine TSO staffing levels at airports that reflect current operating conditions, and proposed and implemented modifications to passenger checkpoint screening procedures based on risk information. However, GAO reported that some assumptions in TSA’s Staffing Allocation Model did not accurately reflect airport operating conditions, and that TSA could improve its process for evaluating the effectiveness of proposed procedural changes. In response, TSA developed a plan to review Staffing Allocation Model assumptions and took steps to strengthen its evaluation of proposed procedural changes. TSA has also explored new passenger checkpoint screening technologies to better detect explosives and other threats and has taken steps to strengthen air cargo security, including conducting vulnerability assessments at airports and compliance inspections of air carriers. However, TSA has not developed an inspection plan that includes performance goals and measures to determine whether air carriers transporting cargo into the United States are complying with security requirements. In response to GAO’s recommendations, TSA has since established a working group to strengthen its compliance activities. Finally, TSA has instilled more discipline and rigor into Secure Flight’s systems development, including preparing key documentation and strengthening privacy protections. While these efforts should be commended, GAO has identified several areas that should be addressed to further strengthen aviation security. For example, TSA has made limited progress in developing and deploying checkpoint technologies due to planning and management challenges. Further, TSA continues to face some program management challenges in developing Secure Flight. Specifically, TSA has not (1) developed program cost and schedule estimates consistent with best practices; (2) fully implemented its risk management plan; (3) planned for system end-to-end testing in test plans; and (4) ensured that information security requirements are fully implemented.
If these challenges are not addressed effectively, the risk increases that the program will not be completed on schedule and within estimated costs, and the chances that it will perform as intended are diminished. DHS and TSA lack performance measures to fully evaluate the effectiveness of current processes for passengers who apply for redress due to inconveniences experienced during the check-in and screening process. Without such measures, DHS and TSA lack a sound basis for monitoring the effectiveness of the redress process.
Following the 2000 national elections, we performed a comprehensive series of reviews covering our nation’s election process, in which we identified a number of challenges. These reviews culminated in a capping report that summarized this work and provided the Congress with a framework for considering options for election administration reform. Our reports and framework were among the resources that the Congress drew on in enacting the Help America Vote Act (HAVA) of 2002, which provided guidance for fundamental election administration reform. Among other things, the act authorizes $3.86 billion in funding over several fiscal years for programs to replace punch card and mechanical lever voting equipment, improve election administration, improve accessibility, train poll workers, and perform research and pilot studies. It also created the Election Assistance Commission (EAC) to oversee the election administration reform process. Since the act’s passage, a number of voting jurisdictions have replaced their older voting equipment with direct recording electronic systems. At the same time, concerns have been raised about the use of these systems; some critics have suggested, for example, that the security associated with the systems is not sufficient to ensure the integrity of the election process. In January 2004, the EAC began operations. On May 5, 2004, it held a public hearing to receive information on the use, security, and reliability of electronic voting devices. The hearing included panels of technology and standards experts, vendors of voting systems, state and local election administrators, and citizen advocacy groups. A major topic of the hearing was the security and reliability of touchscreen electronic voting systems. At the request of congressional leaders, committees, and members, we conducted an extensive body of work in the wake of the 2000 elections, which culminated in seven reports addressing a range of election-related topics. First, we reviewed the constitutional framework for the administration of elections, as well as major federal statutes enacted in this area. We reported that the constitutional framework for elections includes both state and federal roles. States are responsible for the administration of both their own elections and federal elections, but the Congress has enacted laws in several major areas of the voting process, including the timing of federal elections, voter registration, and absentee voting requirements. Congressional authority to legislate in this area derives from various constitutional sources, depending upon the type of election. For federal elections, the Congress has constitutional authority over both congressional and presidential elections. Second, we examined voting assistance for military and overseas voters. We reported that although tools are available for such voters, many potential voters were unaware of them, and many military and overseas voters believed it was challenging to understand and comply with state requirements and local procedures for absentee voting. In addition, although information was not readily available on the precise number of military and overseas absentee votes that were disqualified in the 2000 general election and the reasons for disqualification, we found through a national telephone survey that almost two-thirds of the disqualified absentee ballots were rejected because of lateness or errors in completion of the envelope or form accompanying the ballot.
We recommended that the Secretaries of Defense and State improve (1) the clarity and completeness of service guidance, (2) voter education and outreach programs, (3) oversight and evaluation of voting assistance efforts, and (4) sharing of best practices. The Departments of Defense and State agreed with our overall findings and recommendations, and as of May 2004, the recommendations had largely been implemented. Third, we investigated whether minorities and disadvantaged voters were more likely to have their votes not counted because the voting method they used was less reliable than that of affluent white voters. According to our results, the state in which counties were located had more effect on the number of uncounted presidential votes than did counties’ demographic characteristics or voting method. State differences accounted for 26 percent of the total variation in uncounted presidential votes across counties. County demographic characteristics accounted for 16 percent of the variation (counties with higher percentages of minority residents tended to have higher percentages of uncounted presidential votes, while counties with higher percentages of younger and more educated residents tended to have lower percentages of uncounted presidential votes), and voting equipment accounted for 2 percent of the variation. Fourth, in a review of voting accessibility for voters with disabilities, we found that all states had provisions addressing voting by people with disabilities, but these provisions varied greatly. Federal law requires that voters with disabilities have access to polling places for federal elections, with some exceptions. All states provided for one or more alternative voting methods or accommodations intended to facilitate voting by people with disabilities. In addition, states and localities had made several efforts to improve voting accessibility for voters with disabilities, such as modifying polling places, acquiring new voting equipment, and expanding voting options, but state and county election officials surveyed cited various challenges to improving access. We concluded that given the limited availability of accessible polling places, other options that could allow more voters with disabilities to vote at a polling place on election day include reassigning them to other, more accessible polling places or creating accessible superprecincts in which voters from more than one precinct could all vote in the same building. Fifth, we reported on the status and use of voting equipment standards developed by the Federal Election Commission (FEC). These standards define minimum functional and performance requirements, as well as minimum life-cycle management processes for voting equipment developers to follow, such as quality assurance. At the time of our review, no federal agency had explicit statutory responsibility for developing the standards; however, the FEC developed voluntary standards for computer-based systems in 1990, and the Congress provided funding for this effort. Similarly, no federal agency was responsible for testing voting systems against the federal standards. Instead, the National Association of State Election Directors accredited independent test authorities to test voting systems against the standards. We noted, however, that the FEC standards had not been updated since 1990 and were consequently out of date. 
We suggested that the Congress consider assigning explicit federal authority, responsibility, and accountability for the standards, including their proactive and continuous update and maintenance; we also suggested that the Congress consider what, if any, federal role is appropriate regarding implementation of the standards, including the accreditation of independent test authorities and the qualification of voting systems. Both of these matters were addressed in the Help America Vote Act of 2002, which, among other things, set up the EAC to take responsibility for voluntary voting system guidelines. We also made recommendations to the FEC aimed at improving the guidelines. Before the EAC became operational, the FEC continued to update and maintain the guidelines, issuing a new version in 2002.

Sixth, we issued a report on election activities and challenges across the nation. In this report, we described the operations and challenges associated with each stage of the election process, including voter registration; absentee and early voting; election day administration; and vote counts, certification, and recounts. The report also provided analyses on issues associated with voting systems that were used in the November 2000 elections and the potential use of the Internet for voting. Among other things, we pointed out that each of the major stages of an election depends on the effective interaction of people (the election officials and voters), processes (or internal controls), and technology (registration systems, election management systems, and voting systems). We also enumerated the challenges facing election officials at all stages of the election process.

Finally, we issued a capping report that included a framework for evaluating election administration reform proposals. Among other things, we observed that the constitutional and operational division of federal and state authority to conduct elections had resulted in great variability in the ways that elections are administered in the United States. We concluded that given the diversity and decentralized nature of election administration, careful consideration needed to be given to the degree of flexibility and the planned time frames for implementing new initiatives. We also concluded that in order for election administration reform to be effective, reform proposals must address all major parts of our election system—its people, processes, and technology—which are interconnected and significantly affect the election process. And finally, we provided an analytical framework for the Congress to consider in deciding on changes to the overall election process.

Enacted by the Congress in October 2002, the Help America Vote Act of 2002 addressed a range of election issues, including the lack of explicit federal (statutory) responsibility for developing and maintaining standards for electronic voting systems and for testing voting systems against standards. With the far-reaching goal of improving the election process in every state, the act affects nearly every aspect of the voting process, from voting technology to provisional ballots, and from voter registration to poll worker training.
In particular, the act established a program to provide funds to states to replace punch card and lever machine voting equipment, established the EAC to assist in the administration of federal elections and provide assistance with the administration of certain federal election laws and programs, and established minimum election administration standards for the states and units of local government that are responsible for the administration of federal elections. In January 2004, the Congressional Research Service reported that disbursements to states for the replacement of older equipment and election administration improvements totaled $649.5 million.

The act specifically tasked the EAC to serve as a national clearinghouse and resource for compiling election information and reviewing election procedures; for example, it is to conduct periodic studies of election administration issues to promote methods of voting and administration that are most convenient, accessible, and easy to use for all voters. Other examples of EAC responsibilities include
● developing and adopting voluntary voting system guidelines, and maintaining information on the experiences of states in implementing the guidelines and operating voting systems;
● testing, certifying, decertifying, and recertifying voting system hardware and software through accredited laboratories;
● making payments to states to help them improve elections in the areas of voting systems standards, provisional voting and voting information requirements, and computerized statewide voter registration lists; and
● making grants for research on voting technology improvements.

The act also established the Technical Guidelines Development Committee, which reports to the EAC and is to make recommendations on voluntary voting system guidelines. The National Institute of Standards and Technology (NIST) will provide technical support to the development committee, and the NIST Director will serve as its chairman. In December 2003, the EAC commissioners were appointed, and the EAC began operations in January 2004. According to the commission chairman, the EAC's fiscal year 2004 budget is $1.2 million, and its near-term plans focus on complying with requirements established in HAVA, including issuing a report to the Congress on the status of election administration reform. The commission's longer term plans include a focus on developing best practices that can be shared across the election community, updating the voluntary voting system guidelines, and improving the process for independent testing of voting systems. Commissioners also told us that current operations are constrained by vacancies in key staff positions, including the Executive Director, General Counsel, and Inspector General.

In the United States today, most votes are cast and counted by one of two types of electronic voting systems: optical scan and direct recording electronic (DRE). For a small minority of registered voters (about 1 percent in the 2000 elections), votes are cast and counted manually on paper ballots. Two older voting technologies were also used in the 2000 elections: punch card equipment (used by 31 percent of registered voters in 2000) and mechanical lever voting machines (used by 17 percent of voters in 2000). These equipment types are being replaced as required by provisions established in HAVA. Optical scan voting systems use electronic technology to tabulate paper ballots.
Although optical scan technology has been in use for decades for such tasks as scoring standardized tests, it was not applied to voting until the 1980s. In 2000, about 31 percent of registered voters voted on optical scan systems. For voting, an optical scan system is made up of computer-readable ballots, appropriate marking devices, privacy booths, and a computerized tabulation device. The ballot, which can be of various sizes, lists the names of the candidates and the issues. Voters record their choices using an appropriate writing instrument to fill in boxes or ovals, or to complete an arrow next to the candidate's name or the issue. The ballot includes a space for write-ins to be placed directly on the ballot.

Optical scan ballots are tabulated by optical-mark-recognition equipment (see fig. 1), which counts the ballots by sensing or reading the marks on the ballot. Ballots can be counted at the polling place—this is referred to as precinct-count optical scan—or at a central location. If ballots are counted at the polling place, voters or election officials put the ballots into the tabulation equipment, which tallies the votes; these tallies can be captured in removable storage media that are transported to a central tally location, or they can be electronically transmitted from the polling place to the central tally location. If ballots are centrally counted, voters drop ballots into sealed boxes, and election officials transfer the sealed boxes to the central location after the polls close, where election officials run the ballots through the tabulation equipment.

Software instructs the tabulation equipment to assign each vote (i.e., to assign valid marks on the ballot to the proper candidate or issue). In addition to identifying the particular contests and candidates, the software can be configured to capture, for example, straight party voting and vote-for-no-more-than-N contests. Precinct-based optical scanners can also be programmed to detect overvotes (where the voter votes for two candidates for one office, for example, invalidating the vote) and undervotes (where the voter does not vote for all contests or issues on the ballot) and to take some action in response (rejecting the ballot, for instance). In addition, optical scan systems often use vote-tally software to tally the vote totals from one or more vote tabulation devices. If election officials program precinct-based optical scan systems to detect and reject overvotes and undervotes, voters can fix their mistakes before leaving the polling place. However, if voters are unwilling or unable to correct their ballots, a poll worker can manually override the program and accept the ballot, even though it has been overvoted or undervoted. If ballots are tabulated centrally, voters do not have the opportunity to correct mistakes that may have been made.
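To make the precinct-count logic just described concrete, the following sketch shows how a tabulator might classify and tally a marked ballot. This is a minimal illustration of the concept, not any vendor's code; the names (`Contest`, `tabulate_ballot`) and the rules shown are our own simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Contest:
    name: str
    candidates: list[str]
    votes_allowed: int = 1  # supports vote-for-no-more-than-N contests

def check_ballot(contests, marks):
    """Classify each contest on a marked ballot as valid, an overvote,
    or an undervote. `marks` maps contest names to the candidates the
    scanner detected as marked."""
    status = {}
    for contest in contests:
        marked = marks.get(contest.name, [])
        if len(marked) > contest.votes_allowed:
            status[contest.name] = "overvote"
        elif len(marked) < contest.votes_allowed:
            status[contest.name] = "undervote"
        else:
            status[contest.name] = "valid"
    return status

def tabulate_ballot(contests, marks, tallies):
    """Add one ballot to the running tallies. Overvoted contests are not
    counted; undervoted contests still count whatever marks were made."""
    status = check_ballot(contests, marks)
    for contest in contests:
        if status[contest.name] != "overvote":
            for candidate in marks.get(contest.name, []):
                tallies[contest.name][candidate] += 1
    return status

# A precinct-count unit could use the returned status to reject this
# overvoted ballot so the voter can correct it; a central-count unit
# would have no voter present to ask.
governor = Contest("Governor", ["Smith", "Jones"])
tallies = {"Governor": {"Smith": 0, "Jones": 0}}
print(tabulate_ballot([governor], {"Governor": ["Smith", "Jones"]}, tallies))
```

A central-count system would run the same tally but skip the rejection step, which is why, as noted above, voters in centrally counted jurisdictions cannot correct their mistakes.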
First introduced in the 1970s, DREs capture votes electronically, without the use of paper ballots. In the 2000 election, about 12 percent of voters used this type of technology. DREs come in two basic types, pushbutton or touchscreen, the pushbutton being the older technology; during the 2000 elections, pushbutton DREs were the more prevalent of the two types. The two types vary considerably in appearance (see fig. 2). Pushbutton DREs are larger and heavier than touchscreens. Pushbutton and touchscreen units also differ significantly in the way they present ballots to the voter. With the pushbutton, all ballot information is presented on a single "full-face" ballot. For example, a ballot may have 50 buttons on a 3 by 3 foot ballot, with a candidate or issue next to each button. In contrast, touchscreen DREs display the ballot information on an electronic display screen. For both pushbutton and touchscreen types, the ballot information is programmed onto an electronic storage medium, which is then uploaded to the machine. For touchscreens, ballot information can be displayed in color and can incorporate pictures of the candidates. Because the ballot space on a touchscreen is much smaller than on a pushbutton machine, voters who use touchscreens must page through the ballot information. Both touchscreen and pushbutton DREs can accommodate multilingual ballots.

Despite the differences, the two types have some similarities, such as how the voter interacts with the voting equipment. For pushbuttons, voters press a button next to the candidate or issue, which then lights up to indicate the selection. Similarly, voters using touchscreens make their selections by touching the screen next to the candidate or issue, which is then highlighted. When voters are finished making their selections on a touchscreen or a pushbutton DRE, they cast their votes by pressing a final "vote" button or screen. Until they hit this final button or screen, voters can change their selections. Both types allow voters to write in candidates. While most DREs allow voters to type write-ins on a keyboard, some pushbutton types require voters to write the name on paper tape that is part of the device. Although DREs do not use paper ballots, they do retain permanent electronic images of all the ballots, which can be stored on various media, including internal hard disk drives, flash cards, or memory cartridges. According to vendors, these ballot images, which can be printed, can be used for auditing and recounts.

Some of the newer DREs use smart card technology as a security feature. Smart cards are plastic devices—about the size of a credit card—that use integrated circuit chips to store and process data, much like a computer. Smart cards are generally used as a means to open polls and to authorize voter access to ballots. For instance, smart cards on some DREs store program data on the election and are used to help set up the equipment; during setup, election workers verify that the card received is for the proper election. Other DREs are programmed to automatically activate when the voter inserts a smart card; the card brings up the correct ballot onto the screen. In general, the interface with the voter is very similar to that of an automatic teller machine.

Like optical scan devices, DREs require the use of software to program the various ballot styles and tally the votes, which is generally done through the use of memory cartridges or other media. The software is used to generate ballots for each precinct within the voting jurisdiction, which includes defining the ballot layout, identifying the contests in each precinct, and assigning candidates to contests. The software is also used to configure any special options, such as straight party voting and vote-for-no-more-than-N contests. In addition, for pushbutton types, the software assigns the buttons to particular candidates and, for touchscreens, the software defines the size and location on the screen where the voter makes the selection. Vote-tally software is often used to tally the vote totals from one or more units.
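The ballot-definition work just described can be thought of as filling in a small data structure for each precinct. The sketch below is a simplified illustration under our own naming assumptions (`ContestDef`, `BallotStyle`); actual election management software is far more elaborate.

```python
from dataclasses import dataclass, field

@dataclass
class ContestDef:
    office: str
    candidates: list[str]
    votes_allowed: int = 1      # vote-for-no-more-than-N
    allow_write_in: bool = True

@dataclass
class BallotStyle:
    precinct: str
    language: str = "en"
    straight_party: bool = False  # optional straight-party voting
    contests: list[ContestDef] = field(default_factory=list)

# One style would be defined per precinct (and per ballot language), then
# loaded onto a DRE's storage medium or sent to the printer for optical
# scan ballots.
style = BallotStyle(
    precinct="Precinct 12",
    language="es",
    contests=[
        ContestDef("Governor", ["Smith", "Jones"]),
        ContestDef("School Board", ["Lee", "Chan", "Ortiz"], votes_allowed=2),
    ],
)
for contest in style.contests:
    print(f"{contest.office}: vote for no more than {contest.votes_allowed}")
```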
DREs offer various configurations for tallying the votes. Some contain removable storage media that can be taken from the voting device and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. DREs are designed not to allow overvotes; for example, if a voter selects a second choice in a two-way race, the first choice is deselected. In addition to this standard feature, different types offer a variety of options, including many aimed at voters with disabilities, that jurisdictions may choose to purchase. In our 2001 work, we cited the following features as being offered in some models of DRE:
● A "no-vote" option. This option helps avoid unintentional undervotes by providing the voter with the option to select "no vote (or abstain)" on the display screen if the voter does not want to vote on a particular contest or issue.
● A "review" feature. This feature requires voters to review each page of the ballot before pressing the button to cast the vote.
● Visual enhancements. Visual enhancements include color highlighting of ballot choices, candidate pictures, etc.
● Accommodations for voters with disabilities. Examples of options for voters who are blind include Braille keyboards and audio interfaces. At least one vendor reported that its DRE accommodates voters with neurological disabilities by offering head movement switches and "sip and puff" plug-ins. Another option is voice recognition capability, which allows voters to make selections orally.
● An option to recover spoiled ballots. This feature allows voters to recast their votes after their original ballots are cast. For this option, every DRE at the poll site would be connected to a local area network. A poll official would void the original "spoiled" ballot through the administrative workstation that is also connected to the local area network. The voter could then cast another ballot.
● An option to provide printed receipts. In this case, the voter would receive a paper printout or ballot when the vote is cast. This feature is intended to provide voters and/or election officials with an opportunity to check what is printed against what is recorded and displayed. It is envisioned that procedures would be in place to retrieve the paper receipts from the voters so that they could not be used for vote selling. Some DREs also have an infrared "presence sensor" that is used to control the receipt printer in the event the voter is allowed to keep the paper receipt; if the voter leaves without taking the receipt, the receipt is pulled back into the printer.

As older voting equipment has been replaced with newer electronic voting systems over the last 2 years, the debate has shifted from hanging chads and butterfly ballots to vulnerabilities associated with DREs. Problems with these devices in recent elections have arisen in various states. For example:
● Six DRE units used in two North Carolina counties lost 436 ballots cast in early voting for the 2002 general election because of a software problem, according to a February 9, 2004, report in Wired News. The manufacturer said that problems with the firmware of its touchscreen machines led to the lost ballots. The state was trying out the machines in early voting to determine if it wanted to switch from the optical scan machines it already owned to the new touchscreen systems.
● According to a January 2004 report in Wired News, blank ballots were recorded for 134 voters who signed in and cast ballots in Broward County, Florida.
These votes represented about 1.3 percent of the more than 10,000 people who voted in the race for a state house representative.
● USA Today reported that four California counties suffered from problems with DREs in a March 2004 election, including miscounted ballots, delayed polling place openings, and incorrect ballots. In San Diego County, about one-third of the county's polling places did not open on time because of battery problems caused by a faulty power switch.

Additionally, questions are being raised about the security of DREs. Critics suggest that their use could compromise the integrity of the election process and that these devices need auditing mechanisms, such as receipt printers that would provide a paper audit trail and allow voters to confirm their choices. Among these critics are computer scientists, citizens groups, and legislators. For example, computer scientists from Johns Hopkins and Rice Universities released a security analysis of software from a DRE of a major vendor, concluding that the code had serious security flaws that could permit tampering. Other computer scientists, while agreeing that the code contained security flaws, criticized the study for not recognizing how standard election procedures can mitigate these weaknesses. Following the Johns Hopkins and Rice study, Maryland contracted with both SAIC and RABA Technologies to study the same DRE equipment. The SAIC study found that the equipment, as implemented in Maryland, poses a security risk. Similarly, RABA identified vulnerabilities associated with the equipment. An earlier Caltech/MIT study noted that despite security strengths of the election process in the United States, current trends in electronic voting are weakening those strengths and introducing risks; according to this study, properly designed and implemented electronic voting systems could actually improve, rather than diminish, security.

Citizen advocacy groups are also taking action. For example, according to an April 21, 2004, press release from the Campaign for Verifiable Voting in Maryland, the group filed a lawsuit against the Maryland State Board of Elections to force election officials to decertify the DRE machines used in Maryland until the manufacturer remedies security vulnerabilities and institutes a paper audit trail.

Legislators and other officials are also responding to the issues. In at least 20 states, according to the Associated Press, legislation has been introduced requiring a paper record of every vote cast. Following the problems in California described above, the California Secretary of State banned the use of more than 14,000 touchscreen DREs and conditionally decertified 28,000 others. According to a New York Times article, he also recommended that the state Attorney General consider taking civil and criminal action against the manufacturer for "fraudulent actions." The decision followed the recommendations of the state's Voting Systems and Procedures Panel, which urged the Secretary of State to prohibit the four counties that experienced difficulties from using their touchscreen units in the November 2004 election, according to an Associated Press article. The panel reported that the manufacturer did not obtain federal approval of the model used in the four affected counties and installed software that had not been approved by the Secretary of State. It also noted that problems with the systems prevented an unspecified number of voters from casting ballots.
In addition, two California state senators have drafted a bill to prohibit the use of any DRE voting system without a paper trail in the 2004 general election; they planned to introduce the bill if the Secretary of State did not act.

Electronic voting systems represent one of many important components in the overall election process. This process is made up of several stages, with each stage consisting of key people, process, and technology variables. Many levels of government are involved, including over 10,000 jurisdictions with widely varying characteristics.

In the U.S. election process, all levels of government share responsibility. At the federal level, the Congress has authority under the Constitution to regulate presidential and congressional elections and to enforce prohibitions against specific discriminatory practices in all elections—federal, state, and local. It has passed legislation affecting the administration of state elections that addresses voter registration, absentee voting, accessibility provisions for the elderly and handicapped, and prohibitions against discriminatory practices. The Congress does not have general constitutional authority over the administration of state and local elections. At the state level, the states are responsible for the administration of both their own elections and federal elections. States regulate the election process, including, for example, adoption of voluntary voting system guidelines, testing of voting systems, ballot access, registration procedures, absentee voting requirements, establishment of voting places, provision of election day workers, and counting and certification of the vote. In fact, the U.S. election process can be seen as an assemblage of 51 somewhat distinct election systems—those of the 50 states and the District of Columbia.

Further, although election policy and procedures are legislated primarily at the state level, states typically have decentralized this process so that the details of administering elections are carried out at the city or county levels, and voting is done at the local level. As we reported in 2001, local election jurisdictions number more than 10,000, and their size varies enormously—from a rural county with about 200 voters to a large urban county such as Los Angeles County, where the total number of registered voters for the 2000 elections exceeded the registered voter totals in 41 states. The size of a voting jurisdiction significantly affects the complexity of planning and conducting the election, as well as the method used to cast and count votes. In our 2001 work, we quoted the chief election official in a very large voting jurisdiction: "the logistics of preparing and delivering voting supplies and equipment to the county's 4,963 voting precincts, recruiting and training 25,000 election day poll workers, preparing and mailing tens of thousands of absentee ballot packets daily and later signature verifying, opening and sorting 521,180 absentee ballots, and finally, counting 2.7 million ballots is extremely challenging." The specific nature of these challenges is affected by the voting technology that the jurisdiction uses. For example, jurisdictions using DRE systems may need to manage the electronic transmission of votes or vote counts; jurisdictions using optical scan technology need to manage the paper ballots that this technology reads and tabulates.
Jurisdictions using optical scan technology may also need to manage electronic transmissions if votes are counted at various locations and totals are electronically transmitted to a central tally point. Another variable is the diversity of languages within a jurisdiction. In November 2000, Los Angeles County, for instance, provided ballots in Spanish, Chinese, Korean, Vietnamese, Japanese, and Tagalog, as well as English. No matter what technology is used, jurisdictions may need to provide ballot translations; however, the logistics of printing paper materials in a range of languages, as would be required for optical scan technology, is different from the logistics of programming translations into DRE units.

Some states do have statewide election systems so that every voting jurisdiction uses similar processes and equipment, but others do not. For instance, we reported in 2001 that in Pennsylvania, local election officials told us that there were 67 counties and consequently 67 different ways of handling elections. In some states, state law prescribes the use of common voting technology throughout the state, while in other states local election officials generally choose the voting technology to be used in their precincts, often from a list of state-certified options.

Whatever the jurisdiction and its specific characteristics, administering an election is a year-round activity, involving varying sets of people to carry out processes at different stages. These stages generally consist of the following:
● Voter registration. Among other things, local election officials register eligible voters and maintain voter registration lists, including updates to registrants' information and deletions of the names of registrants who are no longer eligible to vote.
● Absentee and early voting. This type of voting allows eligible persons to vote in person or by mail before election day. Election officials must design ballots and other systems to permit this type of voting, as well as educate voters on how to vote by these methods.
● The conduct of an election. Election administration includes preparation before election day, such as local election officials arranging for polling places, recruiting and training poll workers, designing ballots, and preparing and testing voting equipment for use in casting and tabulating votes, as well as election day activities, such as opening and closing polling places and assisting voters to cast votes.
● Vote counting. At this stage, election officials tabulate the cast ballots; determine whether and how to count ballots that cannot be read by the vote counting equipment; certify the final vote counts; and perform recounts, if required.

As shown in figure 3, each stage of an election involves people, processes, and technology. Electronic voting systems are primarily involved in the last two stages, during which votes are cast and counted. However, the type of system that a jurisdiction uses may affect earlier stages. For example, in a jurisdiction that uses optical scan systems, paper ballots like those used on election day may be mailed in the absentee voting stage. On the other hand, a jurisdiction that uses DRE technology would have to make a different provision for absentee voting. Although the current debate concerning electronic voting systems primarily relates to security, other factors affecting election administration are also relevant in evaluating these systems.
Ensuring the security of elections is essential to public confidence and election integrity, but officials choosing a voting system must also consider other performance factors, such as accuracy, ease of use, and efficiency, as well as cost. Accuracy refers to how frequently the equipment completely and correctly records and counts votes; ease of use refers to how understandable and accessible the equipment is to a diverse group of voters and to election workers; and efficiency refers to how quickly a given vote can be cast and counted. Finally, equipment's life-cycle cost versus benefits is an overriding practical consideration.

In conducting elections, officials must be able to assure the public that the confidentiality of the ballot is maintained and fraud prevented. In providing this assurance, the people, processes, and technology involved in the election system all play a role: the security procedures and practices that jurisdictions implement, the security awareness and training of the election workers who execute them, and the security features provided by the systems. Election officials are responsible for establishing and managing privacy and security procedures to protect against threats to the integrity of elections. These security threats include potential modification or loss of electronic voting data; loss, theft, or modification of physical ballots; and unauthorized access to software and electronic equipment. Physical access controls are required for securing voting equipment, vote tabulation equipment, and ballots; software access controls (such as passwords and firewalls) are required to limit the number of people who can access and operate voting devices, election management software, and vote tabulation software. In addition, election processes are designed to ensure privacy by protecting the confidentiality of the vote: physical screens are used around voting stations, and poll workers are present to prevent voters from being watched or coerced while voting.

Examples of security controls that are embedded in the technology include the following:
● Access controls. Election workers may have to enter user names and passwords to access voting systems and software, so that only authorized users can make modifications. On election day, voters may need to provide a smart card or token to DRE units.
● Encryption. To protect the confidentiality of the vote, DREs use encryption technology to scramble the votes cast so that the votes are not stored in the same order in which they were cast. In addition, if vote totals are electronically transmitted, encryption is used to protect the vote count from compromise by scrambling it before it is transmitted over telephone wires and unscrambling it once it is received.
● Physical controls. Hardware locks and seals protect against unauthorized access to the voting device once it has been prepared for the election (e.g., once the vote counter is reset, the unit is tested, and ballots are prepared).
● Audit trails. Audit trails provide documentary evidence to recreate election day activity, such as the number of ballots cast (by each ballot configuration or type) and candidate vote totals for each contest. Audit trails are used for verification purposes, particularly in the event that a recount is demanded. With optical scan systems, the paper ballots provide an audit trail. Since not all DREs provide a paper record of the votes, election officials may rely on the information that is collected by the DRE's electronic memory. Part of the debate over the assurance of integrity that DREs provide revolves around the reliability of this information.
● Redundant storage. Redundant storage media in DREs provide backup storage of votes cast or vote counts to facilitate recovery of voter data in the event of power or system failure.
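Two of these embedded controls, the scrambled storage order of ballot images and a tamper-evident audit trail, can be illustrated with a short sketch. This is a toy model built on our own assumptions, not a description of how any certified DRE implements these controls.

```python
import hashlib
import json
import secrets

class BallotStore:
    """Toy model of two DRE storage controls: ballot images are stored in
    a randomized order (so a stored image cannot be matched to the order
    in which voters signed in), and every event is appended to a
    hash-chained audit log that makes after-the-fact edits detectable."""

    def __init__(self):
        self.images = []      # permanent electronic ballot images
        self.backup = []      # redundant copy, ideally on separate media
        self.audit_log = []
        self._last_hash = "0" * 64

    def _log(self, event):
        # Each entry embeds the hash of the previous entry, so no entry
        # can later be altered or removed without breaking the chain.
        entry = {"event": event, "prev": self._last_hash}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode())
        self._last_hash = digest.hexdigest()
        self.audit_log.append((entry, self._last_hash))

    def cast(self, ballot_image):
        position = secrets.randbelow(len(self.images) + 1)
        self.images.insert(position, ballot_image)        # scrambled order
        self.backup.insert(position, dict(ballot_image))  # redundant storage
        self._log("ballot cast")  # the event is logged, never the choices

store = BallotStore()
store.cast({"Governor": "Smith"})
store.cast({"Governor": "Jones"})
print(len(store.images), "ballot images;", len(store.audit_log), "audit entries")
```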
The particular features offered by DRE and optical scan equipment differ by vendor make and model as well as the nature of the technology. DREs generally offer most of the features, but there is debate about the adequacy of the access controls and the audit trails that this technology provides. If DREs use tokens or smart cards to authenticate voters, these tokens must also be physically protected and may require software security protection. For optical scan systems, redundant storage media may not be required, but software and physical access controls may be associated with tabulation equipment and software, and if vote tallies are transmitted electronically, encryption may also be used. In addition, since these systems use paper ballots, the audit trail is clearer, but physical access to ballots after they are cast must be controlled. The physical and process controls used to protect paper ballots include ballot boxes as well as the procedures implemented to protect the boxes if they need to be transported, to tabulate ballots, and to store counted ballots for later auditing and possible recounts.

Ensuring that votes are accurately recorded and tallied is an essential attribute of any voting equipment. Without such assurance, both voter confidence in the election and the integrity and legitimacy of the outcome of the election are at risk. The importance of an accurate vote count increases with the closeness of the election. Both optical scan and DRE systems are claimed to be highly accurate. In 2001, our vendor survey showed virtually no differences in vendor representations of the accuracy of DRE and optical scan voting equipment, measured in terms of how accurately the equipment counted recorded votes. Vendors of optical scan equipment reported accuracy rates of between 99 and 100 percent, with vendors of DREs reporting 100 percent accuracy.

As we reported in 2001, although 96 percent of local election jurisdictions were satisfied with the performance of their voting equipment during the 2000 election, according to our mail survey, only about 48 percent of jurisdictions nationwide collected data on the accuracy of their voting equipment for the election. Further, it was unclear whether jurisdictions actually had meaningful performance data. Of those local election jurisdictions that we visited that stated that their voting equipment was 100 percent accurate, none was able to provide actual data to substantiate these statements. Similarly, according to our mail survey, only about 51 percent of jurisdictions collected data on undervotes, and about 47 percent collected data on overvotes for the November 2000 election.

Although voting equipment may be designed to count votes as recorded with 100 percent accuracy, how frequently the equipment counts votes as intended by voters is a function not only of equipment design, but also of the interaction of people and processes. These people and process factors include whether, for example,
● technicians have followed proper procedures in testing and maintaining the system,
● voters have followed proper procedures when using the system,
● election officials have provided voters with understandable procedures to follow, and
● poll workers have properly instructed and guided voters.
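For jurisdictions that do collect the kind of accuracy data discussed above, the basic arithmetic is straightforward. The sketch below computes a residual vote rate, one common accuracy measure, from invented figures; it is illustrative only.

```python
def residual_vote_rate(ballots_cast, valid_votes_counted):
    """Residual votes (overvotes plus blanks) in a vote-for-one contest,
    as a share of ballots cast: one common measure of how completely the
    equipment and the voters together captured voter intent."""
    return (ballots_cast - valid_votes_counted) / ballots_cast

# Invented figures for a single contest in one jurisdiction:
ballots_cast = 12_408  # ballots run through the tabulator
valid_votes = 12_117   # valid votes recorded in the contest
print(f"residual vote rate: {residual_vote_rate(ballots_cast, valid_votes):.2%}")
```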
As indicated earlier, various kinds of errors can lead to voter intentions not being captured when ballots are counted. Avoiding or compensating for these errors may involve solutions based on technology, processes, or both. For example, DREs are designed to prevent overvoting; however, overvoting can also be prevented by a procedure to check optical scan ballots for overvotes before the voter leaves the polls, which can be accomplished by a precinct-based tabulator or by other means.

Like accuracy, ease of use (or user friendliness) largely depends on how voters interact with the voting system, physically and intellectually. This interaction, commonly referred to as the human/machine interface, is a function of the system design, the processes established for its use, and user education and training. Among other things, how well jurisdictions design ballots and educate voters on the use of voting equipment affects how easy voters find the system to use. In the 2000 elections, for example, ballots for some optical scan systems were printed on both sides, so that some voters failed to vote one of the sides. This risk could be mitigated by clear ballot design and by explicit instructions, whether provided by poll workers or voter education materials. Thus, ease of use affects accuracy (i.e., whether the voter's intent is captured), and it can also affect the efficiency of the voting process (confused voters take longer to vote). Accessibility to diverse types of voters, including those with disabilities, is a further aspect of ease of use. As described earlier, DREs offer more options for voters with disabilities, as they can be equipped with a number of aids for such voters. However, these options increase the expense of the units, and not all jurisdictions are likely to opt for them. Instead of technological solutions, jurisdictions may establish special processes for voters with disabilities, such as allowing them to be assisted in casting their votes; this workaround can, however, affect the confidentiality of the vote.

Efficiency—the speed of casting and tallying votes—is an important consideration for jurisdictions not only because it influences voter waiting time and thus potentially voter turnout, but also because it affects the number of voting systems that a jurisdiction needs to acquire and maintain, and thus the cost. Efficiency can be measured in terms of the number of people that the equipment can accommodate within a given time, how quickly the equipment can count votes, and the length of time that voters need to wait. With DREs, the vote casting and counting functions are virtually inseparable, because the ballot is embedded in the voting equipment. Accordingly, for DREs efficiency is generally measured in terms of the number of voters that each machine accommodates on election day. In 2001, vendors reported that the number of voters accommodated per DRE ranges from 200 to 1,000 voters per system per election day.
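Vendor capacity figures like these translate directly into equipment planning. The following sketch, using invented jurisdiction figures and a conservative reading of the reported capacity range, shows the kind of estimate involved; it is an illustration, not a planning tool.

```python
import math

def dre_units_needed(registered_voters, expected_turnout, voters_per_unit):
    """Estimate the DRE units a jurisdiction needs on election day, given
    a vendor's claimed daily capacity per unit."""
    expected_voters = registered_voters * expected_turnout
    return math.ceil(expected_voters / voters_per_unit)

# Invented inputs: 40,000 registered voters, 65 percent expected turnout,
# and 300 voters per unit per day, near the low end of the 200-to-1,000
# range vendors reported in 2001.
print(dre_units_needed(40_000, 0.65, 300), "units")  # prints: 87 units
```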
With optical scan systems, in contrast, vote casting and counting are separate activities, since the ballot is a separate medium—a sheet of paper or a computer card—which once completed is put into the vote tabulator. As a result, the efficiency of optical scan equipment is generally measured in terms of the speed of count (i.e., how quickly the equipment counts the votes on completed ballots). Complicating this measurement is the fact that efficiency differs depending on whether central-count or precinct-based tabulators are used. Central-count equipment generally counts more ballots per hour because it is used to count the ballots for an entire jurisdiction, rather than an individual polling site. For central-count optical scan equipment, 10 vendors reported speed of count ranges from 9,000 to 24,000 ballots per hour. For precinct-count optical scan equipment, vendors generally did not provide specific speed of count data, but they stated that one machine is generally used per polling site.

Generalizations about the effect of technology on wait times are difficult. In 2001, our mail survey found that 84 percent of jurisdictions nationwide were satisfied with the amount of voter wait time at the polling place during the November 2000 election, but that 13 percent of jurisdictions considered long lines at the polling places to be a major problem. However, we estimated that only 10 percent of jurisdictions nationwide collected information on the average amount of time that it took voters to vote. We were told by some jurisdictions that the length of time voters must wait is affected by ballots that include many races and issues. Some jurisdictions reported that their ballots were so long that it took voters a long time in the voting booth to read them and vote. As a result, lines backed up, and some voters had to wait for over an hour to cast their votes. Officials in one jurisdiction said that their voters experienced long wait times in part because redistricting caused confusion among voters, who often turned up at the wrong polling places. As these examples show, the voting system used is not always a major factor in voter wait times. However, processes that do depend on the system may affect the time that a voter must spend voting. For example, in precincts that use precinct-level counting technology for optical scan ballots, voters may place their ballots in the automatic feed slot of the tabulator. This process can add to voting time if the tabulator is designed to reject ballots that are undervoted, overvoted, or damaged, and the voter is given the opportunity to correct the ballot.

Generally, buying DRE units is more expensive than buying optical scan systems. For a broad picture, consider the comparison that we made in 2001 of the costs of purchasing new voting equipment for local election jurisdictions based on three types of equipment: central-count optical scan equipment, precinct-count optical scan equipment, and touchscreen DRE units. Based on equipment cost information available in August 2001, we estimated that purchasing optical scan equipment that counted ballots at a central location would cost about $191 million. Purchasing an optical scan counter for each precinct that could notify voters of errors on their ballots would cost about $1.3 billion. Purchasing touchscreen DRE units for each precinct, including at least one unit per precinct that could accommodate blind, deaf, and paraplegic voters, would cost about $3 billion.
For a given jurisdiction, the particular cost involved will depend on the requirements of the jurisdiction, as well as the particular equipment chosen. Voting equipment costs vary among types of voting equipment and among different manufacturers and models of the same type of equipment. For example, in 2001, DRE touchscreen unit costs ranged from $575 to $4,500. Similarly, unit costs for precinct-count optical scan equipment ranged from $4,500 to $7,500. Among other things, these differences can be attributed to differences in what is included in the unit cost as well as differences in the characteristics of the equipment.

In addition to the equipment unit cost, an additional cost for jurisdictions is the software that operates the equipment, prepares the ballots, and tallies the votes (and in some cases, prepares the election results reports). Our vendor survey showed that although some vendors included the software cost in the unit cost of the voting equipment, most priced the software separately. Software costs for DRE and optical scan equipment could run as high as $300,000 per jurisdiction. The higher costs were generally for the more sophisticated software associated with election management systems. Because the software generally supported numerous equipment units, the software unit cost varied depending on the number of units purchased or the size of the jurisdiction.

Other factors affecting the acquisition cost of voting equipment are the number and types of peripherals required. In general, DREs require more peripherals than do optical scan systems, which adds to their expense. For example, some DREs require smart cards, smart card readers, memory cartridges and cartridge readers, administrative workstations, and plug-in devices (for increasing accessibility for voters with disabilities). Touchscreen DREs may also offer options that affect the cost of the equipment, such as color versus black and white screens. In addition, most DREs and all optical scan units require voting booths, and most DREs and some precinct-based optical scan tabulators offer options for modems. Precinct-based optical scan tabulators also require ballot boxes to capture the ballots after they are scanned.

Once jurisdictions acquire the voting equipment, they must also incur the cost to operate and maintain it, which can vary considerably. For example, in 2001, jurisdictions that used DREs reported a range of costs from about $2,000 to $27,000. Similarly, most jurisdictions that used optical scan equipment reported that operations and maintenance costs ranged from about $1,300 to $90,000. The higher ends of these cost ranges generally related to the larger jurisdictions. In fact, one large jurisdiction that used optical scan equipment reported that its operating costs were $545,000. In addition, the jurisdictions reported that these costs generally included software licensing and upgrades, maintenance contracts with vendors, equipment replacement parts, and supply costs.

For decisions on whether to invest in new voting equipment, both initial capital costs (i.e., cost to acquire the equipment) and long-term support costs (i.e., operation and maintenance costs) are relevant. Moreover, these collective costs (i.e., life-cycle costs) need to be viewed in the context of the benefits the equipment will provide over its useful life. It is advisable to link these benefits directly to the performance characteristics of the equipment and the needs of the jurisdiction.
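The kind of life-cycle comparison just described reduces to simple arithmetic once a jurisdiction settles on its assumptions. The sketch below uses invented figures chosen from within the 2001 ranges cited above; it is illustrative, not a pricing model.

```python
def life_cycle_cost(units, unit_cost, software_cost, annual_om, years):
    """Acquisition cost plus operations and maintenance over the
    equipment's useful life."""
    return units * unit_cost + software_cost + annual_om * years

# Invented example: a 150-precinct jurisdiction over a 10-year useful
# life, assuming four DRE units per precinct versus one precinct-count
# optical scan tabulator per precinct.
dre = life_cycle_cost(units=150 * 4, unit_cost=3_000,
                      software_cost=150_000, annual_om=20_000, years=10)
optical = life_cycle_cost(units=150, unit_cost=6_000,
                          software_cost=100_000, annual_om=10_000, years=10)
print(f"DRE: ${dre:,}   precinct-count optical scan: ${optical:,}")
```

On these assumptions the DRE option costs roughly twice as much, which mirrors the general pattern noted above; different assumptions about units per precinct or accessibility options would narrow or widen the gap.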
The performance of any information technology system, including electronic voting systems, is heavily influenced by a number of factors, not the least of which is the quality of the system's design and the effectiveness with which the system is implemented in an operational setting. System design and implementation, in turn, are a function of such things as how well the system's requirements are defined, how well the system is tested, and how well the people that operate and use the system understand and follow the procedures that govern their interaction with it. Our work in 2001 raised concerns about the FEC's voting system standards, and showed that practices relative to testing and implementation of voting systems varied across states and local jurisdictions.

Like that of any information technology product, the design of a voting system starts with the explicit definition of what the system is to do and how well it is to do it. These requirements are then translated into design specifications that are used to develop the system. Organizations such as the Department of Defense and the Institute of Electrical and Electronics Engineers have developed guidelines for various types of systems requirements and for the processes that are important to managing the development of any system throughout its life cycle. These guidelines address types of product requirements (e.g., functional and performance), as well as documentation and process requirements governing the production of the system. In the case of voting systems, the FEC had assumed responsibility for issuing standards that embodied these requirements, a responsibility that HAVA has since assigned to the EAC. The FEC standards are nevertheless still the operative standards until the EAC updates them. These FEC-issued standards apply to system hardware, software, firmware, and documentation, and they span prevoting, voting, and postvoting activities. They also address, for example, requirements relating to system security; system accuracy and integrity; system auditability; system storage and maintenance; and data retention and transportation. In addition to these standards, some states and local jurisdictions have specified their own voting system requirements.

In 2001, we cited a number of problems with the FEC-issued voting system standards, including missing elements. Accordingly, we made recommendations to improve the standards. Subsequently, the FEC approved the revised voting system standards on April 30, 2002. According to EAC commissioners with whom we spoke, the commission has inherited the FEC standards, but it plans to work with NIST to revise and strengthen them.

To ensure that systems are designed and built in conformance with applicable standards, three levels of tests are generally performed, as our work in 2001 found: qualification tests, certification tests, and acceptance tests. For voting systems, the FEC-issued standards called for qualification testing to be performed by independent testing authorities. According to the standards, this testing is to ensure that voting systems comply with both the FEC standards and the systems' own design specifications. State standards define certification tests, which the states generally perform to determine how well the systems conform to individual state laws, requirements, and practice. Finally, state and local standards define acceptance testing, performed by the local jurisdictions procuring the voting systems.
This testing is to determine whether the equipment, as delivered and installed, satisfies all the jurisdiction's functional and performance requirements. Beyond these levels of testing, jurisdictions also perform routine maintenance and diagnostic activities to further ensure proper system performance on election day.

Our 2001 work found that the majority of states (38) had adopted the FEC standards then in place, and thus these states required that the voting systems used in their jurisdictions pass qualification testing. In addition, we reported that qualified voting equipment had been used in about 49 percent (±7 percentage points) of jurisdictions nationwide that used DREs and about 46 percent (±7 percentage points) of jurisdictions nationwide that used optical scan technology. However, about 46 percent (±5 percentage points) reported that they did not know whether their equipment had been qualified.

As we reported in 2001, 45 states and the District of Columbia told us that they had certification testing programs, and we estimate from our mail survey that about 90 percent of jurisdictions used state-certified voting equipment in the 2000 national election. In addition, we reported that most of the jurisdictions that had recently bought new voting equipment had conducted some form of acceptance testing. However, the processes and steps performed and the people who performed them varied. For example, in one jurisdiction that purchased DREs, election officials stated that testing consisted of a visual inspection, power-up, opening of polls, activation and verification of ballots, and closing of polls. In contrast, officials in another jurisdiction stated that they relied entirely on the vendor to test their DREs. In jurisdictions that used optical scan equipment, acceptance testing generally consisted of running decks of test cards. Officials from one jurisdiction, for instance, stated that they tested each unit with the assistance of the vendor using a vendor-supplied test deck.
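The test-deck approach just described amounts to a simple conformance check: feed ballots with known contents through the tabulator and compare its reported totals with the totals the deck was built to produce. The sketch below illustrates the idea with our own invented names and data; it is not any vendor's or state's actual procedure.

```python
def count_deck(deck):
    """Stand-in for the tabulation equipment under test: tally a deck of
    pre-marked ballots."""
    totals = {}
    for ballot in deck:
        for contest, candidate in ballot.items():
            totals.setdefault(contest, {}).setdefault(candidate, 0)
            totals[contest][candidate] += 1
    return totals

def acceptance_test(tabulate, deck, expected_totals):
    """Run the test deck through the tabulator and compare its reported
    totals with the totals the deck was constructed to produce."""
    return tabulate(deck) == expected_totals

deck = [{"Governor": "Smith"}, {"Governor": "Smith"}, {"Governor": "Jones"}]
expected = {"Governor": {"Smith": 2, "Jones": 1}}
print("acceptance test:",
      "pass" if acceptance_test(count_deck, deck, expected) else "fail")
```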
Our 2001 work found that the processes and people involved in routine system maintenance, diagnostic, and pre-election day checkout activities varied from jurisdiction to jurisdiction. For example, about 90 percent of jurisdictions nationwide using DRE and optical scan technology had performed routine or manufacturer-suggested maintenance and checkout before the 2000 national election. However, our visits to 27 local election jurisdictions revealed variations in the frequency with which jurisdictions performed such routine maintenance. Some performed maintenance right before an election, while others performed maintenance regularly throughout the year; officials in one jurisdiction that used DREs, for instance, stated that they tested the batteries monthly.

Proper implementation of voting systems is a matter of people knowing how to carry out appropriately designed processes to ensure that the technology performs as intended in an operational setting. According to the EAC commissioners, one of their areas of focus will be election administration processes and the people who carry out these processes. Examples include ballot preparation, voter education, recruiting and training poll workers, setting up the polls, running the election, and counting the votes.

Ballot preparation. Whether ballots are electronic or paper, they need to be designed in a way that promotes voter understanding when they are actually used. Designing both optical scan and DRE ballots requires consideration of the different types of human interaction entailed and the application of some human factors expertise. For DREs, programming skills need to be applied to create the ballot and enter the ballot information onto an electronic storage medium, which is then uploaded to the unit. For optical scan systems, paper ballots need to be designed and printed in specified numbers for distribution to polling places; they may also be used for absentee balloting, usually in combination with printed mailing envelopes. Electronic "ballots" in DRE units do not require distribution separate from the distribution of the voting equipment itself; however, the use of DREs means that a separate technique is necessary for absentee ballots—generally paper ballots. Thus, the use of these units generally requires a mixed election system.

Voter education. Implementation of any voting method requires that voters understand how to vote—that is, what conventions are followed. For optical scan systems, voters need to understand how to mark the ballots, they need to know what kinds of marker (type of pen or pencil) can be used, they need to be informed if a ballot must be marked on both sides, and so on. For DRE systems, voters need to understand how to select candidates or issues and understand that their votes are not cast until the cast vote button is pressed; for touchscreens, they need to know how to navigate the various screens presented to them. Voters also need to understand the procedure for write-in votes. In 2001, one jurisdiction had an almost 5 percent overvote rate because voters did not understand the purpose of the ballot section permitting write-in votes. Voters selected a candidate on the ballot and then wrote the candidate's name in the write-in section of the ballot, thus overvoting and spoiling the ballot. In addition to voter education, how the system is programmed to operate can also address this issue. For example, precinct-count optical scan equipment can be programmed to return a voter's ballot if the ballot is overvoted or undervoted and allow the voter to make changes.

Poll worker recruitment and training. Poll workers need implementation training. They need to be trained not only in how to assist voters to use the voting system, but also in how to use the technology for the tasks poll workers need to perform. These tasks can vary greatly from jurisdiction to jurisdiction. When more sophisticated voting systems are used at polling sites, jurisdictions may find it challenging to find poll workers with the skills to implement and use newer technologies. In 2001, we quoted one election official who said that "it is increasingly difficult to find folks to work for $6 an hour. We are relying on older retired persons—many who can't/won't keep up with changes in the technology or laws. Many of our workers are 70+."

Setting up the polls. Proper setup of polling places raises a number of implementation issues related to the people, processes, and technology involved. For DREs, the need for appropriate power outlets and possibly network connections limits the sites that can be used as polling places. In addition, setting up, initializing, and sometimes networking DRE units are technically challenging tasks. Technicians and vendor representatives may be needed to perform these tasks or to assist poll workers with them.
In addition, with DREs, computer security issues come into play that are different from those associated with the paper and pencil tools that voters use in optical scan systems. Besides the units themselves, many DRE systems use cards or tokens that must be physically secured. With optical scan equipment, the ballots must be physically secured. Further, if precinct-based tabulation is used with an optical scan system, the tabulation equipment must be protected from tampering.

Running the election. Many implementation issues associated with running the election are associated with the interaction of voters with the technology. Although both DREs and optical scan systems are based on technologies that most voters will have encountered before, general familiarity is not enough to avoid voter errors. With optical scan, voter errors are generally related to improperly marked ballots: the wrong marking device, stray marks, too many marks (overvotes), and so on. As described already, DRE equipment is designed to minimize voter error (by preventing overvotes, for example), but problems can also occur with this voting method. For example, many DREs require the voter to push a cast vote button to record the vote. However, some voters forget to push this button and leave the polling place without doing so. Conversely, after pressing the final cast vote button, voters cannot alter their votes. In some cases, this button may be pressed by mistake—for example, a small child being held by a parent may knock or kick the final vote button before the parent has completed the ballot. The technology is not the only factor determining the outcome in these situations, as different jurisdictions have different rules and processes concerning such problems. In 2001, we reported that when voters forgot to press the cast vote button, one jurisdiction required that an election official reach under the voting booth curtain and push the cast vote button without looking at the ballot. However, another jurisdiction required that an election official invalidate the ballot and reset the machine for a new voter.

Counting the votes. Finally, implementation of the processes for counting votes is affected both by the technology used and by local requirements. With DREs, votes are collected within each unit. Some contain removable storage media that can be taken from the voting unit and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. As described earlier, optical scan systems also vary in the way votes are counted, depending on whether precinct-based or centralized tabulation equipment is used. For optical scan systems, officials follow state and local regulations and processes to determine whether and how to count ballots that cannot be read by the tabulation equipment. Counting such ballots may involve decisions on how to judge voter intent, which are also generally governed by state and local regulations and processes. In addition, depending on the type of voting technology used, ways to perform recounts may differ. For optical scan devices, recounts can be both automatic and manual; as in the original vote counting, officials make decisions on counting ballots that cannot be read by the tabulation equipment and on voter intent.
With DREs there is no separate paper ballot or record of the voter’s intention, and therefore election officials rely on the information recorded in the machine’s memory: that is, permanent (read only) electronic images of each of the “marked” ballots. The assurance that these images are an accurate record of the vote depends on several things, including the proper implementation of the processes involved in designing, maintaining, setting up, and using the technology.

In 2001, we identified four key challenges confronting local jurisdictions in effectively using and replacing voting technologies. These challenges are not dissimilar to those faced by any organization seeking to leverage modern technology to support mission operations. The first two challenges are particularly relevant in the near term, as jurisdictions look to position themselves for this year’s national elections. The latter two are more relevant to jurisdictions’ strategic acquisition and use of modern voting systems.

Maximizing the performance of the voting systems that jurisdictions have and plan to use in November 2004 means taking proactive steps between now and then to best ensure that systems perform as intended. These steps include activities aimed at securing, testing, and maintaining these systems. We reported in 2001 that although the vast majority of jurisdictions performed security, testing, and maintenance activities in one form or another, the extent and nature of these activities varied among jurisdictions and depended on the availability of resources (financial and human capital) committed to them. The challenge facing all voting jurisdictions will be to ensure that these activities are fully and properly performed.

As previously discussed in this testimony, jurisdictions need to manage the triad of people, processes, and technology as interrelated and interdependent parts of the total voting process. Given the amount of time that remains between now and the November 2004 elections, jurisdictions’ voting system performance is more likely to be influenced by improvements in poll worker system operation training, voter education about system use, and vote casting procedures than by changes to the systems themselves. The challenge for voting jurisdictions is thus to ensure that these people and process issues are dealt with effectively.

Reliable measures and objective data are needed for jurisdictions to know whether the technology being used is meeting the needs of the user communities (both the voters and the officials who administer the elections). In 2001, we reported that the vast majority of jurisdictions were satisfied with the performance of their respective technologies in the November 2000 elections. However, this satisfaction was mostly based not on objective data measuring performance, but rather on the subjective impressions of election officials. Although these impressions should not be discounted, informed decisionmaking on voting technology investment requires more objective data. The challenge for jurisdictions is to define measures and begin collecting data so that they can definitively know how their systems are performing.

Jurisdictions must be able to ensure that the technology will provide benefits over its useful life that are commensurate with life-cycle costs (acquisition as well as operations and maintenance) and that these collective costs are affordable and sustainable.
In 2001, we reported that the technology type and configuration that jurisdictions employed varied depending on each jurisdiction’s unique circumstances, such as size and resource constraints, and that reliable data on life-cycle costs and benefits were not available. The challenge for jurisdictions is to view and treat voting systems as capital investments and to manage them as such, including basing decisions on technology investments on reliable analyses of quantitative and qualitative return on investment.

In closing, I would like to say again that electronic voting systems are an undeniably critical link in the overall election chain. While this link alone cannot make an election, it can break one. The concerns being raised by electronic voting system experts and others highlight the potential for problems in the upcoming 2004 national elections if the challenges that we cited in 2001 and reiterate in this testimony are not effectively addressed. Although the EAC only recently began operations and is not yet at full strength, it has no choice but to hit the ground running to ensure that jurisdictions and voters are educated and well-informed about the proper implementation and use of electronic voting systems, and to ensure that jurisdictions take the appropriate steps—related to people, process, and technology—that are needed regarding security, testing, and maintenance. More strategically, the EAC needs to consider strengthening the voluntary voting system guidelines and the testing associated with enforcing these guidelines. Critical to the commission’s ability to do this will be the adequacy of resources at its disposal and the degree of cooperation it receives from entities at all levels of government.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information, please contact Randolph C. Hite at (202) 512-6256 or by e-mail at [email protected]. Other key contributors to this testimony were Barbara S. Collier, Richard B. Hung, John M. Ortiz, Jr., Maria J. Santos, and Linda R. Watson.
The technology used to cast and count votes is one aspect of the multifaceted U.S. election process. GAO examined voting technology, among other things, in a series of reports that it issued in 2001 following the problems encountered in the 2000 election. In October 2002, the Congress enacted the Help America Vote Act, which, among other things, established the Election Assistance Commission (EAC) to assist in the administration of federal elections. The act also established a program to provide funds to states to replace older punch card and lever machine voting equipment. As this older voting equipment has been replaced with newer electronic voting systems over the last 2 years, concerns have been raised about the vulnerabilities associated with certain electronic voting systems. Among other things, GAO's testimony focuses on attributes on which electronic voting systems can be assessed, as well as design and implementation factors affecting their performance. GAO also describes the immediate and longer term challenges confronting local jurisdictions in using any type of voting equipment, particularly electronic voting systems.

An electronic voting system, like other automated information systems, can be judged on several bases, including how well its design provides for security, accuracy, ease of use, and efficiency, as well as its cost. For example, direct recording electronic systems offer advantages in ease of use because they can have features that accommodate voters with various disabilities, and they protect against common voter errors, such as overvoting (voting for more candidates than is permissible); disadvantages of such systems include their higher capital cost and frequent lack of an independent paper audit trail. Advantages of optical scan voting equipment (another type of electronic voting system) include lower capital cost and the enhanced security associated with having a paper audit trail; disadvantages include lower ease of use, such as their limited ability to accommodate voters with disabilities.

One important determinant of voting system performance is how it is designed and developed, including the testing that determines whether the developed system performs as designed. In the design and development process, a critical factor is the quality of the specified system requirements as embodied in applicable standards or guidance. For voting technology, these voluntary standards have historically been problematic; the EAC has now been given responsibility for voting system guidelines, and it intends to update them. The EAC also intends to strengthen the process for testing voting system hardware and software. A second determinant of performance is how the system is implemented. In implementing a system, it is critical to have people with the requisite knowledge and skills to operate it according to well-defined and understood processes. The EAC also intends to focus on these people and process factors in its role of assisting in the administration of elections.

In the upcoming 2004 national election and beyond, the challenges confronting local jurisdictions in using electronic voting systems are similar to those facing any technology user. They include both immediate and longer term challenges.
Our prior work highlights some of the challenges VA faces in formulating its budget: obtaining sufficient data for useful budget projections, making accurate calculations, and making realistic assumptions. Our 2006 report on VA’s overall health care budget found that VA underestimated the cost of serving veterans returning from military operations in Afghanistan and Iraq, in part because estimates for fiscal year 2005 were based on data that largely predated the Iraq conflict. In fiscal year 2006, according to VA, the agency again underestimated the cost of serving these veterans because of challenges in obtaining the data needed from the Department of Defense (DOD) to identify them. According to VA officials, the agency subsequently began receiving the DOD data needed to identify these veterans on a monthly basis rather than quarterly.

We also reported challenges VA faces in making accurate calculations during budget formulation. VA made computation errors when estimating the effect of its proposed fiscal year 2006 nursing home policy, and this also contributed to requests for supplemental funding. We found that VA underestimated workload—that is, the amount of care VA provides—and the costs of providing care in all three of its nursing home settings. VA officials said that the errors resulted from calculations being made in haste during the OMB appeal process, and that a more standardized approach to long-term care calculations could provide stronger quality assurance to help prevent future mistakes. In 2006, we recommended that VA strengthen its internal controls to better ensure the accuracy of calculations it uses in preparing budget requests. VA agreed with and implemented this recommendation for its fiscal year 2009 budget justification by having an independent actuarial firm validate the savings estimates from proposals to increase fees for certain types of health care coverage.

Our 2006 report on VA’s overall health care budget also illustrated that VA faces challenges making realistic assumptions about the budgetary impact of its proposed policies. VA made unrealistic assumptions about how quickly the department would realize savings from proposed changes in its nursing home policy. We reported the President’s requests for additional funding for VA’s medical programs for fiscal years 2005 and 2006 were in part due to these unrealistic assumptions. We recommended that VA improve its budget formulation processes by explaining in its budget justifications the relationship between the implementation of proposed policy changes and the expected timing of cost savings to be achieved. VA agreed and acted on this recommendation in its fiscal year 2009 budget justification.

In January 2009, we found that VA’s spending estimate in its fiscal year 2009 budget justification for noninstitutional long-term care services appeared unreliable, in part because this spending estimate was based on a workload projection that appeared to be unrealistically high in relation to recent VA experience. VA projected that its workload for noninstitutional long-term care would increase 38 percent from fiscal year 2008 to fiscal year 2009. VA made this projection even though from fiscal year 2006 to fiscal year 2007—the most recent year for which workload data are available—actual workload for these services decreased about 5 percent.
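Because such a spending estimate is essentially a projected workload multiplied by a projected unit cost, its sensitivity to these assumptions is easy to demonstrate. The following sketch in Python applies the growth rates cited in this statement (a projected 38 percent workload increase versus a recent 5 percent decrease, together with the cost-growth figures discussed below) to baseline figures that are invented for illustration; they are not VA's actual data or estimating model.

    # Rough sensitivity check on a long-term care spending estimate:
    # estimate = projected workload x projected unit cost. Growth rates
    # are those cited in this statement; the fiscal year 2008 baseline
    # figures below are invented for illustration only.

    BASE_WORKLOAD = 100_000   # hypothetical units of care, FY 2008
    BASE_UNIT_COST = 200.0    # hypothetical dollars per unit of care

    def spending_estimate(workload_growth, cost_growth):
        """Project FY 2009 spending from the FY 2008 baselines."""
        workload = BASE_WORKLOAD * (1 + workload_growth)
        unit_cost = BASE_UNIT_COST * (1 + cost_growth)
        return workload * unit_cost

    # VA-style assumptions vs. projections grounded in recent experience:
    va_style = spending_estimate(workload_growth=0.38, cost_growth=0.025)
    experience = spending_estimate(workload_growth=-0.05, cost_growth=0.055)

    print(f"VA-style assumptions:    ${va_style:,.0f}")
    print(f"Recent-experience basis: ${experience:,.0f}")
    # The ratio is about 1.41: the choice of assumptions alone moves
    # the estimate by roughly 40 percent.
    print(f"Ratio: {va_style / experience:.2f}")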
In its fiscal year 2009 budget justification, VA did not provide information regarding its plans for how it would increase noninstitutional workload 38 percent from fiscal year 2008 to fiscal year 2009. We recommended that VA use workload projections in future budget justifications that are consistent with VA’s recent experience with noninstitutional long-term care spending or report the rationale for using alternative projections. In its March 23, 2009, letter to GAO, VA stated it concurs with this recommendation and will implement it in future budget submissions.

In January 2009, we also reported that VA may have underestimated its nursing home spending and noninstitutional long-term care spending for fiscal year 2009 because it used a cost assumption that appeared unrealistically low, given recent VA experience and economic forecasts of health care cost increases. For example, VA based its nursing home spending estimate on an assumption that the cost of providing a day of nursing home care would increase 2.5 percent from fiscal year 2008 to fiscal year 2009. However, from fiscal year 2006 to fiscal year 2007—the most recent year for which actual cost data are available—these costs increased approximately 5.5 percent. VA’s 2.5 percent cost-increase estimate is also less than the 3.8 percent inflation rate for medical services that OMB provided in guidance to VA to help with its budget estimates. We recommended that in future budget justifications, VA use cost assumptions for estimating both nursing home and noninstitutional long-term care spending that are consistent with VA’s recent experience or report the rationale for alternative cost assumptions. In its March 23, 2009, letter to GAO, VA stated it concurs with these recommendations and will implement them in future budget submissions.

Consideration of any proposal to change the availability of the appropriations VA receives for health care should take into account the current structure of the federal budget, the congressional budget process—including budget enforcement—and the nature of the nation’s fiscal challenge. The impact of any change on congressional flexibility and oversight also should be considered.

In the federal budget, spending is divided into two main categories: (1) direct spending, or spending that flows directly from authorizing legislation—this spending is often referred to as “mandatory spending”—and (2) discretionary spending, defined as spending that is provided in appropriations acts. It is in the annual appropriations process that the Congress considers, debates, and makes decisions about the competing claims for federal resources. Citizens look to the federal government for action in a wide range of areas. Congress is confronted every year with claims that have merit but which in total exceed the amount the Congress believes appropriate to spend. It is not an easy process—but it is an important exercise of its Constitutional power of the purse. Special treatment for spending in one area—either through separate spending caps or guaranteed minimums or exemption from budget enforcement rules—may serve to protect that area from competition with other areas for finite resources. The allocation of funds across federal activities is not the only thing Congress determines as part of the annual appropriations process. It also specifies the purposes for which funds may be used and the length of time for which funds are available.
Further, annually enacted appropriations have long been a basic means of exerting and enforcing congressional policy. The review of agency funding requests often provides the context for the conduct of oversight. For example, in the annual review of the VA health care budget, increasing costs may prompt discussion about causes and possible responses—and lead to changes in the programs or in funding levels. VA health care offers illustrations of and insights into growing health care costs. This takes on special significance since—as we and others have reported—the nation’s long-term fiscal challenge is driven largely by the rapid growth in health care costs.

Both the Congress and the agencies have expressed frustration with the budget and appropriations process. Some members of Congress have said the process is too lengthy. The public often finds the debate confusing. Agencies find it burdensome and time consuming. And the frequent need for continuing resolutions (CRs) has been a source of frustration both in the Congress and in agencies. Although there is frustration with the current process, changes should be considered carefully. The current process is, in part, the cumulative result of many changes made to address previous problems. This argues for spending time both on defining the problem(s) to be solved and on analyzing the impact of any proposed change(s).

In considering issues surrounding the possibility of providing advance appropriations for VA health care—or any other program—it is important to recognize that not all funds provided through the existing appropriations process expire at the end of a single fiscal year. Congress routinely provides multi-year appropriations for accounts or projects within accounts when it deems it makes sense to do so. Multi-year funds are funds provided in one year that are available for obligation beyond the end of that fiscal year. So, for example, multi-year funds provided in the fiscal year 2010 appropriations act would be available in fiscal year 2010 and remain available for some specified number of future years. Unobligated balances from such multi-year funds may be carried over by the agency into the next fiscal year—regardless of whether the agency is operating under a continuing resolution or a new appropriations act. For example, in fiscal year 2009 about $3 billion of approximately $41 billion for VA health care programs was made available for two years. Congress also provides agencies—including VA—some authority to move funds between appropriations accounts. This transfer authority provides flexibility to respond to changing circumstances.

Advance appropriations are different from multi-year appropriations. Whereas multi-year appropriations are available in the year in which they are provided, advance appropriations represent budget authority that becomes available one or more fiscal years after the fiscal year covered by the appropriations act in which they are provided. So, for example, advance appropriations provided in the fiscal year 2010 appropriations act would consist of funds that would first be available for obligation in fiscal year 2011 or later.
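The timing distinction between annual, multi-year, and advance appropriations can be stated mechanically, as the sketch below does. The rules are simplified for illustration; in practice, availability is governed by the text of each appropriations act.

    # Simplified model of when budget authority first becomes
    # available for obligation. The rules are simplified; actual
    # availability is set by the text of each appropriations act.

    def availability(kind, covered_fy, span_years=1, years_in_advance=1):
        """Return the fiscal years in which funds may be obligated.

        kind: "annual", "multi_year", or "advance"
        covered_fy: the fiscal year covered by the appropriations act
        """
        if kind == "annual":
            return [covered_fy]
        if kind == "multi_year":
            # Available starting in the covered year and remaining
            # available for a specified number of years.
            return list(range(covered_fy, covered_fy + span_years))
        if kind == "advance":
            # First available one or more years AFTER the covered year.
            return [covered_fy + years_in_advance]
        raise ValueError(f"unknown kind: {kind}")

    # Funds provided in the fiscal year 2010 appropriations act:
    print(availability("annual", 2010))                      # [2010]
    print(availability("multi_year", 2010, span_years=2))    # [2010, 2011]
    print(availability("advance", 2010, years_in_advance=1)) # [2011]

Run against the fiscal year 2010 examples above, the sketch reproduces the availability windows described in the text.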
In considering the proposal to provide advance appropriations, one issue is the impact on congressional flexibility and its ability to consider competing demands for limited federal funds. Although appropriations are made on an annual cycle, both the President and the Congress look beyond a single year in setting spending targets. The current administration’s budget presents spending totals for ten fiscal years. The concurrent Budget Resolution—which represents Congress’s overall fiscal plan—includes discretionary spending totals for the budget year and each of the four future years. The provision of advance appropriations would “use up” discretionary budget authority for the next year. In doing so, it limits Congress’s flexibility to respond to changing priorities and needs and reduces the amount available for other purposes in the next year.

Another issue would be how and when the limits on such advance appropriations would be set. Currently the concurrent Budget Resolution both caps the total amount that can be provided through advance appropriations and identifies the agencies or programs which may be provided such funding. It does not specify how the total should be allocated among those agencies. A related question is what share of VA health care funding would be provided in advance appropriations. Is the intent to provide a full appropriation for both years in the single appropriations act? This would in effect enact the entire appropriation for both the budget year and the following fiscal year at the same time. If appropriations for VA health care were enacted in two-year increments, under what conditions would there be changes in funding in the second year? Would the presumption be that there would be no action in that second year except under unusual circumstances? Or is the presumption that there would be additional funds provided? These questions become critical if Congress decides to provide all or most of VA health care’s funding in advance. Even if only a portion of VA health care funding is to be provided in advance appropriations, Congress will need to determine what that share should be and how it should be allocated across VA’s medical accounts.

While providing funds for 2 years in a single appropriations act provides certainty about some funds, the longer projection period increases the uncertainty of the data and projections used. Under the current annual appropriations cycle, agencies begin budget formulation at least 18 months before the relevant fiscal year begins. If VA is expected to submit its budget proposal for health care for both years at once, the lead time for the second year would be 30 months. This additional lead time increases the uncertainty of the estimates and could worsen the challenges VA faces when formulating its health care budget.

Given the challenges VA faces in formulating its health care budget and the changing nature of health care, proposals to change the availability of the appropriations it receives deserve careful scrutiny. Providing advance appropriations will not mitigate or solve the problems noted above regarding data, calculations, or assumptions in developing VA’s health care budget. Nor will it address any link between cost growth and program design. Congressional oversight will continue to be critical. No one would suggest that the current budget and appropriations process is perfect. However, it is important to recognize that no process will make the difficult choices and tradeoffs Congress faces easy. If VA is to receive advance appropriations for health care, the amount of discretionary spending available for Congress to allocate to other federal activities in that year will be reduced. In addition, providing advance appropriations for VA health care will not resolve the problems we have identified in VA’s budget formulation.
Mr. Chairman, this concludes our prepared remarks. We would be happy to answer any questions you or other members of the Committee may have.

For more information regarding this testimony, please contact Randall B. Williamson at (202) 512-7114 or [email protected] or Susan J. Irving at (202) 512-8288 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, Carol Henn and James C. Musselwhite, Assistant Directors; Katherine L. Amoroso, Helen Desaulniers, Felicia M. Lopez, Julie Matta, Lisa Motley, Sheila Rajabiun, Steve Robblee, and Timothy Walker made key contributions to this testimony.
The Department of Veterans Affairs (VA) estimates it will provide health care to 5.8 million patients with appropriations of about $41 billion in fiscal year 2009. It provides a range of services, including primary care, outpatient and inpatient services, long-term care, and prescription drugs. VA formulates its health care budget by developing annual estimates of its likely spending for all its health care programs and services, and includes these estimates in its annual congressional budget justification.

GAO was asked to discuss budgeting for VA health care. As agreed, this statement addresses (1) challenges VA faces in formulating its health care budget and (2) issues surrounding the possibility of providing advance appropriations for VA health care. This testimony is based on prior GAO work, including VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement (GAO-06-958) (Sept. 2006); VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement (GAO-09-145) (Jan. 2009); and VA Health Care: Challenges in Budget Formulation and Execution (GAO-09-459T) (Mar. 2009); and on GAO reviews of budgets, budget resolutions, and related legislative documents. We discussed the contents of this statement with VA officials.

GAO's prior work highlights some of the challenges VA faces in formulating its budget: obtaining sufficient data for useful budget projections, making accurate calculations, and making realistic assumptions. For example, GAO's 2006 report on VA's overall health care budget found that VA underestimated the cost of serving veterans returning from military operations in Iraq and Afghanistan. According to VA officials, the agency did not have sufficient data from the Department of Defense, but VA subsequently began receiving the needed data monthly rather than quarterly. In addition, VA made calculation errors when estimating the effect of its proposed fiscal year 2006 nursing home policy, and this contributed to requests for supplemental funding. GAO recommended that VA strengthen its internal controls to better ensure the accuracy of calculations used to prepare budget requests. VA agreed and, for its fiscal year 2009 budget justification, had an independent actuarial firm validate savings estimates from proposals to increase fees for certain types of health care coverage. In January 2009, GAO found that VA's assumptions about the cost of providing long-term care appeared unreliable given that assumed cost increases were lower than VA's recent spending experience and guidance provided by the Office of Management and Budget. GAO recommended that VA use assumptions consistent with recent experience or report the rationale for alternative cost assumptions. In a March 23, 2009, letter to GAO, VA stated that it concurred and would implement this recommendation for future budget submissions.

The provision of advance appropriations would "use up" discretionary budget authority for the next year and so limit Congress's flexibility to respond to changing priorities and needs. While providing funds for 2 years in a single appropriations act provides certainty about some funds, the longer projection period increases the uncertainty of the data and projections used. If VA is expected to submit its budget proposal for health care for 2 years, the lead time for the second year would be 30 months.
This additional lead time increases the uncertainty of the estimates and could worsen the challenges VA already faces when formulating its health care budget. Given the challenges VA faces in formulating its health care budget and the changing nature of health care, proposals to change the availability of the appropriations it receives deserve careful scrutiny. Providing advance appropriations will not mitigate or solve the problems we have reported regarding data, calculations, or assumptions in developing VA's health care budget. Nor will it address any link between cost growth and program design. Congressional oversight will continue to be critical.
The majority of the federal civilian workforce obtained their positions by competing against others under the government’s merit system selection process. However, there are provisions for noncompetitive appointments as well. Included among these are the following:

Presidential, noncareer SES, and Schedule C appointees are appointed by an administration to support and advocate the president’s goals and policies. Noncareer SES appointees can receive noncompetitive appointments to SES positions that normally involve advocating, formulating, and directing the programs and policies of the administration. Schedule C appointees generally receive noncompetitive appointments to excepted service positions graded GS-15 and below that involve determining policy or that require a close, confidential relationship with the agency head or other key officials of the agency. These appointees serve at the pleasure of the President or agency head.

Certain congressional employees are eligible to apply for noncompetitive, career appointments under the Ramspeck Act. Eligibility requirements include, among other things, that employees must have been separated from this employment involuntarily, such as when a Member retires, and must be appointed to a career position within 1 year of separation. Employees appointed under this authority must meet applicable qualification requirements for the positions to which they are appointed.

Under the Foreign Assistance Act of 1961, as amended, certain agencies can noncompetitively appoint individuals to what are labeled administratively determined pay rate appointments in which (1) individuals appointed under this authority serve at the pleasure of the agency head and can be removed upon notice and (2) the salary levels can be determined by the agency head.

Limited term SES appointments are time-limited, nonrenewable appointments for up to 3 years. These appointments can be made noncompetitively. Limited emergency SES appointments are also nonrenewable. They are for time-limited positions for up to 18 months that are required to meet an urgent program need. Limited emergency SES appointments also can be made noncompetitively, and the appointees serve at the pleasure of the agency head.

Since such appointments are often tied to the administration in power and, with the exception of the Ramspeck Act appointments, are not permanent, such individuals sometimes seek a permanent, career appointment in the government. Career appointments in government are usually made through competitive procedures, consistent with the government’s merit system selection principles, in which the selection is determined on the basis of relative knowledge, skills, and ability, after fair and open competition that ensures that all applicants receive equal opportunity. When a political appointee seeks a career appointment, concerns can arise as to whether these merit principles will be followed. These concerns may occur because the appointee competing for the career appointment is often well known or “connected” in the agency or department, sometimes having worked for the political appointee responsible for nominating the best qualified candidate to the selecting official, or for the official who will do the selecting. We have written a number of reports on the issue of former political appointees and former legislative branch employees receiving career appointments in the executive branch.
As in this report, we generally found that agencies usually followed appropriate procedures in making these career appointments. However, we also found a few cases in which the circumstances appeared to have provided the appointee with an advantage. See Related GAO Products for a listing of our past reports.

In order to determine whether agencies used appropriate authorities and followed procedures in providing career appointments to former political appointees and legislative branch employees during the period January 1, 1996, through March 31, 1997, we first identified such cases. We did this by asking 50 executive branch agencies, including all cabinet-level departments, to complete and return to us a data collection instrument (DCI) for each case in which they had provided (1) a career appointment to a former political appointee or (2) a career appointment to a former legislative branch employee using Ramspeck Act authority. The DCI provided reporting instructions and defined former political appointees and legislative branch employees for purposes of our review. It was also used to collect details of each of the career appointments, including the appointee’s name, employing agency, date of career appointment, title of position, and grade level. It also collected details about each of the political or legislative branch appointments, including the type and date of the political appointment, title of position, and employing agency. In addition, we asked each of the 50 agencies to send us negative reports for each month in which they did not make such career appointments. A copy of the DCI we used for this review is contained in appendix II. The 50 agencies and departments were selected using criteria developed in concert with your offices. The selection criteria and agencies are identified in appendix III.

As agreed with your offices, we conducted detailed reviews of the authorities used and procedures followed in those cases in which career appointments reported to us were made at the GS-13 level and higher. To determine whether the appropriate appointing authority was used, we first identified the authority that the agency cited for the appointment. The agency must cite this authority in Standard Form 50B-Notification of Personnel Action (SF-50B), a copy of which is filed in the appointee’s official personnel folder (OPF). We then researched the cited authority in law and/or regulation to determine the criteria the agency had to meet in order to use the authority. We then examined the contents of the employee’s OPF and, when appropriate, the merit staffing case file to determine if there was evidence that the criteria for using the authority were met.

To determine whether proper procedures were followed in the 36 cases we reviewed in detail, we examined the steps taken in the application and appointment process. With guidance and assistance from a GAO personnel specialist, we examined OPFs and merit staffing case files to determine what procedures the agencies used. In cases where we had questions, we also interviewed officials from the personnel offices of the appointing agency or other officials knowledgeable about the specific case. We then compared the procedures used in the appointment process to the federal personnel laws and regulations contained in the U.S. Code and the Code of Federal Regulations and to the department’s or agency’s merit staffing plans, as appropriate. We did not independently determine whether the 36 employees were qualified for the positions to which they were appointed.
There was no specific set of criteria that we could apply to determine if any of the appointments appeared to involve favoritism or preferential treatment. Consequently, we applied our professional judgment after reviewing the circumstances of each case. For example, to assess whether a vacancy announcement might have been tailored to the work experiences of the appointee, we examined information contained in the employee’s application materials and excepted service position descriptions regarding work experiences and dates and responsibilities and compared that information to the information contained in the vacancy announcement. We were aided in this appraisal of the circumstances by the knowledge gained from past work on the subject; the technical assistance provided by a GAO personnel specialist; and by our internal review process, which included the examination of the six questionable cases by attorneys experienced in the application of federal personnel law. In addition, we gave draft summaries of the six cases to the respective agencies that made the appointments and asked them to provide any corrections, clarifications, or explanations that they believed were appropriate to our understanding of the circumstances. We incorporated their clarifications to the case summaries as appropriate.

Altogether, 20 of the 50 agencies reported to us that they had made 47 career appointments of (1) former political appointees or (2) former legislative branch employees under authority provided by the Ramspeck Act. We did not verify that the 50 agencies identified and reported to us all reportable appointments. Of the 47 appointments reported to us, 36 were made at the GS-13 level, or higher, by 18 agencies. Appendix IV provides a list of the 18 agencies where the 36 appointments were made. We did our work in Washington, D.C., from April 1996 through July 1997 in accordance with generally accepted government auditing standards. Because OPM is responsible for overseeing the federal personnel system, we obtained written comments on a draft of this report from OPM. These comments are discussed at the end of this letter and are reprinted in appendix V.

Agencies must cite the legal authority under which they are appointing an individual in the documentation they prepare to make an appointment. Each appointment authority generally covers a particular set of circumstances and includes requirements or criteria the agencies must meet in order to use the authority. Altogether, 7 different appointment authorities, such as the Ramspeck Act of 1940, were cited for the 36 appointments. (The 7 authorities, their criteria, and the distribution of the 36 appointments among the 7 authorities are shown in app. VI.) From our review of the various documents that were related to the appointments (such as vacancy announcements, resumes, and official notifications of personnel actions) and our discussions with pertinent agency officials, we determined that the agencies met the requirements of the 7 appointment authorities and that they used the authorities properly in making the 36 appointments. We did note, however, that in 3 of the 36 appointments, although the appropriate appointment authorities were used, the reference citations on the effecting documents were incorrect. For example, in one case, the appointment authority cited was the vacancy announcement number rather than the applicable section of the U.S. Code under which the appointment had been made.
Personnel officials from the employing agencies stated that the incorrect citations were due to administrative error and that corrections would be made. The three appointments did not involve circumstances that, in our opinion, could give the appearance of favoritism or preferential treatment.

The merit staffing procedures agencies are to follow in making appointments are set out in federal personnel law and regulations and by the agencies in their merit staffing plans, which detail their procedures for filling positions. The procedures are intended to foster the principles of fair and open competition and equal opportunity. For example, to fill a position, an agency may be required to (1) publish a vacancy announcement so that the position’s availability is made known to possible applicants; and (2) have all applications rated and ranked by a several-member panel, with the assignment of members to the panel and the scoring of applications to be accomplished in accordance with the related merit staffing plan.

For the 36 cases, we compared the procedures called for in law, regulation, and merit staffing plans, as appropriate, with the procedures that were evident in the appointment documentation. On the basis of these comparisons, it appeared that the agencies followed proper procedures in making the 36 appointments. However, as we pointed out in a previous report, like any other system, the appointment process can be manipulated. Processes and procedures such as advertising the positions may be followed, and the appearance of fair and open competition may be achieved. Ultimately, however, the question of whether fair and open competition actually occurred or whether a candidate was preselected for appointment or given some other advantage rests with the intent and motivation of the agency officials involved—factors that cannot be controlled by regulation and that we could not determine from review of files or discussions with agency officials.

Although records in OPFs and merit staffing files indicated that agencies used proper appointing authorities and procedures for all 36 appointments, in our opinion, 6 appointments involved circumstances that could lead to the appearance that the individuals received favoritism or preferences that enhanced their prospects for the appointments. The remaining 30 appointments did not raise comparable questions of the appearance of favoritism or preference. The circumstances in these six cases are summarized below.

In two cases, the required duties, knowledge, skills, or abilities listed in the vacancy announcements appeared to have been tailored to the work experiences of the political appointees who applied for and were appointed to the respective positions. In one of these cases, the vacancy announcement contained several requirements that closely matched the specific work experiences of the political appointee who obtained the position. One of those requirements, for example, was that applicants should have experience working with particular congressional committees. The only applicant who had that experience was the political appointee, who had worked for one of the committees prior to obtaining his political appointment. In the other case, the vacancy announcement contained several requirements that closely matched the position description for the job the political appointee had previously held at that agency.
Agency personnel officials with whom we discussed these two cases defended the agencies’ prerogative in determining what requirements were necessary for the positions. They also said that situations in which vacancy announcements may appear to be tailored to a particular individual are not unusual.

In another two cases, political appointees obtained career appointments to positions from which they were reassigned shortly after receiving their appointments, thus raising questions about whether there was a bona fide need to fill the positions. In one of these cases, a political appointee obtained a career appointment to a position from which—on the same day of his career appointment—he was reassigned to a second position. In the second of these two cases, a political appointee responsible for the agency’s administrative operations—including human resource management—initiated the process to fill, through a career appointment, an executive level position at a component agency. A vacancy announcement was published, and the political appointee applied and was selected for the position. According to a high ranking human resource management official at the parent agency, the need to fill the position was questionable because, among other things, the agency in which the position was located had a strong administration and did not need another executive position. After about 2 months, the newly appointed “career” employee was reassigned to another position.

In the fifth case, a political appointee who worked directly for the head of the agency helped create a new executive position that was to be filled through a career appointment. The political appointee applied and was selected by the head of the agency for the position. High ranking agency officials told us that they were surprised that the political appointee applied and was selected for the position because of the potential negative perceptions that the public may have acquired in this case. Nevertheless, the agency officials advised us that political appointees are not prohibited from applying or being selected for career appointments in the government; in this case, they believed the individual was the most qualified applicant for the position.

Finally, the sixth case involved a political appointee who applied and was selected for an executive position after the position was announced a third time. Applicants from the first two announcements were rated together by a screening panel, of which the political appointee was a member. Five applicants were identified as being best qualified for the position, and one of them was offered the position but declined. The position was then reannounced for the third time, and the political appointee applied and was selected for the position. According to documentation contained in the merit staffing file, the position was reannounced because too much time—3 months—had passed since the closing of the original announcement, and it was decided that the search for candidates should be broadened. We noted that recruitment under the first two vacancy announcements had been limited to current civil service employees of the federal government. The recruitment area was expanded to qualified applicants from within and outside the federal government in the third announcement. The political appointee who obtained the position had the kind of experience that the position required.
However, the unanswerable question is whether the agency reissued the announcement in order to enable the political appointee to apply, even though one of the best-qualified candidates from the earlier announcements could have been selected.

We believe that the circumstances surrounding each of the six cases could create a perception of preferential treatment or favoritism toward a particular applicant, despite the use of proper hiring authorities and merit staffing procedures. The appearance of preferential treatment or favoritism can obviously compromise the integrity of the merit staffing system. However, a determination of whether preferential treatment or favoritism actually occurred could be made only if the intent and motivation of the agency officials involved were known.

The Director of OPM provided written comments on a draft of this report in a letter dated July 29, 1997. (See app. V.) The Director expressed concern about our finding that circumstances surrounding six appointments could give the appearance of favoritism or preferential treatment. He noted that there is a difference between “could” and “did,” and in OPM’s reading of the draft report, there is no basis to conclude that favoritism or preferential treatment did actually occur. He was concerned with the use of the word “could” because, he said, it implies activity that cannot be proven, while leaving the impression of wrongdoing. He said that since we were unable to discern the intent of the agency officials involved in the six appointments, it would be inappropriate to conclude that any prohibited activities occurred. In the absence of evidence to the contrary, he believed that agencies must be given the benefit of the doubt in assessing whether they exercised proper judgment in their appointments.

We agree that such a conclusion would be inappropriate. As we point out in the report, an ultimate determination of whether favoritism or preferential treatment actually occurred could be made only if the intent or motivation of the involved agency officials is known—something that we could not determine from review of agency files or discussions with agency officials. For this reason, we characterized the circumstances as those that, in our opinion, could lead to the appearance of favoritism or preferential treatment. We believe this is a valid representation of the circumstances surrounding the six appointments, but we recognize that others could have a different opinion. Just as we reported that the agencies used appropriate appointment authorities and followed proper appointment procedures, we would be remiss in not reporting the existence of the circumstances surrounding the six cases.

The Director pointed out in his letter that limited term and limited emergency SES appointments are not considered by OPM to be political appointments. We recognize that OPM has not traditionally recognized such appointments as being political appointments. Among other things, however, they share certain characteristics with the noncareer SES political appointments. For example, limited term SES appointments can be made noncompetitively and appointees serve at the pleasure of the agency head. On the basis of discussions with your offices, and as pointed out in footnote 1 of this report, we treated both limited term and limited emergency SES appointments for purposes of this assignment as political appointments when the incumbents of those positions subsequently obtained career appointments.
The Director also clarified the use of a specific SES appointment authority under which a candidate’s technical qualifications are deemed to offset the lack of some of the general managerial qualifications required for an SES career appointment. We incorporated this clarification in our description in appendix I of the case involving the career appointment of a Department of Energy employee. One of the other cases involved an OPM appointment, and the Director provided clarification of the role his former Chief of Staff played in the creation of the position to which he (the former Chief of Staff) obtained a career appointment. Based on this clarification, we augmented our description of this case to include language intended to more clearly describe the former Chief of Staff’s role in creating the position. In clarifying the role, the Director noted our concern that the appointment may have negatively affected other agencies’ views toward OPM as the lead organization for ensuring that agencies follow merit system principles. The Director said he considers oversight and protection of the merit system to have been the core function of OPM during his tenure.

As agreed with your Committees, unless you publicly announce this report’s contents earlier, we plan no further distribution of it until 10 days after the date of this letter. We will then send copies to the Ranking Minority Members of your Committee and Subcommittee, the Chairmen and Ranking Minority Members of the Senate Governmental Affairs and House Government Reform and Oversight Committees, other appropriate congressional committees, the Director of OPM, the heads of other agencies where we did our work, and other interested parties. We will also make copies available to others on request. Major contributors to this report were Richard W. Caradine, Assistant Director; N. Scott Einhorn, Evaluator-in-Charge; Anthony Assia, Evaluator; Carolyn L. Samuels, Evaluator; and Stephen J. Kenealy, Technical Advisor. Please contact me at (202) 512-9039 if you have any questions.

In May 1994, an individual was noncompetitively appointed by the Department of Defense (DOD) to an excepted service, Schedule C, position at the General Schedule (GS) 14 level. Prior to obtaining this position, the individual had worked for approximately 5 years on the U.S. House of Representatives Committee on Small Business. In March 1996, the individual obtained a career appointment to a competitive service position at DOD. Results from our examination of the case indicated that the vacancy announcement for the competitive service position appeared to have been tailored to the work experience of the individual appointed. The announcement contained work experience requirements that closely matched the specific work experiences of the individual, including “detailed knowledge of, and experience with, the Congressional legislative process, particularly in the Small Business Committees.”

The May 1994 excepted service appointment was to a temporary Schedule C, GS-14 Staff Specialist position for which the appointment was not to exceed September 11, 1994. On July 24, 1994, the individual was converted from the temporary appointment to a permanent excepted service appointment as a Schedule C Staff Specialist. The March 1996 career appointment was to a Program Analyst position that was initially advertised in September 1995 and subsequently readvertised in November 1995.
The initial vacancy announcement had limited the area of consideration to “Current Status Department of Defense Employees, Eligible Disabled, and 30% Disabled Veterans.” According to DOD personnel officials, this area of consideration restricted competition to only those candidates who (1) had competitive service status and were already employed by DOD or (2) were eligible disabled veterans. We noted that the individual who was selected for the position had not acquired competitive service status and, under the area of consideration specified in the initial advertisement, would not have been eligible to apply or be considered for the position.

The area of consideration in the November 1995 vacancy announcement was changed to “All Sources.” According to DOD personnel officials, the “All Sources” designation meant that the position was open to competition among all candidates, including those who did not have competitive service status in the federal government, such as the individual who was selected for the position. However, the qualifications of such applicants would first have to be reviewed by the Office of Personnel Management (OPM) in order to (1) certify the individual’s eligibility for the position and (2) rate and rank the applicants against others lacking competitive service status who were seeking the position. According to DOD personnel officials, the original intention of DOD managers was to announce the position to sources both within and outside the government, and so the restricted area of consideration in the original announcement was a clerical error.

Both vacancy announcements contained several duties and assessment factors that appeared to be tailored to the work experiences of the individual. For example, one of the duties listed was to serve as the manager of the Small Business Innovation Research program and related small business research programs. According to information contained in the application materials of the individual, he had been serving as the acting manager of the Small Business Innovation Research and Small Business Technology Transfer programs. Of the five assessment factors listed in the amended vacancy announcement, one of them required detailed knowledge of the statutes and operations of the Small Business Innovation Research program; one required knowledge of and experience with the congressional legislative process, particularly in the Small Business Committees; and one required thorough knowledge and understanding of the academic literature bearing on technology policy and management. The first two matched the individual’s work experience as claimed on his application materials. The third assessment factor also matched information cited on the individual’s application materials in which he listed six published articles on technology policy that had appeared in such publications as The Economist and Science. Additionally, the individual stated that he had coauthored a publication entitled Dual Use Technology: A Defense Strategy for Affordable, Leading Edge Technology. The other two assessment factors required general knowledge and understanding of innovative solutions to complex problems and familiarity with the management of RDT&E (research, development, test, and evaluation) within DOD, qualifications that were not specifically addressed in the individual’s application materials.

Documents contained in DOD’s files indicated that 42 persons, including the individual, applied for the competitive service position.
Nineteen of the 42, including the individual, were determined to be among the best qualified. DOD tentatively selected the individual, then asked OPM to determine whether the individual would be within reach on an OPM certificate of eligibles. OPM determined that the individual was eligible for the position and sent DOD a certificate of eligibles containing only the name of the individual. It was from this certificate that the individual was officially selected.

In March 1994, an individual was noncompetitively appointed by the U.S. Agency for International Development (AID) to an excepted service, administratively determined pay rate position equivalent to the GS-15 level. According to documents contained in the employee’s official personnel folder (OPF), prior to obtaining this position, the individual had worked for approximately 2 years as a congressional staff member. In April 1996, the individual was selected for a career appointment to a competitive service position at AID. Results from our examination of the case indicated that the vacancy announcement for the competitive service position appeared to be tailored to the work experience of the individual. The announcement contained work experience requirements that closely matched the specific work experiences of the individual, including knowledge and understanding of the legislative authorization and appropriations process. Because the individual did not have competitive service status in the federal government, OPM had to review the individual’s qualifications in order to (1) certify the individual’s eligibility for the position; and (2) rate and rank the individual against other qualified, nonstatus applicants who were seeking the position. For this reason, the close matching of the experience requirements in the vacancy announcement to the work experience of the individual could have affected the outcome of OPM’s review.

The March 1994 excepted service appointment was to a Program Manager position equivalent to the GS-15, step 9, salary level. The authority used for this noncompetitive appointment was provided by section 625(b) of the Foreign Assistance Act of 1961, as amended. According to an AID personnel official, such appointments are labeled by AID as administratively determined (AD) pay rate appointments in which (1) individuals appointed under this authority serve at the pleasure of the AID Administrator and can be removed upon notice, and (2) the salary levels can be determined by the AID Administrator.

The April 1996 career appointment was to a GS-14 Program Manager position and resulted from a competitive selection process in which the vacancy was announced to the public, applications were received and screened, the best-qualified applicants were identified, qualifications of best qualified nonstatus applicants were reviewed by OPM, certificates of eligibles for selection were prepared, and the individual was selected. Our examination of the case indicated that AID appeared to have followed proper procedures in the competitive selection process. Even so, certain factors about the vacancy announcement may have enhanced the individual’s prospects of being found to be among the best qualified and eligible for selection. The vacancy announcement for the competitive service position indicated that both status and nonstatus applicants could apply. Therefore, the individual—who did not have status—was eligible to apply for the position.
According to an AID personnel official, many of AID’s positions are highly technical in nature and therefore potentially qualified applicants are limited. As a result, vacancy announcements for such positions are frequently opened to all sources, including nonstatus applicants. In this case, however, the position did not appear to be highly technical. The AID personnel official indicated that AID management has the prerogative to announce vacant positions as being open to both status and nonstatus applicants in order to attract the best qualified applicants, regardless of their competitive status. The vacancy announcement also contained several duties that matched the duties and responsibilities section of the position description for the excepted service position to which the individual had been appointed in 1994. In addition, the vacancy announcement cited three selective factors that were to be used in evaluating the applicants’ qualifications. Two of the three factors matched the factors contained in the position description for the excepted position to which the individual had been appointed, and the third factor—concerning knowledge of the authorization and appropriations process—matched the work experience cited by the individual on application documents. Documents contained in AID’s files indicated that at least 15 persons, including the individual, applied for the competitive service position. Five of the 15, including the individual, were determined to be among the best qualified. OPM reviewed the qualifications and rated and ranked four of those five, including the individual, since the four did not have status. OPM’s rating and ranking resulted in a certificate of eligibles that showed the individual as ranked highest among the four and therefore eligible for selection. Under the applicable regulations, an agency may select from the top three rated and ranked eligibles on the OPM certificate, except that an agency should normally not bypass a preference-eligible veteran. (None of the four persons rated and ranked by OPM claimed veterans’ preference points in this case.) In our opinion, the individual’s chances of being placed among the top three could have been enhanced by the similarities between the vacancy announcement and the individual’s work experience. The fifth person found to be best qualified did have status; therefore, OPM did not review that person’s qualifications. AID personnel staff placed this person’s name alone on a separate certificate of eligibles from which the selection could also have been made. This case involved actions taken by the Department of Energy (DOE) to (1) appoint a former congressional employee to a 2-year limited term Senior Executive Service (SES) position in order to fill a purported critical vacancy, (2) approve detailing the employee from that position to another position shortly after appointing him, (3) select this individual about 10 months later for a career SES appointment to a specific position, and (4) reassign the individual to another SES position the same day his career appointment became effective. We believe that the circumstances surrounding DOE’s actions could give the appearance that a bona fide need for the initial limited term SES position may not have existed and that DOE did not intend for the employee to serve in the position for which he was initially selected.
On November 18, 1994, DOE’s Assistant Secretary for Energy Efficiency and Renewable Energy requested the Department’s Executive Resources Division to appoint a former congressional employee to the position of Deputy Assistant Secretary for Building Technologies. According to the employee’s application for federal employment, his position on a congressional committee had been abolished. The request was for a limited term SES appointment and was purported to be needed to fill a critical vacancy that occurred when the incumbent went on an Intergovernmental Personnel Act (IPA) assignment. The term appointment was not to exceed January 3, 1997, or the date when the incumbent returned from the IPA assignment. The Executive Resources Board (ERB) approved the request on November 21, 1994, pending the allocation of the limited term SES position by OPM. DOE received approval from OPM in a letter dated January 4, 1995, from the Chief of Staff for the Director of OPM. DOE appointed the employee to the limited term SES position effective January 4, 1995. An agency may make a limited term appointment without the use of merit staffing procedures, but the appointee must meet the qualification requirements for the position (see 5 CFR 317.603). Although the limited term position and appointment were to fill a critical need, within 2 weeks of his appointment, the ERB approved the detailing of the employee to the position of Deputy Assistant Secretary for House Liaison, Assistant Secretary for Congressional and Intergovernmental Affairs. However, agency documents contained in his OPF show that the detail was not officially effected until April 20, 1995, approximately 4 months after he received his limited term appointment. During the employee’s detail, DOE advertised the position for Deputy Assistant Secretary for Building Technologies as a career SES appointment. The vacancy announcement was advertised from July 12, 1995, to August 9, 1995. The employee applied for the position on August 7, 1995. DOE’s Merit Staffing Committee evaluated the applicants and made a final determination on October 18, 1995. Seventeen applications were received, and 4 of the 17 applicants were determined not to be qualified. Of the remaining 13 qualified applicants, the Committee rated 1 superior, 5 very good, and 4 acceptable. Three were found qualified as noncompetitive referrals. The employee received the superior rating. The applicants rated superior and very good were referred to the selecting official as the best qualified. The employee was approved for selection for the career SES appointment as Deputy Assistant Secretary for Building Technologies in the Office of Energy Efficiency and Renewable Energy on December 18, 1995, subject to OPM’s certification of his managerial qualifications. On the same day, however, DOE approved a request to reassign the employee to the position of Principal Deputy Assistant Secretary in the Office of Fossil Energy. Because this was the employee’s first career SES appointment, his executive/managerial qualifications needed to be certified by a Qualifications Review Board (QRB) convened by OPM. Federal personnel law requires that the qualifications of an individual selected for a career appointment to the SES for the first time must be certified by a QRB. On January 4, 1996, DOE submitted a request for certification to OPM. DOE’s submission requested approval of the candidate’s qualifications under 5 U.S.C. 
3393(c)(2)(A)—“consideration of demonstrated executive experience.” OPM notified DOE on January 30, 1996, that the QRB disapproved the certification because it found that two of the five executive core qualifications—Human Resources Management and Resources Planning and Management—were not supported at the executive level in the submission. DOE resubmitted a request for approval to OPM on February 15, 1996. The new request included a revised Standard Form 171, which expanded upon the employee’s work experience; several letters of endorsement from senior DOE executives; and an individual development plan (IDP) for the employee. Also, in the new submission, DOE requested approval of the employee’s qualifications under 5 U.S.C. 3393(c)(2)(C). This section provides for “sufficient flexibility to allow for the appointment of individuals who have special or unique qualities which indicate a likelihood of executive success and who would not otherwise be eligible for appointment.” A DOE official told us that the department believed the employee possessed the special qualities called for under 3393(c)(2)(C). According to OPM, 5 U.S.C. 3393(c)(2)(C) authority focuses on the qualifications of the applicant and is used when an individual brings unique technical qualifications to the position that offset the absence of some general managerial qualifications. OPM notified DOE on February 20, 1996, that a QRB certified the employee under 3393(c)(2)(C). On February 21, 1996, DOE approved a request to reassign the employee from his career SES appointment as Deputy Assistant Secretary for Building Technologies, Office of Energy Efficiency and Renewable Energy, to the position of Principal Deputy Assistant Secretary for Fossil Energy. The effective date was March 3, 1996. A DOE official told us this reassignment was made because there was a greater need to fill the latter position. On March 8, 1996, DOE selected another candidate for the Deputy Assistant Secretary for Building Technologies position. This candidate had been among the best qualified when the former congressional appointee was originally selected for that position. However, this candidate declined the offer in May 1996. DOE readvertised the position in September 1996 and selected another individual to fill the position in July 1997. This case involved actions taken by the Department of Commerce to select a noncareer SES employee for a career SES appointment to a vacated position that the same noncareer SES employee had authorized to be advertised and filled. Shortly after receiving the career appointment, the individual was reassigned to another SES position in the Department. The circumstances surrounding these actions could give the appearance that a bona fide need for the initial SES position may not have existed. The employee was hired as a Schedule C, Confidential Assistant in the immediate office of the Secretary of Commerce on February 2, 1993. She was appointed to a noncareer SES position on March 3, 1993, in the Department’s Office of the Assistant Secretary for Administration. Her position title was Deputy Assistant Secretary for Administration, with responsibilities for space allocations, parking, and virtually all other administrative matters, including human resources. According to the Director for Human Resources Management at the Department of Commerce, in the latter part of 1995, the noncareer SES appointee approved the filling of a vacant career SES position.
The Director, who worked for the political appointee, investigated the position and informed the political appointee that she did not think the position should be posted and filled for the following reasons: (1) Commerce was trying to reduce its number of SES positions; (2) the agency in which the position was located had a strong administration and did not need another executive position; and (3) the position would create an additional layer over other administrative positions at the agency, raising further concern about the need for the position. According to the Director, although the political appointee was aware of her concerns, the political appointee decided to post the position anyway. The position was advertised from August 28, 1995, to September 18, 1995, and was open to all qualified applicants. Commerce received 12 applications. One of the applicants was the political appointee. The Director told us she was unaware that the political appointee had intended to apply for the position. After learning of this, the Director sent all the applications to the Bureau of the Census so that its personnel office could do the merit staffing and ranking process. This was done to avoid any appearance of impropriety, because the political appointee was the Director’s “noncareer” supervisor. Of the 12 applicants, 4 were disqualified in the preliminary screening for failing to address all of the qualification requirements, and 4 others were deemed not qualified for the position. The screening panel ranked the political appointee as “highly qualified” and ranked the other three as “qualified.” All four were referred to the selecting official, who selected the noncareer SES appointee for the position. Commerce sent the employee’s qualifications for the career SES appointment to OPM to be certified by a QRB, the last step in the SES merit staffing process. OPM also conducted a merit staffing review of this appointment as part of its oversight of conversions of political appointees to career positions. OPM concluded that the staffing process appeared to have been conducted in conformance with all applicable laws and regulations. The career appointment was effected on January 21, 1996. On March 31, 1996, approximately 2 months after being appointed, the former political appointee was reassigned to another career SES position in another agency of the Department of Commerce. In March 1996, the OPM Chief of Staff—who was holding a noncareer SES appointment to that position—obtained a career SES appointment to the position of Director, Partnership Center. The Partnership Center position resulted from an OPM study, and the Chief of Staff was the highest-ranking official on the task force that performed the study and recommended creation of the Partnership Center. His selection to the position was made by the OPM Director. These circumstances, we believe, could give the appearance of favoritism in the Chief of Staff’s selection over other applicants for the position and created an unfavorable situation for OPM: as the government’s principal agency charged with governing the merit selection process, it placed itself in a position in which the merits of its own personnel actions were subject to question. In response to the administration’s National Performance Review call for “reinventing” government, in 1994 the Director of OPM established the OPM Redesign Task Force to study the organizational structure of OPM and to recommend a design for the OPM of the future.
Members of the task force included OPM employees from management, employee groups, and unions. The highest-ranking member was the Director’s Chief of Staff. In August 1994, the task force proposed to the OPM Director that a number of OPM service “centers” be created. One of the proposed centers was the Partnership Center, which was intended to aid and encourage government managers and government employee union officials to work together—in partnership—in addressing government employment issues. According to the OPM Director, the task force recommendations were referred to an OPM Business Council to work on implementation issues and to propose modifications as necessary. The Chief of Staff was a member of the Business Council, but according to the OPM Director, the Chief of Staff was not a member of the Business Council subgroup that was working on the Partnership Center proposal. The Business Council completed its implementation plan in December 1994, and in January 1995, the Director announced to OPM employees the plan for redesigning OPM, including the establishment of the Partnership Center. According to a report by the OPM Inspector General (IG), the Center’s business was to be handled by the Chief of Staff with assistance from the OPM Director of Program Management until OPM decided whether to provide permanent staff to the Center. In October 1995, after internal conditions stabilized, OPM decided to recruit for several SES positions, including the Director, Partnership Center. The Chief of Staff, along with other individuals, applied for the Partnership Center position and was rated by OPM’s ERB as among the best qualified for the position. As the selecting official, the OPM Director received the best qualified list; from among those on the list, he selected the Chief of Staff for the position. Because it would be the Chief of Staff’s first career appointment into an SES position, OPM convened a QRB, which was composed of SES members from other agencies, to review the Chief of Staff’s qualifications for the appointment. The QRB considered him highly qualified. Concerns raised by the media about the selection of the OPM Chief of Staff for the position of Director, Partnership Center, included claims that the Chief of Staff was preselected for the position and that he had used his political connections to “burrow” into a career government appointment in order to obtain job security that is not afforded political appointees. We also had concerns about the selection, because the Chief of Staff appeared to play a key role in helping to create the position. His selection may have had a negative effect on other agencies’ views of OPM as the lead organization for ensuring that government agencies follow merit system principles. Partially as a result of the published criticism, OPM’s IG reviewed the case. The IG found some administrative oversights that were common to many SES appointments within OPM but concluded that there was no legal or regulatory impropriety regarding the individual’s career appointment. From our own examination of the case, we also concluded that there was no evidence of legal or regulatory impropriety. However, the appearance of favoritism or preselection cannot be easily dismissed. According to OPM officials, the Chief of Staff had previously worked closely with the OPM Director in a similar position for another government agency, and the OPM Director recruited him for the position of Chief of Staff at OPM.
He worked closely with the Director of OPM for 3 years as the Director’s Chief of Staff, and he was selected for the career position by the OPM Director. High-ranking agency officials told us that they were surprised that the political appointee applied, and was selected, for the position, given the negative perceptions the public might form. Nevertheless, the agency officials advised us that political appointees are not prohibited from applying, or being selected, for career appointments in the government; they believed the individual was the most qualified applicant for the position. In this case, the Department of Veterans Affairs’ (VA) actions in making a career appointment to an SES position could give the appearance that the selected Schedule C employee received preferential treatment when VA decided to reopen the competition for the position. The Schedule C employee had served as a GS-15 Special Assistant to the Secretary of VA from March 1993 to February 1996 before receiving a career SES appointment as Deputy Assistant Secretary for Congressional Affairs on February 11, 1996. VA issued a vacancy announcement for a career SES appointment to the position of Deputy Assistant Secretary for Congressional Affairs in February 1995. The vacancy announcement was open from February 22, 1995, to March 7, 1995, and sought applications from all persons qualified within the federal government. According to an OPM document, a VA official said that, just after the first announcement closed, the VA Assistant Secretary for Congressional Affairs learned about some potential candidates who had not applied. VA decided to reopen the announcement for applications from April 5, 1995, to April 18, 1995. Candidates from the first and second announcements were considered together after the April 18, 1995, closing date. The Schedule C employee who eventually received the appointment had not applied under the first two announcements and had served on the panel that rated the applications submitted under those announcements. The screening panel considered all the minimally qualified candidates and sent a list of 16 highly qualified candidates to VA’s ERB panel for consideration. The ERB ranked the candidates referred and identified the five best-qualified candidates, and then referred its list to the Assistant Secretary for Congressional Affairs, who was the nominating official. The nominating official selected a candidate and referred him to the Secretary for approval. After the Secretary’s approval, the candidate, a White House employee, was offered the appointment, but he declined the offer on July 13, 1995. Rather than selecting one of the other best-qualified candidates, VA readvertised the position from July 26, 1995, to August 8, 1995, to individuals within and outside the federal government. Documentation in VA’s staffing file for this appointment indicated the reason for readvertising the vacancy was that “so much time had passed, and because it was decided that the search for candidates should be broadened . . . .” VA notified previous applicants that they remained under consideration and that there was no need to reapply. The Schedule C employee and 37 other candidates applied for the position under the third vacancy announcement. Since the Schedule C employee had become a candidate, he was replaced on the screening panel.
The screening panel again considered all the minimally qualified candidates and sent a list of 20 highly qualified candidates to VA’s ERB. The ERB reviewed the applications, identified the nine best-qualified candidates, and referred them to the same official who would nominate a selection for the VA Secretary’s approval. Of the nine, four had been on the best-qualified list developed from the earlier vacancy announcements, and five, including the Schedule C employee, were new. The Schedule C employee, who at one time served as the congressional liaison for a veterans’ organization, was nominated for selection. On approval of the Schedule C employee’s selection by the VA Secretary on October 12, 1995, his qualifications for the appointment were sent to OPM for certification by a QRB. Because of the sensitivity of staffing actions involving conversions of political appointees to career appointments, OPM conducted a merit staffing review before submitting this case to a QRB. OPM concluded that the staffing process appeared to have been conducted in conformance with all applicable laws and regulations and forwarded the Schedule C employee’s qualifications to the QRB. The QRB certified that the employee was qualified for the SES appointment and informed VA that he could receive a career appointment in the SES. The appointment was effected on February 11, 1996.

The appointment authorities and criteria used in these cases include the following.

Appointments of former legislative branch employees: An individual must (1) serve for at least 3 years in the legislative branch and be paid by the Secretary of the Senate or the Clerk of the House of Representatives; (2) be involuntarily separated without prejudice from the legislative branch; (3) pass a suitable noncompetitive examination (i.e., be qualified for the position being sought); and (4) transfer to the career position within 1 year of being separated from the legislative branch.

Career SES appointments (5 U.S.C. 3393): OPM shall, in consultation with the various qualification review boards, prescribe criteria for establishing executive qualifications for appointment of career appointees. The criteria shall provide for (1) consideration of demonstrated executive experience, (2) consideration of successful participation in a career executive development program that is approved by OPM, and (3) sufficient flexibility to allow for the appointment of individuals who have special or unique qualities that indicate a likelihood of executive success and who would not otherwise be eligible for appointment. Each career appointee shall meet the executive qualifications of the position to which appointed, as determined in writing by the appointing authority.

Noncompetitive hiring authority: This authority covers positions other than those of a confidential or policy-determining character for which it is impractical to examine.

Reinstatement: An agency may appoint by reinstatement to a competitive service position a person who previously was employed under career or career-conditional appointment (or equivalent). There is no time limit to the reinstatement eligibility of a preference-eligible or a person who completed the service requirement for career tenure. An agency may reinstate a nonpreference-eligible who has not completed the service requirement for career tenure only within 3 years following the date of separation.
This time limit begins to run from the date of separation from the last position in which the person served under a career appointment, career-conditional appointment, indefinite appointment in lieu of reinstatement, or an appointment under which the person acquired competitive status. The 3-year limit can be extended for certain intervening service.

Placement of former presidential appointees: A career appointee who is appointed by the president to any civil service position outside the SES and who leaves the position for reasons other than misconduct, neglect of duty, or malfeasance shall be entitled to be placed in the SES if the appointee applies to OPM within 90 days after separation from the presidential appointment.
Pursuant to a congressional request, GAO provided information on the appointments of 36 former political appointees and legislative branch employees to positions in the executive branch between January 1996 and March 1997, focusing on whether: (1) appropriate authorities were used and proper procedures were followed in appointing former political appointees and legislative branch employees; and (2) the circumstances surrounding any of the appointments gave the appearance of favoritism or preferential treatment in the appointment process, even if proper procedures were followed. GAO did not independently determine whether the 36 employees were qualified for the positions to which they were appointed. GAO noted that: (1) on the basis of GAO's review of relevant personnel files and documents and discussions with agency officials, GAO believes the 18 agencies that provided career appointments to the 36 former political appointees and legislative branch employees used the appropriate appointment authority to hire each of them and followed proper procedures in making the appointments; (2) although the appropriate appointment authorities were used, the reference citations on the effecting documents for 3 of the 36 appointments were incorrect; (3) personnel officials from the employing agencies stated that the incorrect citations were due to administrative error and that corrections would be made; (4) the three appointments did not involve circumstances that, in GAO's opinion, could give the appearance of favoritism or preferential treatment; (5) however, notwithstanding use of the appropriate authority and proper procedures, the circumstances surrounding six of the appointments could, in GAO's opinion, give the appearance that the appointees had received favoritism or preferences that enhanced the appointees' prospects of appointment; (6) for example, in two cases, the vacancy announcements for the positions to be filled, which outlined the qualifications (e.g., work experience) that the agencies were seeking from applicants, appeared tailored to include specific work experiences possessed by the two appointees; (7) under such circumstances, one would expect these applicants to fare very well in the qualifications review portion of the appointment process, which they did; and (8) the remaining 30 appointments did not raise comparable questions of the appearance of favoritism or preference.
Ports play an important role in the nation’s economy and security. Ports are used to import and export cargo worth hundreds of billions of dollars, generating jobs, both directly and indirectly, for Americans and our trading partners. Ports, which include inland waterways, are used to move bulk agricultural, mineral, petroleum, and paper products. Ports are also used to move cargo containers (as shown in fig. 1)—one of the most important segments of global commerce, accounting for 90 percent of the world’s maritime cargo. In 2002, approximately 7 million containers arrived in U.S. seaports, carrying more than 95 percent of the nation’s non-North American trade by weight and 75 percent by value. Ports also contribute to the economy through recreational activities such as boating, fishing, and cruises. As an indication of the economic importance of ports, a 2002 simulation of a terrorist attack at a port, which involved the temporary closure of every seaport in the United States, resulted in an estimated loss of $58 billion in revenue to the U.S. economy, including spoilage, loss of sales, manufacturing slowdowns, and halts in production. Ports are also important to national security because they host naval bases and vessels, facilitate the movement of military equipment, and supply troops deployed overseas. Since the terrorist attacks of September 11, the nation’s 361 seaports have been increasingly viewed as potential targets for future terrorist attacks. Ports are vulnerable because they are sprawling, interwoven with complex transportation networks, close to crowded metropolitan areas, and easily accessible. Ports and their maritime approaches facilitate a unique freedom of movement and flow of goods while allowing people, cargo, and vessels to transit with relative anonymity. Because of their accessibility, ports are vulnerable to a wide variety of types of attacks. Cargo containers—mentioned above as important to maritime commerce—are a potential conduit for terrorists to smuggle weapons of mass destruction or other dangerous materials into the country. Finally, ports contain a number of specific facilities that could be targeted by terrorists, including military vessels and bases, cruise ships, passenger ferries, terminals, dams and locks, factories, office buildings, power plants, refineries, sports complexes, and other critical infrastructure. The responsibility for protecting ports from a terrorist attack is a shared responsibility that crosses jurisdictional boundaries, with federal, state, and local organizations involved. For example, at the federal level, the Department of Homeland Security (DHS) has overall homeland security responsibility, and the Coast Guard, an agency of the department, has lead responsibility for maritime security. Other federal departments that may be involved include the Department of Defense (DOD) and DOJ. The Coast Guard and other federal agencies share their security responsibilities with several local stakeholder groups. Some port authorities, operated privately or by the state or local government, have responsibility for protecting certain facilities in and around ports. Port authorities provide protection through designated port police forces, private security companies, and coordination with local law enforcement agencies.
Private sector stakeholders play a major role in identifying and addressing the vulnerabilities in and around their facilities, which may include oil refineries, cargo facilities, and other property adjacent to navigable waterways. Information sharing among federal, state, and local officials is central to port security activities. The Homeland Security Act of 2002 and several congressionally chartered commissions call attention to the importance of sharing information among officials from multiple jurisdictions as a way to prevent or respond to a terrorist attack. The act recognizes that the federal government relies on state and local personnel to help protect against terrorist attacks, and these officials need homeland security information to prevent and prepare for such attacks. One of the congressionally chartered commissions’ reports—the 9/11 Commission Report—emphasized the importance of sharing information among federal and nonfederal entities as a means of deterring a terrorist attack in the future. In January 2005, we designated information sharing for homeland security as a high-risk area because the federal government still faces formidable challenges in gathering, identifying, analyzing, and disseminating key information within and among federal and nonfederal entities. Information sharing between federal officials and nonfederal officials can involve information collected by federal intelligence agencies. In order to gain access to classified information, state and local law enforcement officials generally need to apply for and receive approval to have a federal security clearance. Presidential Executive Order 12968, Access to Classified Information, dated August 1995, established federal criteria for granting access to classified information. As implemented by the Coast Guard, the primary criterion for granting access to classified information is an individual’s “need to know,” which is defined as the determination made by an authorized holder of classified information that a prospective recipient requires access to specific classified information in order to perform or assist in a lawful and authorized governmental function. To obtain a security clearance, an applicant must complete a detailed questionnaire that asks for information on all previous employment, residences, and foreign travel and contacts going back 7 years. After submitting the questionnaire, the applicant then undergoes a variety of screenings and checks by the Coast Guard Security Center. The Office of Personnel Management conducts background investigations on the applicant. The Maritime Transportation Security Act, passed in the aftermath of the September 11 attacks and with the recognition that ports contain many potential security targets, provided for area maritime security committees to be established by the Coast Guard at ports across the country. A primary goal of these committees is to assist the local Captain of the Port—the senior Coast Guard officer who leads the committee—to develop a security plan—called an area maritime security plan—to address the vulnerabilities and risks in that port zone. In developing these plans, the committees serve as forums to communicate with stakeholders from federal agencies, state and local governments, law enforcement, and private industries in an effort to gain a comprehensive perspective of security issues at a port location. The committees also serve as a link for communicating threats and disseminating security information to port stakeholders.
In all, the Coast Guard organized 43 area maritime security committees, covering the nation’s 361 ports. Besides the Coast Guard, federal agencies such as Customs and Border Protection, the FBI, and the Maritime Administration may be part of the committee. State, local, and industry members could include officials from port authorities, oil refineries, and local police or fire departments. Appendix II lists the various stakeholder groups that may be eligible for committee membership. To supplement the statutory and regulatory framework of the committees, the Coast Guard developed specific guidelines on communication and collaboration among committee members. This guidance emphasizes the importance of information in successfully implementing security measures and recognizes that the committee structure allows stakeholders to identify other federal, state, and local agencies that are simultaneously developing security standards for other critical infrastructure, such as bridges and airports. The guidance tasks the committee with developing information sharing procedures for various situations, including relaying instances of suspicious activity to appropriate authorities and communicating threat information to port stakeholders, among other things. Another approach to improving information sharing and port security operations involves interagency operational centers—command centers that bring together the intelligence and operational efforts of various federal and nonfederal participants. These centers provide intelligence information and real-time operational data from sensors, radars, and cameras at one location to federal and nonfederal participants 24 hours a day. The three current centers are in Charleston, South Carolina; Norfolk, Virginia; and San Diego, California. Two of the centers (Norfolk and San Diego) are located in ports that have a substantial number of vessels and facilities operated by the Department of the Navy. The third center (Charleston) is located at a port that moves military equipment in and out and is a major container cargo port. The development of interagency operational centers represents an effort to improve awareness of incoming vessels, port facilities, and port operations. In general, these centers are jointly operated by federal and nonfederal law enforcement officials. The centers can have command and control capabilities that can be used to communicate information to vessels, aircraft, and other vehicles and stations involved in port security operations. While area maritime security committees and interagency operational centers are port-level organizations, they are supported by, and provide support to, a national-level intelligence infrastructure. National-level departments and agencies in the intelligence and law enforcement communities may offer information that ultimately could be useful to members of area maritime security committees or interagency operational centers at the port level. These intelligence and law enforcement agencies conduct maritime threat identification and dissemination efforts in support of tactical and operational maritime and port security efforts, but most have missions broader than maritime activities as well. In addition, some agencies have regional or field offices involved in information gathering and sharing. See appendix III for a description of the departments and agencies or components involved in maritime information sharing at the national and port levels.
Area maritime security committees have improved information sharing among port security stakeholders, including the timeliness, completeness, and usefulness of the information shared. The types of information shared include assessments of vulnerabilities at specific port locations, information about potential threats or suspicious activities, and strategies the Coast Guard intends to use in protecting key infrastructure. These efforts at sharing information generally did not exist prior to the creation of area maritime security committees. At the ports we visited, the collaboration and sharing of information among committee members reflected the different types of stakeholders and variations in the information needs of each port location. While improvements were noted, it is too early to determine if any one port has developed a better structure for information sharing than another, because the committees have been operating for only a little over a year. Area maritime security committees have provided a structure to improve the timeliness, completeness, and usefulness of information sharing. For example, a primary function served by the committees was to develop security plans for port areas—called area maritime security plans. The goal of these plans was to identify vulnerabilities to a terrorist attack in and around a port location and to develop strategies for protecting a wide range of facilities and infrastructure (as shown in fig. 2). In doing so, the committees established new procedures for sharing information by holding meetings on a regular basis, issuing electronic bulletins on suspicious activities around port facilities, and sharing key documents, including vulnerability assessments and the portwide security plan itself, according to committee participants. These activities did not exist prior to the creation of the committees, and they have contributed to the improvements in information sharing. The area maritime security plan provides a framework for communication and coordination among port stakeholders and law enforcement officials, and identifies strategies for reducing vulnerabilities to security threats in and near ports. It is designed to capture the information necessary to coordinate and communicate security procedures at each maritime security level, complement and encompass facility and vessel security plans, and ultimately be integrated into the National Maritime Security Plan. Coast Guard officials and nonfederal stakeholders we contacted agreed that efforts such as these have improved information sharing. Committee participants we spoke with noted that an essential component that has improved the timeliness of information sharing has been the development of both formal and informal stakeholder networks resulting from the formation of area maritime security committees. As part of the process for developing the plan, the committee identifies critical stakeholders and assembles their contact information, allowing for timely dissemination of relevant information. For example, in the event the Coast Guard learns of a potential and credible threat, the committee would designate who should be contacted, the order in which members should be contacted, and what information the committee provides or receives. Participants in the committees told us that the interactions of committee members have also led to the formation of informal stakeholder networks as committee members encounter other stakeholders with similar concerns and perspectives.
The committee also provides a forum for real-time sharing of information between stakeholders through meetings or electronic communications. For example, our discussions with federal and nonfederal officials at the ports of Charleston and Houston indicated that committee members representing private industries were granted access to daily information bulletins that they had not received prior to the formation of area maritime security committees, and these information bulletins have allowed them to stay informed of important Coast Guard decisions. In Houston, the Captain of the Port has used such bulletins to notify and inform local stakeholders of unannounced drills, changes in security levels, and Coast Guard guidance for vessel inspections and voluntary screening. In Charleston, bulletins have been used to share information on closure of waterways, release of new regulations, and methods for preventing a possible terrorist attack. At the ports we visited, committee members noted that their participation has allowed them to disseminate more complete information and receive more useful information in return. Committee members representing the private sector at two of the ports we visited noted an increased willingness to disclose vulnerabilities to federal stakeholders with confidence that the information would be protected. Coast Guard officials noted that access to more complete information regarding vulnerabilities and threats at individual facilities has aided them in mitigating risks. Additionally, having a complete view of vulnerabilities at the port as a whole has been useful in identifying gaps and common security needs. For example, while private sector stakeholders are sharing their written assessments of their vulnerabilities with the Coast Guard, the Coast Guard is, in turn, sharing its strategies for the overall protection of ports against potential terrorist activities. State and local port authority operators and private sector stakeholders commented that the committees have increased their awareness of security issues around the port and that information received from the Coast Guard has been useful in identifying and addressing security concerns at their facilities. Efforts at sharing information prior to the creation of area maritime security committees had not produced such effects. While the committees are required to follow the same guidance regarding their structure, purpose, and processes, each of the committees is allowed the flexibility to assemble and operate in a way that reflects the needs of its port area. Each port is unique in many ways, including the geographic area covered and the type of operations that take place there. These port-specific differences influence the number of members that participate, the types of state and local organizations that members represent, and the way in which information is shared. One aspect of this flexibility is the way in which information is channeled to specific stakeholders. The representation of various stakeholders on a committee can cause differences in the type of information that is shared. While committee members from federal agencies may have access to classified information because they have obtained a security clearance, other members may receive a sanitized version of the information or be restricted from participating in certain committee meetings.
To address this situation, some committees have formed subcommittees that deal with classified materials such as intelligence reports or details of military deployments. The role stakeholders play in protecting strategic assets or the type of cargo they handle may also affect what types of information they receive as well as what types of information they can share with the committee at large. For example, at one port we visited, the details regarding a shipment of a sensitive material were restricted to committee members who had a direct involvement in the operation. The committees also show marked differences in how their meetings occur, and these differences in turn affect the specific ways in which information is shared. For example, at Baltimore, officials told us that committee meetings are open to the general port community and can draw over 80 participants in addition to the 48 committee members. Coast Guard officials told us that such a large attendance made it difficult to conduct committee business. To include all interested stakeholders in the information network while maintaining a working structure for the committee, the Captain of the Port designated 17 members to an executive committee, while the remaining 31 members served on a general committee. This structure allowed the committee to incorporate a large amount of stakeholder input and to share information with all interested parties while keeping the decision-making duties of the committee at a manageable level. In contrast to Baltimore’s 48 members, the Puget Sound area maritime security committee consists of 25 members who each share in decision making. The smaller committee allows for greater familiarity among members as well as immediate decision making at meetings because stakeholders with decision-making authority are all present. At least two of the other committees we reviewed leveraged existing information sharing networks, such as trade and industry associations, by having Coast Guard officials participate in these groups. For example, at Charleston, Coast Guard officials noted that many of the stakeholders included on the area maritime security committee were already members of a local maritime association that had been operating since 1926. Officials from the Coast Guard and other federal agencies are members of the association and use the group’s meetings as one way of sharing information with stakeholders. Coast Guard officials noted that while this approach may reduce the role and level of participation in the committee, it avoids duplication of efforts and allows the committee to be part of a broader information sharing network. At the port of Houston, the strong presence of the petrochemical industry also made sharing information easier since an association of petrochemical companies was already in place, according to local petrochemical and Coast Guard officials. Regardless of the structures and communication networks a committee adopted, stakeholders at all four locations we reviewed agreed that the committees fostered improved information sharing. We were not able, however, to determine if any of these structures worked better than others for two reasons. First, the different structures reflected the specific needs of each port location. Second, the committees are still in their early stages of operation and more time will be needed before any comparative assessments can be made.
Interagency operational centers—command centers where officials from multiple agencies can receive data 24 hours a day on maritime activities—have further improved information sharing at three locations. According to participants at each of these centers, the improvements come mainly from the 24-hour coverage and greater amount of real-time operational data, which the centers can use in their role as command posts for coordinating multi-agency efforts. The Coast Guard plans to develop its own centers, called sector command centers, as part of an effort to reorganize and improve its awareness of the maritime domain. Some of these sector command centers may be interagency on either a regular or an ad hoc basis. However, the potential relationship between interagency operational centers and the Coast Guard’s new sector command centers remains to be determined, pending a Coast Guard report to Congress. The three existing interagency operational centers (Charleston, Norfolk, and San Diego) represent a step toward further improving information sharing, according to participants at all three centers. They said area maritime security committees have improved information sharing primarily through a planning process that identifies vulnerabilities and mitigation strategies, as well as through development of two-way communication mechanisms to share threat information on an as-needed basis. In contrast, interagency operational centers can provide continuous information about maritime activities and involve various agencies directly in operational decisions using this information. Radar, sensors, and cameras offer representations of vessels and facilities. Other data are available from intelligence sources, including data on vessels, cargo, and crew. For example:

In Charleston, four federal agencies (DOJ, Coast Guard, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement) coordinate in a unified command structure, and each of these agencies feeds information into the center. Eight state or local agencies (such as the county sheriff and the state’s law enforcement division) have participants at the center full-time, and eight others participate on an as-needed or part-time basis. Federal and nonfederal officials told us that information sharing has improved, since participants from multiple agencies are colocated with each other and work together to identify potential threats by sharing information.

In San Diego, the center is located in a Coast Guard facility that receives information from radars and sensors operated by the Navy and cameras operated by the local harbor patrol. Local harbor patrol officials are colocated with Coast Guard and Navy personnel. Harbor patrol and Coast Guard staff said the center has leveraged their resources through the use of shared information.

In Norfolk, the center is staffed with Coast Guard and Navy personnel and receives information from cameras and radars. A Coast Guard Field Intelligence Support Team is colocated at the center and shares information related to the large concentration of naval and commercial vessels in and around the port area with Navy and Coast Guard personnel. According to Coast Guard officials, having a central location where two agencies can receive data from multiple sources on a 24-hour-a-day basis has helped improve information sharing.

Greater information sharing among participants at these centers has also enhanced operational collaboration, according to participants.
Unlike the area maritime security committees, these centers are operational in nature—that is, they have a unified or joint command structure designed to receive information and act on it. In the three centers, representatives from the various agencies work side by side, each having access to databases and other sources of information from their respective agencies. The various information sources can be reviewed together, and the resulting information can be more readily fused. Officials said such centers help leverage the resources and authorities of the respective agencies. For example, federal and nonfederal participants collaborate in vessel boarding, cargo examination, and other port security responsibilities, such as enforcing security zones (as shown in fig. 3). If the Coast Guard determines that a certain vessel should be inspected on maritime safety grounds and intends to board it, other federal and nonfederal agencies might join in the boarding to assess the vessel or its cargo, crew, or passengers for violations relating to their areas of jurisdiction or responsibility. The types of information and the way information is shared vary at the three centers, depending on their purpose and mission, leadership and organization, membership, technology, and resources, according to officials at the centers. The Charleston center has a port security purpose, so its missions are all security-related. It is led by DOJ, and its membership includes four federal agencies and 16 state and local agencies. The San Diego center has a more general purpose, so it has multiple missions that include not just port security but also search and rescue, environmental response, drug interdiction, and other law enforcement activities. It is led by the Coast Guard, and its membership includes two federal agencies and one local agency. The Norfolk center has a port security purpose, but its mission focuses primarily on force protection for the Navy. It is led by the Coast Guard, and its membership includes two federal agencies and no state or local agencies. As a result, the Charleston center shares information that focuses on law enforcement and intelligence related to port security among a very broad group of federal, state, and local agency officials. The San Diego center shares information on a broader scope of activities (beyond security) among a smaller group of federal and local agency officials. The Norfolk center shares the most focused information (security information related to force protection) between two federal agencies. While Norfolk center officials said they were planning to broaden the scope of their purpose, mission, and membership, they had not done so at the time of our visit. The centers also share different information because of their technologies and resources. The San Diego and Norfolk centers have an array of standard and new Coast Guard technology systems and access to Coast Guard and various national databases, while the Charleston center has these as well as additional systems and databases. For example, the Charleston center has access to and shares information on Customs and Border Protection’s databases on incoming cargo containers from the National Targeting Center. In addition, Charleston has a pilot project with the Department of Energy to test radiation detection technology, which provides additional information to share. The Charleston center is funded by a special appropriation that allows it to use federal funds to pay for state and local agency salaries.
This arrangement boosts the participation of state and local agencies, and thus information sharing beyond the federal government, according to port stakeholders in Charleston. While the San Diego center also has 24-hour participation by the local harbor patrol, that agency pays its own salaries. In addition to the three interagency operational centers we visited, our work has identified other interagency arrangements that facilitate information sharing and interagency operations in the maritime environment. One example is a predesignated single-mission task force, which becomes operational when needed. DHS established the Homeland Security Task Force, South-East—a working group consisting of federal and nonfederal agencies with appropriate geographic and jurisdictional responsibilities whose mission is to respond to any mass migration of immigrants affecting southeast Florida. Task force members (both agencies and individuals) are predesignated, and they have a contingency plan (called Vigilant Sentry) that describes each agency’s specific coordination and mission responsibilities. The task force meets regularly to monitor potential migration events, update the contingency plan, and otherwise coordinate its activities. When a mass migration event occurs, the task force is activated and becomes a full-time interagency effort to share information and coordinate operations to implement the contingency plan. This task force was activated in February 2004 to implement Operation Able Sentry to interdict a mass migration from Haiti. Another interagency arrangement for information sharing involves single-agency operational centers that become interagency to respond to specific events. For example, the Coast Guard has its own command centers for both its District Seven and Sector Miami. While these centers normally focus on a variety of Coast Guard missions and are not normally interagency in structure, they have established protocols with other federal agencies, such as Customs and Border Protection and Immigration and Customs Enforcement, to activate a unified or incident command structure should it be needed. For example, the interagency Operation Able Sentry (discussed above) was directed from the Coast Guard’s District Seven command center. Similarly, to respond to a hijacking of a ship, an interagency operation was directed from the Coast Guard’s Sector Miami command center. While an interagency operation might be directed from these Coast Guard command centers, it might be led by another agency with greater interests or resources to respond to an event. For example, this was the case with a recent interagency operation, led by Immigration and Customs Enforcement, to arrange for the security of dignitaries at an international conference in Miami. These Coast Guard centers are able to host interagency operations because they have extra space and equipment that allow for surge capabilities and virtual connectivity with each partner agency. Officials from the Coast Guard, Customs and Border Protection, and Immigration and Customs Enforcement in Miami all said that these ad hoc interagency arrangements were crucial to sharing information and coordinating operations. The Coast Guard is planning to develop its own operational centers—called sector command centers—at additional ports.
These command centers are being developed to provide local port activities with a unified command as the Coast Guard reorganizes its marine safety offices and groups into unified sectors. In addition, the Coast Guard sector command centers are designed to improve awareness of the maritime domain through a variety of technologies. The Coast Guard is planning to have sector command centers feed information to the Coast Guard’s two area offices—one on the Pacific Coast and the other on the Atlantic Coast. Over the long term, the Coast Guard plans to have information from sector command centers and area offices channeled to a center at the national level—allowing the Coast Guard to have a nationwide common operating picture of all navigable waters in the country. A Coast Guard official indicated that this nationwide information will be available to other field office commanders at the same time it is given to area and headquarters officials. To develop this nationwide operating picture, the Coast Guard hopes, as part of its expansion plans, to install equipment that allows it to receive information from sensors, classified information on maritime matters, and data related to ships and crewmembers. Communication from Coast Guard ships and aircraft, as well as federal and nonfederal systems for monitoring vessel traffic and identifying the positions of large ships, would be among the other types of information that could be integrated into a command center. The Coast Guard plans to develop sector command centers at 10 port locations, with potential expansion to as many as 40 port locations. The Coast Guard is currently conducting site surveys to identify where it believes centers should be located. For fiscal year 2006, the Coast Guard is requesting funds that support its plans to improve awareness of the maritime domain by, among other things, continuing to evaluate the potential expansion of sector command centers to other port locations. For example, the Coast Guard’s budget request includes $5.7 million to continue developing a nationwide maritime monitoring system, the common operational picture. The common operational picture is primarily a computer software package that fuses data from different sources, such as radar, sensors on aircraft, and existing information systems. The Coast Guard has also requested funding to train personnel in deploying the common operational picture at command centers and to modify facilities to implement the picture in command centers. While the total cost of operating command centers is still unknown, the Coast Guard’s Five-Year Capital Investment Plan shows that the capital costs of this effort amount to an estimated $400 million, with acquisition of the system estimated to start in fiscal year 2007. The relationship between the interagency operational centers and the Coast Guard’s sector command centers has not yet been determined. Coast Guard sector command centers can involve multiple agencies, and the Coast Guard has begun using the term “sector command center—joint” for the interagency operational centers in San Diego and Norfolk. Coast Guard officials have told us that their planned sector command centers will be the basis for any interagency operational centers at ports. However, the sector command center we visited, in Sector Miami, was not interagency on a routine basis—the Coast Guard was the single entity operating the center.
During our visits to the interagency operational centers, port stakeholders raised the following issues as important factors to consider in any expansion of interagency operational centers: (1) purpose and mission—the centers could serve a variety of overall purposes, as well as support a wide number of specific missions; (2) leadership and organization—the centers could be led by several departments or agencies and be organized in a variety of ways; (3) membership—the centers could vary in membership in terms of federal, state, local, or private sector participants and their level of involvement; (4) technology deployed—the centers could deploy a variety of technologies in terms of networks, computers, communications, sensors, and databases; and (5) resource requirements—the centers could also vary in terms of resource requirements, which agency funds the resources, and how resources are prioritized. In a related step, Congress directed the Coast Guard to report on the existing interagency operational centers, covering such matters as the composition and operational characteristics of existing centers and the number, location, and cost of such new centers as may be required to implement maritime transportation security plans and maritime intelligence activities. This report, which Congress called for by February 2005, had not been issued by the time we completed our work and prepared our report for printing. According to DHS, the report has been written and approved by DHS and the Office of Management and Budget (OMB) and is now in the final stages of review at the Coast Guard. Until the report on the centers is issued, it is unclear how the Coast Guard will define the potential relationship between interagency operational centers and its own sector command centers. The lack of security clearances was the barrier most frequently cited as hindering more effective information sharing among port stakeholders, such as those involved in area maritime security committees and interagency operational centers. The Coast Guard has initiated a security clearance program for members of area maritime security committees. However, the results of the Coast Guard’s efforts have been mixed. For example, only a small percentage of application forms from state, local, and industry officials had actually been submitted by February 2005—over 4 months after the Coast Guard had developed its list of officials. The primary reason given for this was that Coast Guard field office officials did not clearly understand their role in helping nonfederal officials apply for a security clearance. The Coast Guard does not have formal procedures for using data to manage the program, but developing such procedures would allow it to identify and deal with possible problems in the future. Finally, as the Coast Guard moves forward with its state, local, and industry security clearance program, the experience of other federal agencies that manage similar programs suggests that the limited awareness of state, local, and industry officials about the process for obtaining a security clearance could also impede the submission of applications for a security clearance. At the ports we visited, the lack of security clearances was cited as a key barrier to information sharing among participants of the area maritime security committees and interagency operational centers we contacted.
Port stakeholders involved in the four area maritime security committees consistently stated that the lack of federal security clearances for nonfederal members was an impediment to effective information sharing. The following are several examples: An official of the Washington State Ferries who participates on the Puget Sound area maritime security committee said that not having a security clearance—and therefore the ability to access classified information—affected his ability to carry out security-related activities. He noted that the local U.S. Attorney reported to a local newspaper in the summer of 2004 that suspicious activities had been reported on the state ferry system. The Washington State Ferries official indicated that he or his staff was the source for some of the data but that federal officials would not provide him with more details on the activities because he did not have a security clearance. A Coast Guard field intelligence official corroborated this by stating that the Captain of the Port was unable to share classified information from the U.S. Attorney’s office that indicated a pattern of incidents involving the ferries. Although Coast Guard officials said they wanted to share this information, ferry officials’ lack of a federal security clearance precluded them from doing so. Both Coast Guard and ferry officials indicated that more complete information would aid local security officers in identifying or deterring illegal activities. A senior Maryland state official involved in making budget decisions on improving security around facilities in the port of Baltimore indicated that having a security clearance would aid his ability to make decisions on how the state could more effectively spend its resources on homeland security. He said information on which transportation sectors are probable targets would be a valuable input on where the state should prioritize its spending decisions. A senior Coast Guard official in Houston told us that granting security clearances to selected members of the area maritime security committee would make it easier for nonfederal officials to make decisions on how to respond to specific threats. A local Coast Guard intelligence official cited an example in which classified information could not be shared with port stakeholders: there were delays in sharing the information until the originator of the information supplied a sanitized version. Similar to the concerns expressed by area maritime security committee members, participants we contacted at the three interagency operational centers cited the lack of security clearances as a barrier to information sharing. At the center in San Diego, the chief of the local harbor patrol noted that the lack of security clearances was an issue for patrol staff who are involved in the center. After this issue was raised, DHS sponsored security clearances for 18 harbor patrol officials. In Charleston, participants in the interagency operational center cited the lack of security clearances as a potential barrier to information sharing. The Department of Justice addressed this potential barrier by granting security clearances to nonfederal officials involved in the center. Finally, Coast Guard officials indicated that when nonfederal officials begin working at the interagency operational center in Norfolk, granting security clearances to nonfederal participants will be critical to their success in sharing information.
According to the Coast Guard and state and local officials we contacted, the shared partnership between the federal government and state and local entities may fall short of its potential to fight terrorism because of the lack of security clearances. If state and local officials lack security clearances, the information they possess may be incomplete. According to Coast Guard and nonfederal officials, the inability to share classified information may limit their ability to deter, prevent, and respond to a potential terrorist attack. While security clearances for nonfederal officials who participate in interagency operational centers are sponsored by DOJ and DHS, the Coast Guard sponsors security clearances for members of area maritime security committees. For the purposes of our review, we examined in more detail the Coast Guard’s efforts to address the lack of security clearances among members of area maritime security committees. As part of its effort to improve information sharing at ports, the Coast Guard initiated a program in July 2004 to sponsor security clearances for members of area maritime security committees, but nonfederal officials have been slow in submitting their applications for a security clearance. By October 2004, the Coast Guard had identified 359 nonfederal committee members who had a need to know and should receive a security clearance, but as of February 2005, only 28 officials, or about 8 percent, had submitted the application forms for a security clearance. Twenty-four of these officials had been granted an interim clearance, which allows access to classified material while the final clearance is being processed. We interviewed local Coast Guard officials at the four ports we visited to gain a better understanding of the role of the Coast Guard in guiding state and local officials through the process. Our work shows that two issues affected the Coast Guard’s efforts: (1) local Coast Guard officials did not clearly understand their role in the security clearance program and (2) the Coast Guard did not use available data to track the status of security clearances for state and local officials. Coast Guard field office officials said they did not clearly understand their role in helping nonfederal officials apply for a security clearance. In July 2004, Coast Guard headquarters sent guidance to Coast Guard field offices requesting them to proceed with submissions of personnel security investigation packages and to submit the additional names of state and local officials who had a need for a security clearance. However, this guidance evidently was unclear to field office officials. For example, by January 2005—3 months after they submitted names to headquarters—Coast Guard officials at three of the ports we visited were still awaiting further guidance from headquarters on how to proceed. These officials said they thought that headquarters was processing security clearances for nonfederal officials, and they were waiting for notification from headquarters that security clearances had been granted. Our discussions with a Coast Guard field office official at the fourth port location suggest that additional guidance about the process for the state, local, and industry security clearance program could be beneficial.
For example, according to this official, membership on area maritime security committees changes over time, and it would be helpful to have guidance on the process for obtaining additional security clearances or dropping clearances for officials who no longer participate on the committees or who no longer have a need to know classified information. This official noted that the process differed depending on whether a committee participant is considered a military or civilian official. In early February 2005, we expressed our concerns about the security clearance program to Coast Guard officials. Based in part on these discussions, Coast Guard headquarters took action and drafted guidance informing its field office officials that they were responsible for contacting nonfederal officials and for providing them with application forms for obtaining a security clearance, according to Coast Guard officials. Additionally, to further clarify the role of field office officials, the Coast Guard’s draft guidance included information on various procedures for obtaining a security clearance. After receiving a draft of this report, the Coast Guard finalized this guidance and sent it to field office officials in early April 2005. Our review of the guidance shows that it clarifies the role of field office officials in administering the security clearance process at the local level and that it provides more detailed procedures on how to check the status of applications that have been submitted for a security clearance. In addition to drafting guidance on the program, the Coast Guard recently demonstrated that the security clearance program can produce positive results. For example, in late 2004, the Coast Guard determined the need to share the results of a security study on ferries, portions of which were classified, with some members of an area maritime security committee. Working with Coast Guard field office officials, Coast Guard headquarters and the Coast Guard Security Center were able to process and grant about a dozen security clearances to state, local, and industry officials. As a result, the Coast Guard was able to share classified information with state, local, and industry officials that it believed would help them in carrying out their port security responsibilities. A key component of a good management system is to have relevant, reliable, and timely information available to assess performance over time and to correct deficiencies as they occur. The Coast Guard has two databases that contain information on the status of security clearances for state, local, and industry officials. The first database is a commercial off-the-shelf system that contains information on the status of all applications that have been submitted to the Coast Guard Security Center, such as whether a security clearance has been issued or whether personnel security investigations have been conducted. In February 2004, the Coast Guard began testing the database for use by field staff, and while headquarters has not yet granted field staff access to the database, it plans to do so in the future. The second database—an internally developed spreadsheet on the 359 area maritime security committee participants—summarizes information on the status of the security clearance program, such as whether participants have submitted their application forms and whether they have received their clearances.
Although the Coast Guard has databases that could be used to manage the state, local, and industry security clearance program, it has not yet developed formal procedures for using the data as a management tool to verify the status of clearances and follow up on possible problems at the national or local level. In regard to the database used by the Security Center, a Coast Guard official told us that the database was not designed to monitor application trends but instead is used to provide information on individual applicants. The Coast Guard’s internally developed spreadsheet on committee participants who have been deemed candidates for a security clearance, however, does offer information on application trends and could be used to monitor progress at the national or local level. For example, updating the database on a routine basis could identify port areas where progress is slow and indicate that follow-up with local field office officials may be needed. The FBI’s experience with a similar security clearance program shows the utility of data as a tool for managing such a program. For example, FBI officials indicated that the bureau’s databases have served as management tools for tracking state, local, and industry security applications and for monitoring application trends and percentages. The Coast Guard has yet to develop formal procedures for using its data on committee participants as a tool to assess potential problems and to take appropriate action, if necessary. Doing so would likely aid its efforts to manage the state, local, and industry security clearance program at both the local and the national levels. While the Coast Guard’s databases on security clearances show promise as tools for monitoring the state, local, and industry security clearance program, they also have limitations in that they cannot be used to determine how many nonfederal officials have a federal security clearance sponsored by other federal agencies. For example, a Coast Guard official stated that this information is difficult to obtain because the Coast Guard does not have easy access to the databases of other agencies. In September 2004, we testified that existing impediments to managing the security clearance process include the lack of a governmentwide database of clearance information, which hinders efforts to provide timely, high-quality clearance determinations. As a way to deal with this problem, a local Coast Guard official sent a survey to port security stakeholders to determine how many stakeholders had security clearances sponsored by other federal agencies. Our prior reviews of DOD and FBI efforts to manage a large number of security clearances for service members, government employees, and industry personnel have demonstrated long-standing backlogs and delays. In addition, our FBI work showed that it is important to address training and education to successfully carry out an effective security clearance program. Our reviews also showed that the use of internal controls to ensure that security clearances are granted in compliance with existing rules and regulations will become increasingly important. The FBI’s experience in managing its security clearance program shows that educating nonfederal officials about the program improved the processing of applications for a security clearance.
The FBI grants security clearances to state and local law enforcement officials who may require access to classified national security information to help prevent or respond to terrorist attacks. After September 11, an increasing number of state and local officials began requesting security clearances to obtain terrorism-related information that might affect their jurisdictions. However, when the FBI received a low percentage of application forms for security clearances from nonfederal officials, the agency consulted with state and local officials to collect their views and recommendations regarding information sharing and improving the security clearance process. Through these consultations, the FBI identified unfamiliarity with the requirements for processing security clearance applications as one of the main impediments to timely processing of applications. For example, some state and local officials said that they did not have adequate guidance for filling out and submitting the appropriate application forms. In response, the FBI widely distributed step-by-step guidance to state and local law enforcement officials through informational brochures (available on an FBI Web site) and meetings with state and local officials, among other efforts. Some law enforcement officials we interviewed stated that the FBI’s guidance and consultation with law enforcement professional organizations helped improve state and local officials’ understanding of the security clearance application process. Once the Coast Guard begins notifying more state, local, and industry officials about the process for obtaining a security clearance, raising the awareness of nonfederal officials about the program could improve the processing of application forms. An official at the one field office that had actually contacted state and local officials about the security clearance program indicated that field office officials did not have basic information about the program. Among other things, he mentioned that informational brochures and Web sites could be given to nonfederal officials as a way to improve their awareness of the security clearance process. Attending to nonfederal officials’ potential lack of awareness about the security clearance program is important because the number of nonfederal officials who submit application forms for a security clearance may be much larger than the several hundred state, local, and industry officials who participate on area maritime security committees. For example, DHS will sponsor an estimated 5,000 security clearances for state, local, and industry officials, and the Coast Guard Security Center will process these clearances, according to Coast Guard officials. Additionally, the Coast Guard plans to grant clearances to about 200 other nonfederal officials who are involved in supporting other Coast Guard operations, such as sector command centers. In addition, as the Coast Guard’s security clearance program evolves, participants on area maritime security committees or in sector command centers may change over time, thus highlighting the importance of having ways to raise the awareness of nonfederal officials about the security clearance process. Port security stakeholders cited other barriers to effective information sharing intrinsic to the ports we visited, but none of these barriers were mentioned as frequently as the lack of security clearances.
At the four ports we visited, characteristics intrinsic to the ports, such as their size and complexity, were cited as barriers to effective information sharing. In Houston, for example, multiple stakeholders, such as port authorities, numerous jurisdictions, and a diverse set of users, were cited as challenges to information sharing efforts. The length of the Houston Ship Channel (50 miles), with numerous public and private entities using the channel, also complicates information sharing efforts. To deal with the size and complexity of this port area, Coast Guard officials said they have worked with associations representing the commercial fishing industry, petrochemical companies, and state and local law enforcement as a means to share information about port security with as many users of the port and the Houston Ship Channel as possible. For example, the local Coast Guard officials said they held informational meetings with recreational boating associations and with area maritime security committee participants to inform boaters and other stakeholders of “safety zones”—areas where recreational use of the waterway is prohibited—in the Houston Ship Channel. A barrier mentioned at another port location was the “cultural” barrier between various members of the area maritime security committees. For example, officials at this port location stated that an informal network has created insiders and outsiders, drawing particular distinctions between law enforcement and non-law enforcement officials. Effective information sharing among members of area maritime security committees and participants in interagency operational centers can enhance the partnership between federal and nonfederal officials, and it can improve the leveraging of resources across jurisdictional boundaries for deterring, preventing, or responding to a possible terrorist attack at the nation’s ports. The Coast Guard has recognized the importance of granting security clearances to nonfederal officials as a means to improve information sharing, but progress in moving these officials through the application process has been slow. In the future, the Coast Guard may need to grant additional security clearances to state, local, or industry participants who join area maritime security committees or sector command centers to support counterterrorism programs. As the Coast Guard’s state, local, and industry security clearance program matures, effectively managing the program will therefore become even more important. Increased management attention and guidance about the process would strengthen the security clearance program, and it would reduce the risk that nonfederal officials may have incomplete information as they carry out their law enforcement activities. To help ensure that nonfederal officials receive needed security clearances as quickly as possible, both now and in the future, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following two actions. Develop formal procedures so that local and headquarters officials use the Coast Guard’s internal databases of state, local, and industry security clearances for area maritime security committee members as a management tool to monitor who has submitted applications for a security clearance and to take appropriate action when application trends point to possible problems.
For example, updating the database on a routine basis could identify port areas where progress is slow and indicate that follow-up with local field office officials may be needed. Raise the awareness of state, local, and industry officials about the process of applying for security clearances. This effort could involve using brochures, Web sites, or other materials like those the FBI has used in its program for educating state and local officials about the security clearance process. We provided a draft of this report to DHS, DOJ, and DOD for comment. DHS, including the Coast Guard, generally agreed with our findings and recommendations. Specifically, DHS noted that our recommendations should enhance the Coast Guard’s efforts to promote information sharing among port security stakeholders. DHS also indicated that changes associated with processing security clearances should overcome identified impediments. DOJ and DOD declined to provide comments. Our draft report included a recommendation that the Coast Guard clarify the role of field office officials in communicating with state, local, and industry officials about the process for obtaining a security clearance. After receiving our draft report, the Coast Guard issued a memo to field office officials that clarified their role in the security clearance program. The Coast Guard’s memo also provided more detailed guidance on the process for sponsoring additional state, local, or industry officials for a security clearance. As a result of the Coast Guard’s action, we have dropped this recommendation from our final report. In regard to interagency operational centers, DHS also noted that the Coast Guard report required by Congress on existing interagency operational centers has been approved by DHS and OMB and is now in the final stages of review at the Coast Guard. In addition to commenting on our findings and recommendations, DHS provided technical comments on the report under separate cover, and we revised the draft report where appropriate. Written comments from DHS are reprinted in appendix IV. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will provide copies of this report to appropriate departments and interested congressional committees. We will also make copies available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (415) 904-2200 or at [email protected] or Stephen L. Caldwell, Assistant Director, at (202) 512-9610 or at [email protected]. Key contributors to this report are listed in appendix V. Each of our objectives involved information sharing between federal agencies and nonfederal stakeholders. Specifically: What impact have area maritime security committees had on information sharing? What impact have interagency operational centers had on information sharing? What barriers, if any, have hindered improvements in information sharing among port security stakeholders? We carried out part of our work at Coast Guard headquarters or in consultation with headquarters officials.
We spoke with Coast Guard officials to obtain information on how information is shared within the maritime security community and reviewed pertinent legislation, guidance, rules, and other relevant documents related to the sharing of maritime security information with nonfederal stakeholders. For example, we reviewed pertinent statutes, such as the Maritime Transportation Security Act and the Homeland Security Act. We also reviewed selected maritime security plans, Coast Guard regulations implementing the Maritime Transportation Security Act, and various reports from congressionally chartered commissions related to information sharing. To address our first objective, we conducted structured interviews with officials from federal agencies and representatives from state and local governments, law enforcement agencies, maritime industry associations, and private sector entities who were stakeholders in port security issues. Many of these officials were members of area maritime security committees. These interviews were largely conducted during site visits to four specific maritime port areas. We selected these ports to provide a diverse sample of security environments and perspectives, basing our selections on such matters as geographic location, varying levels of strategic importance, and unique local characteristics. The four port areas and some of our reasons for choosing them are as follows: Baltimore, Maryland: a Mid-Atlantic port that is managed by a state agency and services a variety of cargo, including bulk and container cargo, and cruise passengers; Charleston, South Carolina: a South Atlantic port that is state owned and operated, with three separate facilities as well as military facilities and installations; Houston, Texas: a Gulf coast port that is governed by an appointed commission and consists of a 25-mile-long complex of diversified public and private facilities, including the nation’s largest petrochemical complex; and Seattle/Tacoma, Washington: a Pacific coast port area that is operated by municipal corporations, represents the third largest container cargo port in the country, and services the country’s largest state-operated passenger ferry system. During each of our visits to these four ports, we met with the identified port stakeholders, Coast Guard marine safety offices, and Captains of the Port. In our meetings with Captains of the Port and marine safety offices, we discussed the creation and composition of the area maritime security committee at their port and the effectiveness of the committee in facilitating information sharing. We also discussed and collected documents related to policies and procedures pertaining to sharing information with nonfederal stakeholders. We collected documentary evidence in the form of information bulletins that are used to disseminate information to stakeholders. When we met with the nonfederal stakeholders at the ports, we discussed specific security concerns at their facilities or in their jurisdictions and how they were being addressed. We also discussed their involvement and experiences with the local area maritime security committee, of which most were members, and how they receive and share information with federal agencies, particularly the Coast Guard. With both groups, we discussed any perceived barriers to information sharing and ideas and plans to resolve these issues.
This information was used in conducting a comparative analysis of the port areas and allowed us to distinguish differences between the locations while identifying common issues. In addressing the second objective, we conducted site visits at the three interagency operational centers located in Charleston, South Carolina; Norfolk, Virginia; and San Diego, California. Related to this, we visited the Homeland Security Task Force, South-East and command centers for the Coast Guard district and sector in Miami, Florida, because these centers also aim to facilitate information sharing and joint operations related to maritime security. At each location, we conducted structured interviews with officials from the agencies participating in the centers. These interviews allowed us to obtain information regarding the history of the centers and how their missions and structures are changing. Specifically, we discussed how their presence affects information sharing among federal stakeholders as well as with nonfederal stakeholders. We also discussed challenges facing the centers as they become more formalized. During the visits, we collected documents describing the centers as well as examples of the products they distribute. Observations made at the facilities allowed us to provide context to the testimonial evidence we collected. We also viewed demonstrations of emerging technologies and observed differences in the physical attributes of each center. We synthesized and analyzed the testimonial evidence, aided by our observations, which allowed us to perform a comparative analysis identifying differences and commonalities in information sharing among the centers. To address the third objective, we followed up with officials at Coast Guard headquarters and obtained guidance and data regarding the current effort to administer security clearances at the secret level to selected nonfederal stakeholders at each port. In subsequent phone interviews with the officials of marine safety offices at the ports we visited, we discussed problems encountered in the communication and implementation of this effort and steps that are being taken to resolve these problems. In addition, we reviewed Coast Guard documents related to information sharing, such as data on the number of nonfederal officials who had received security clearances, guidance from Coast Guard headquarters to field offices, and other pertinent instructions. We checked the reliability of the security clearance database for the four ports we visited and found it generally accurate: 24 of the 27 entries were correct. In addition, we reviewed prior GAO reports that dealt with information sharing issues. Finally, we interviewed 64 federal and nonfederal stakeholders at the four ports we visited and asked them whether there were any barriers to information sharing. The results of our interviews cannot be projected to all participants on the area maritime security committees. Our review was conducted from May 2004 to March 2005 in accordance with generally accepted government auditing standards. This appendix provides information on the Coast Guard’s guidance for developing the local membership and organization of the area maritime security committee. The Coast Guard’s guidance directs the Captain of the Port to take into account all aspects of each port area and its adjacent waterways and coastal areas.
The committees should be composed of federal, state, and local agencies; law enforcement and security agencies; and port stakeholders. Representatives for each aspect of the port and those charged with its regulation or enforcement should be encouraged to participate. Table 1 provides a list of representatives that an area maritime security committee could include. Area maritime security committees are not limited to the agencies and organizations on this list. As each port has specific needs and issues, the membership of committees can vary greatly from port to port. This appendix provides information on the departments and agencies/components involved in maritime information sharing, at both the national level and the regional or field level. Table 2 lists departments and agencies/components (including the Coast Guard) that potentially play a role in disseminating maritime threat information to, and receiving information from, area maritime security committees and interagency operational centers. The Coast Guard, as the lead in domestic maritime security, plays a central role in maritime threat information sharing and has a robust presence at the national, regional, and port levels. In this capacity, it conducts intelligence activities in support of all its missions, maritime homeland security, and national security objectives, including information collection, analysis, and dissemination of intelligence information. Figure 4 illustrates how Coast Guard national and regional maritime information is channeled to and from representatives of a local area maritime security committee (AMSC) or interagency operational center. Beyond the Coast Guard, other agencies can also play a major role in channeling maritime security information to the port level. As shown in table 2, some of these agencies have broader responsibilities for intelligence across all domains. For example, DOJ has a number of organizations involved in terrorist threat information sharing, such as the National Joint Terrorism Task Force, which acts as a liaison and conduit for “all domain” (e.g., maritime and nonmaritime) information from FBI headquarters to Joint Terrorism Task Forces operating in the field. The FBI also has designated Maritime Liaison Agents at the port level who interact with state, local, and private sector officials and other federal agencies to enhance security at the nation’s seaports. In addition, U.S. Attorneys’ Offices of DOJ set up Anti-terrorism Advisory Councils that sponsor state- or regional-level task forces or coordination centers that may include a maritime security component. Figure 5 graphically illustrates (1) how maritime and nonmaritime information and intelligence are shared among agencies at the national level and (2) the organizational conduits through which information is shared with the port level. The left side of the figure shows the DOJ channels for information discussed above. On the right side, the figure also shows the flow of information through Coast Guard channels, as already shown in figure 4. In addition to those named above, David Alexander, Neil Asaba, Juliana Bahus, Christine Davis, Kevin Heinz, Emily Pickrell, Albert Schmidt, Amy Sheller, Stan Stenersen, and April Thompson made key contributions to this report. Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170.
Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: April 7, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Coast Guard Programs: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: March 22, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Posthearing Questions Related to Aviation and Port Security. GAO-04-315R. Washington, D.C.: December 12, 2003. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing Its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Coast Guard: Challenges during the Transition to the Department of Homeland Security. GAO-03-594T. Washington, D.C.: April 1, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions. GAO-03-544T. Washington, D.C.: March 12, 2003. Homeland Security: Challenges Facing the Coast Guard as It Transitions to the New Department. GAO-03-467T. Washington, D.C.: February 12, 2003. Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions. GAO-03-155. Washington, D.C.: November 12, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments through Domestic Seaports. GAO-02-955TNI. Washington, D.C.: July 23, 2002.
Sharing information with nonfederal officials is an important tool in federal efforts to secure the nation's ports against a potential terrorist attack. The Coast Guard has lead responsibility in coordinating maritime information sharing efforts. The Coast Guard has established area maritime security committees—forums that involve federal and nonfederal officials who identify and address risks in a port. The Coast Guard and other agencies have sought to further enhance information sharing and port security operations by establishing interagency operational centers—command centers that tie together the efforts of federal and nonfederal participants. GAO was asked to review these efforts to determine what impact the committees and interagency operational centers have had on improving information sharing and to identify any barriers that have hindered information sharing. Area maritime security committees provide a structure that improves information sharing among port security stakeholders. At the four port locations GAO visited, federal and nonfederal stakeholders said that the newly formed committees were an improvement over previous information sharing efforts. The types of information shared included assessments of vulnerabilities at port locations and strategies the Coast Guard intends to use in protecting key infrastructure. The three interagency operational centers established to date allow for even greater information sharing because the centers operate on a 24-hour-a-day basis and receive real-time information from data sources such as radars and sensors. The Coast Guard is planning to develop its own centers—called sector command centers—at up to 40 additional port locations to monitor information and to support its operations. The relationship between the interagency operational centers and the planned expansion of sector command centers remains to be determined. The major barrier hindering information sharing has been the lack of federal security clearances for nonfederal members of committees or centers. By February 2005—4 months after the Coast Guard developed a list of 359 committee members who needed a security clearance—only 28 of the 359 members had submitted the necessary paperwork for a security clearance. Coast Guard field officials did not clearly understand that they were responsible for contacting nonfederal officials about the clearance process. To deal with this, in early April 2005, the Coast Guard issued guidance to field offices that clarified their role. In addition, the Coast Guard did not have formal procedures that called for the use of data to monitor application trends. Developing such procedures would aid in identifying deficiencies in the future. As the Coast Guard proceeds with its program, another way to improve the submission of paperwork involves educating nonfederal officials about the clearance process.
MDA’s BMDS is being designed to counter ballistic missiles of all ranges—short, medium, intermediate, and intercontinental. Because ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing multiple systems that, when integrated, provide multiple opportunities to destroy ballistic missiles before they can reach their targets. The BMDS architecture includes space-based and airborne sensors as well as ground- and sea-based radars; ground- and sea-based interceptor missiles; and a command and control, battle management, and communications system to provide the warfighter with the necessary communication links to the sensors and interceptor missiles. Table 1 provides a brief description of 10 BMDS elements and supporting efforts currently under development by MDA. MDA experienced mixed results in executing its fiscal year 2011 development goals and BMDS tests. For the first time in 5 years, we are able to report that all of the targets used in fiscal year 2011 test events were delivered as planned and performed as expected. In addition, the Aegis BMD program’s SM-3 Block IA missile was able to intercept an intermediate-range target for the first time. Also, the THAAD program successfully conducted its first operational flight test in October 2011. However, none of the programs we assessed were able to fully accomplish their asset delivery and capability goals for the year. See table 2 for how each of these programs met some of its goals during the fiscal year. Our report provides further detail on these selected accomplishments. Although some programs completed significant accomplishments during the fiscal year, there were also several critical test failures. These failures, along with a test anomaly and delays, disrupted MDA’s flight test plan and the acquisition strategies of several components. Overall, flight test failures and an anomaly forced MDA to suspend or slow production of three out of four interceptors currently being manufactured. The Aegis BMD SM-3 Block IA program conducted a successful intercept in April 2011, but there was an anomaly in a critical component of the interceptor during the test. This component is common to the Block IB missile. Program management officials stated that the SM-3 Block IA deliveries have been suspended while the failure reviews are being conducted. The Aegis BMD SM-3 Block IB program failed in its first intercept attempt in September 2011. The Aegis program has had to add an additional flight test and delay multiple other flight tests. Program management officials stated that the SM-3 Block IB production has been slowed while the failure reviews are being conducted. The GMD program has been disrupted by two recent test failures. As a result of a failed flight test in January 2010, MDA added a retest designated as Flight Test GMD-06a (FTG-06a). However, this retest also failed in December 2010 because of a failure in a key component of the kill vehicle. As a result of these failures, MDA has decided to halt flight testing and restructure its multiyear flight test program, halt production of the interceptors, and redirect resources to return-to-flight activities. Production issues forced MDA to slow production of the THAAD interceptors, the fourth missile being manufactured.
To meet the 2002 presidential direction to initially rapidly field and update missile defense capabilities, as well as a 2009 presidential announcement to deploy missile defenses in Europe, MDA has undertaken and continues to undertake highly concurrent acquisitions. While this approach enabled MDA to rapidly deploy an initial capability in 2005 by concurrently developing, manufacturing, and fielding BMDS assets, it also led to the initiation of large-scale acquisition efforts before critical technologies were fully understood and allowed programs to move forward into production without having completed the tests needed to verify performance. After delivering its initial capability in 2005, MDA continued these high-risk practices, which have resulted in problems requiring extensive retrofits, redesigns, delays, and cost increases. While MDA has incorporated some acquisition best practices in its newer programs, its acquisition strategies still include high or elevated levels of concurrency that result in increased acquisition risk—including performance shortfalls, cost growth, and schedule delays—for these newer programs. Concurrency is broadly defined as overlap between technology development and product development or between product development and production of a system. This overlap is intended to introduce systems rapidly, to fulfill an urgent need, to avoid technology obsolescence, and to maintain an efficient industrial development and production workforce. However, while some concurrency is understandable, committing to product development before requirements are understood and technologies are mature, as well as committing to production and fielding before development is complete, is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. At the very least, a highly concurrent strategy forces decision makers to make key decisions without adequate information about the weapon’s demonstrated operational effectiveness, reliability, logistic supportability, and readiness for production. Also, starting production before critical tests have been successfully completed has resulted in the purchase of systems that do not perform as intended. These premature commitments mean that a substantial commitment to production has been made before the results of testing are available to decision makers. Accordingly, they create pressure to avoid production breaks even when problems are discovered in testing. These premature purchases have affected the operational readiness of our forces and quite often have led to expensive modifications. In contrast, our work has found that successful programs that deliver promised capabilities for the estimated cost and schedule follow a systematic and disciplined knowledge-based approach, in which high levels of product knowledge are demonstrated at critical points in development. This approach recognizes that development programs require an appropriate balance between schedule and risk and that, in practice, programs can be executed successfully with some level of concurrency. For example, it is appropriate to order long-lead production material in advance of the production decision, with the prerequisite that developmental testing is substantially accomplished and the design confirmed to work as intended. This knowledge-based approach is not unduly concurrent.
Rather, programs gather knowledge that demonstrates that their technologies are mature, designs are stable, and production processes are in control before transitioning between acquisition phases, which helps programs identify and resolve risks early. It is a process in which technology development and product development are treated differently and managed separately. Technology development must allow room for unexpected results and delays. Developing a product culminates in delivery and therefore gives great weight to design and production. If a program falls short in technology maturity, it is harder to achieve design stability and almost impossible to achieve production maturity. It is therefore key to separate technology development from product development, and product development from production—and thus avoid undue concurrency. A knowledge-based approach delivers a product on time, within budget, and with the promised capabilities. See figure 1 for depictions of a concurrent schedule and a schedule that uses a knowledge-based approach. As noted above, to meet the 2002 presidential direction and the 2009 presidential announcement to deploy missile defenses in Europe, MDA has undertaken and continues to undertake highly concurrent acquisitions. Such practices enabled MDA to quickly ramp up efforts in order to meet tight presidential deadlines, but they were high risk and resulted in problems that required extensive retrofits, redesigns, delays, and cost increases. Table 3 illustrates concurrency in past efforts and its associated effects. Among earlier MDA programs, concurrency was most pronounced in the GMD program, where the agency was pressed to deliver initial capabilities within a few years to meet the 2002 presidential directive. The consequences here have been significant, in terms of production delays and performance shortfalls, and are still affecting the agency. In recent years, MDA has taken positive steps to incorporate some acquisition best practices, such as increasing competition and partnering with laboratories to build prototypes. For example, MDA took actions in fiscal year 2011 to reduce acquisition risks and prevent future cost growth in its Aegis BMD SM-3 Block IIA program. The agency recognized that the program’s schedule included elevated acquisition risks, so it appropriately added more time to the program by revising the schedule to relieve schedule compression between its subsystem and system-level design reviews. In addition, it incorporated lessons learned from other SM-3 variants into the program’s development to further control production unit costs. Moreover, for its PTSS program, MDA has simplified the design and requirements. However, table 4 shows that the agency’s current acquisition strategies still include high or elevated levels of concurrency that set many of its newer programs up for increased acquisition risk, including performance shortfalls, cost growth, and schedule delays. In our April 2012 report, we made two recommendations to strengthen MDA’s longer-term acquisition prospects.
We recommended that the Secretary of Defense direct the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to (1) review all of MDA’s acquisitions for concurrency and determine whether the proper balance has been struck between the planned deployment dates and the concurrency risks taken to achieve those dates and (2) review and report to the Secretary of Defense the extent to which the directed capability delivery dates announced by the President in 2009 are contributing to concurrency in missile defense acquisitions and recommend schedule adjustments where significant benefits can be obtained by reducing concurrency. DOD concurred with both of these recommendations. In addition, we recommended specific steps to reduce concurrency in several of MDA’s programs. DOD agreed with four of the five missile defense element-specific recommendations and partially agreed with our recommendation to report to the Office of the Secretary of Defense and to Congress the root cause of the SM-3 Block IB developmental flight test failure, the path forward for future development, and the plans to bridge production from the SM-3 Block IA to the SM-3 Block IB before committing to additional purchases of the SM-3 Block IB. DOD commented that MDA will report this information to the Office of the Secretary of Defense and to Congress upon completion of the failure review in the third quarter of fiscal year 2012. However, DOD made no reference to delaying additional purchases until the recommended actions are completed. We maintain our position that MDA should take the recommended actions before committing to additional purchases of the SM-3 Block IB. MDA parts quality issues have seriously impeded the development of the BMDS in recent years. For example, during a THAAD flight test in fiscal year 2010, the air-launched target failed to initiate after it was dropped from the aircraft and fell into the ocean. The test was aborted, and a subsequent failure review board investigation identified the rigging of cables to the missile in the aircraft as the immediate cause of the failure and shortcomings in internal processes at the contractor as the underlying cause. This failure led to a delay of the planned test, restructuring of other planned tests, and hundreds of millions of dollars being spent to develop and acquire new medium-range air-launched targets. In another widely reported example, the GMD element’s first intercept test of its CE-II Ground-Based Interceptor failed, and the ensuing investigation determined the root cause of the failure to be a quality control event. This failure also caused multiple flight tests to be rescheduled, delayed program milestones, and cost hundreds of millions of dollars for a retest. In view of the cost and importance of space and missile defense acquisitions, we were asked to examine parts quality problems affecting satellites and missile defense systems across DOD and the National Aeronautics and Space Administration. In June 2011, we reported that parts problems discovered after assembly or integration of the instrument or spacecraft had more significant consequences because they required lengthy failure analysis, disassembly, rework, and reassembly—sometimes resulting in a launch delay.
For example, the Space Tracking and Surveillance System program, a space-based infrared sensor program with two demonstration satellites that launched in September 2009, discovered problems with defective electronic parts in the Space-Ground Link Subsystem during system-level testing and integration of the satellite. By the time the problem was discovered, the manufacturer no longer produced the part, and an alternate contractor had to be found to manufacture and test replacement parts. According to officials, the problem cost about $7 million and was one of the factors that contributed to a 17-month launch delay of the two demonstration satellites and delayed participation in the BMDS testing we reported on in March 2009. Our work highlighted a number of causal factors behind the parts quality problems being experienced at MDA and space agencies. While we present examples of the parts quality issues we found at MDA below, the June 2011 report also describes the parts quality issues we found at other space agencies. Poor workmanship. For example, poor soldering workmanship caused a power distribution unit to experience problems during vehicle-level testing on MDA’s Targets and Countermeasures program. According to MDA officials, all units of the same design by the same manufacturer had to be X-ray inspected and reworked, involving extensive hardware disassembly. As a corrective action, soldering technicians were provided with training to improve their soldering operations and their ability to perform better visual inspections after soldering. The use of undocumented and untested manufacturing processes. For example, after the testing of manufacturing materials, a portion of the material was not returned and was inadvertently used to fabricate manifolds for two complete CE-II Ground-Based Interceptors. The vehicles had already been processed and delivered to the prime contractor for integration when the problem was discovered. Prime contractor’s failure to ensure that its subcontractors and suppliers met program requirements. The GMD program experienced a failure with an electronics part purchased from an unauthorized supplier. According to program officials, the prime contractor required subcontractors to purchase parts only from authorized suppliers; however, the subcontractor failed to execute the requirement and the prime contractor did not verify compliance. At the time of our June 2011 report, MDA had instituted policies to prevent and detect parts quality problems. The programs reviewed in the report—GMD, Aegis BMD, Space Tracking and Surveillance System, and Targets and Countermeasures—were initiated before these recent policies aimed at preventing and detecting parts quality problems took full effect. In addition to new policies focused on quality, MDA has developed a supplier road map database in an effort to gain greater visibility into the supply chain to more effectively manage supply chain risks. In addition, according to MDA officials, MDA has recently been auditing parts distributors in order to rank them for risk in terms of counterfeit parts. MDA also participates in a variety of collaborative initiatives to address quality, in particular, parts quality. These range from informal groups focused on identifying and sharing news about emerging problems as quickly as possible, to partnerships that conduct supplier assessments, to formal groups focused on identifying ways industry and the government can work together to prevent and mitigate problems.
Moreover, since our report, MDA has added a new clause to one of its GMD contracts to provide contractor accountability for quality. We have not yet fully assessed the clause, but it may allow the contracting officer to make an equitable reduction of the performance incentive fee on two contract line items for certain types of quality problems. This new clause shows some leadership by MDA in holding contractors accountable for parts quality, but we do not yet know what impact the clause will have on improving MDA’s problems with parts quality. Our June 2011 report recommended greater coordination between government organizations responsible for major space and missile defense programs on parts quality issues and periodic reporting to Congress. DOD partially concurred with our recommendation for greater coordination but responded that it would work with the National Aeronautics and Space Administration to determine the optimal government-wide assessment and reporting implementation to include all quality issues, of which parts, materials, and processes would be one of the major focus areas. In addition, DOD proposed an annual reporting period to ensure planned, deliberate, and consistent assessments. We support DOD’s willingness to address all quality issues and to include parts, materials, and processes as an important focus area in an annual report. DOD further stated that it had no objection to providing a report to Congress, if Congress wanted one. We believe that DOD should proactively provide its proposed annual reports to Congress on a routine basis, rather than waiting for requests from Congress, which could be inconsistent from year to year. Parts quality issues will require sustained attention from both the executive and legislative branches to improve the quality of the systems in development, particularly because there are significant barriers to addressing quality problems, such as an increase in counterfeit electronic parts, a declining government share of the overall electronic parts market, and workforce gaps within the aerospace sector. In conclusion, as MDA completes a decade of its work, it continues to make progress in delivering assets, completing intercept tests, and addressing some of the quality issues that have plagued it in the past. This year, there were significant accomplishments, such as the successful operational test for THAAD, but also setbacks, including failed tests and their aftermath. Such setbacks reflect inherent risks associated with the challenging nature of missile defense development, but they are also exacerbated by strategies that adopt high levels of concurrency and leave decision makers with less knowledge than needed to move programs forward. Given that initial capabilities are now in place and broader fiscal pressures require sound and more efficient management approaches, it is now time for DOD to reassess MDA’s strategy of accelerating development and production for current and future BMDS programs. Chairman Nelson, Ranking Member Sessions, and Members of the Subcommittee, this concludes my statement. I am happy to answer any questions you have. For future questions about this statement, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include David B.
Best, Assistant Director; Meredith Allen Kimmett; Ivy Hubler; Steven Stern; Ann Rivlin; Kenneth E. Patton; Robert S. Swierczek; and Alyssa B. Weir.
In order to meet its mission, MDA is developing a highly complex system of systems—ground-, sea-, and space-based sensors, interceptors, and battle management. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development and fielding of the ballistic missile defense system. This statement addresses the progress MDA made in the past year, the challenges it still faces with concurrent acquisitions, and how it is addressing parts quality issues. It is based on GAO’s April 2012 report on missile defense and its June 2011 report on space and missile defense parts quality problems. The Missile Defense Agency (MDA) experienced mixed results in executing its fiscal year 2011 development goals and tests. For the first time in 5 years, GAO was able to report that the agency delivered all of the targets used in fiscal year 2011 test events, with the targets performing as expected. In addition, the Aegis Ballistic Missile Defense program’s Standard Missile-3 Block IA missile was able to intercept an intermediate-range target for the first time, and the Terminal High Altitude Area Defense program successfully conducted its first operational flight test. However, none of the programs GAO assessed were able to fully accomplish their asset delivery and capability goals for the year. Flight test failures, a test anomaly, and delays disrupted MDA’s flight test plan and the acquisition strategies of several components. Flight test failures forced MDA to suspend or slow production of three out of four interceptors currently being manufactured. Some of the difficulties in MDA’s testing and production of assets can be attributed to its highly concurrent acquisition approach. Concurrency is broadly defined as the overlap between technology development and product development or between product development and production. High levels of concurrency were present in MDA’s initial efforts and are present in current efforts. For example, MDA’s flight test failures of a new variant of the Ground-based Midcourse Defense program’s interceptors while production was underway delayed delivery to the warfighter, increased costs, and will require retrofit of fielded equipment. Flight test costs to confirm the variant’s capability have increased from $236 million to about $1 billion. MDA has taken positive steps to incorporate some acquisition best practices, such as increasing competition and partnering with laboratories to build prototypes. For example, MDA took actions in fiscal year 2011 to reduce acquisition risks and prevent future cost growth in its Aegis SM-3 Block IIA program. Nevertheless, as long as newer programs adopt acquisition approaches with elevated levels of concurrency, there is still considerable risk of future performance shortfalls that will require retrofits, cost overruns, and schedule delays. MDA is also taking the initiative to address parts quality issues through various means, including internal policies, collaborative initiatives with other agencies, and contracting strategies to hold its contractors more accountable. Quality issues have seriously impeded the development of missile defenses in recent years. For example, during a fiscal year 2010 Terminal High Altitude Area Defense flight test, the air-launched target failed to initiate after it was dropped from the aircraft and fell into the ocean. A failure review board identified shortcomings in internal processes at the contractor to be the cause of the failure.
This failure led to a delay of the planned test, restructuring of other planned tests, and hundreds of millions of dollars being spent to develop and acquire new medium-range air-launched targets. Parts quality issues will require sustained attention from both the executive and legislative branches. MDA is exhibiting some leadership, but there are significant barriers to addressing quality problems, such as the increase in counterfeit electronic parts, a declining government share of the overall electronic parts market, and workforce gaps within the aerospace sector. GAO makes no new recommendations in this statement. In the April 2012 report, GAO made recommendations to strengthen MDA’s longer-term acquisition prospects, including a review of MDA’s acquisitions for concurrency to determine whether the proper balance has been struck between planned deployment dates and the concurrency risks taken to achieve those dates. The report includes additional recommendations on how individual program elements can reduce concurrency. DOD agreed with six of the seven recommendations and partially agreed with one. DOD generally concurred with the recommendations in the June 2011 report for greater coordination between government organizations responsible for major space and missile defense programs on parts quality issues and for periodic reporting to Congress.
Since 1987, State has recognized that the lack of adequate controls over visa processing is a material weakness that increases U.S. vulnerability to illegal immigration and diminishes the integrity of the U.S. visa. Specific problems have included (1) inadequate management controls, (2) lax security over visas, (3) unreliable equipment, and (4) unsupervised staff. State has acknowledged that it cannot eliminate all attempts to commit fraud, but it can make fraud more difficult by improving the security features of the visa, expanding and improving automated systems, and strengthening staff supervision. State’s Inspector General’s 1993 investigation of the issuance of visas to an ineligible visa applicant, who was subsequently convicted of conspiracy to commit terrorist acts in the United States, highlighted the need for improved internal communications at the overseas posts. In an attempt to address this problem, State established embassy committees designed to promote closer cooperation with other agencies in identifying individuals ineligible for visas. Since 1990, State has reported that the passport process is a material weakness and vulnerable to fraud, including employee malfeasance. According to State, fraudulently obtained passports are being used to enter the country illegally and to create false identities that facilitate criminal activities such as narcotics and weapons trafficking, smuggling children for use in pornography, and flight to avoid prosecution on criminal charges. In an attempt to address the problem, State is redeveloping and upgrading its systems to provide comprehensive accountability and improved internal controls. We visited nine overseas posts to ascertain the extent to which State has implemented controls over passport and visa operations: Canberra and Sydney, Australia; London, England; Guatemala City, Guatemala; Tokyo, Japan; Nairobi, Kenya; Seoul, Korea; Mexico City, Mexico; and Johannesburg, South Africa. In 1989, State began the machine-readable visa program as its primary initiative for eliminating fraudulent nonimmigrant visas. The machine-readable visa is considered a more secure document than its predecessor because the new visa is printed on synthetic material that is more secure than paper, is attached to the passport, and has a machine-readable zone with an encryption code. The visas also include a digitized photograph of the traveler. At the ports of entry, the Immigration and Naturalization Service and the U.S. Customs Service can check names by scanning the machine-readable zone of the visa. The original due date for installation of the system was 1991, but installation was delayed for 15 months for additional review and analysis of the program. State set a new goal of 1995 to complete installation. However, State’s Inspector General reported that State had not received sufficient funds to meet this goal. In 1994, after the World Trade Center bombing, the Congress directed State to install automated lookout systems at all visa-issuing posts by October 30, 1995. State also made a commitment to install the machine-readable visa system at all visa-issuing posts by the end of fiscal year 1996. The Congress authorized State to retain $107.5 million through fiscal year 1995 in machine-readable visa processing fees to fund these and other improvements.
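The report does not detail the internal format of the visa’s machine-readable zone. Machine-readable travel documents of this kind, however, follow the publicly documented ICAO Document 9303 conventions, in which key fields carry a check digit computed with a repeating 7-3-1 weighting so that scanners can detect misreads and crude alterations. The following is a minimal sketch of that check-digit arithmetic only; it is not State’s scanning software, and the sample value is ICAO’s published worked example rather than any actual visa number.

```python
# Sketch of the ICAO 9303 check-digit arithmetic used in the
# machine-readable zone of travel documents (illustrative only).

def mrz_check_digit(field: str) -> int:
    """Compute the ICAO 9303 check digit for one MRZ field."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)                          # digits keep their value
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
        elif ch == "<":                              # filler character
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]              # weights repeat 7, 3, 1
    return total % 10

# ICAO's published worked example: document number "L898902C3"
# carries check digit 6.
assert mrz_check_digit("L898902C3") == 6
```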
As of December 1995, State had installed its machine-readable visa system at 200 posts, and all of the posts had automated access to the Consular Lookout and Support System (CLASS) either through direct telecommunications lines to the CLASS database in Beltsville, Maryland, or via the distributed name check (DNC) system, a stand-alone personal computer system with the CLASS database on tape or compact disk. By the end of fiscal year 1996, all posts are expected to have the machine-readable visa system, be on line with CLASS, and have the DNC as a backup, according to the Bureau of Consular Affairs. State will continue to upgrade the system’s software and hardware and pilot test a new version of the system. State spent a total of about $32 million on the installations in fiscal years 1994 and 1995 and plans to spend another $45 million through fiscal year 1998. Although most posts now have automated name-check capability and machine-readable visa systems, technical problems have limited their usefulness and availability. Posts often experience transmission problems with the telecommunications lines that support the system. U.S. embassies in Mexico City, Guatemala City, Sydney, Nairobi, and Seoul, which have direct access to CLASS, have experienced problems with the telecommunications lines and interruptions of CLASS. These disruptions have resulted in considerable delays in visa issuance and weakened visa controls. For example, during our visit to Mexico City we noted that consular staff were using the old microfiche system to check names during telecommunications disruptions rather than the DNC that was designed as backup. They used the microfiche system because using the DNC to check names was often a slow process. By using the microfiche system, the post ran the risk of approving a visa for an applicant who had been recently added to CLASS but had not yet been added to microfiche. State’s Diplomatic Telecommunications Service Program Office works with the international telecommunications carriers to find solutions where possible. However, according to an official of that office, if the problem is in the telecommunications lines of the host country, little can be done except to improve the post’s backup system. The Bureau of Consular Affairs has developed a new version of the software for the DNC to serve as a faster, more reliable backup when used with a new computer. The DNC software and new personal computers were sent to over 30 high-volume posts in 1995, according to a Bureau official. In the aftermath of the World Trade Center bombing, State directed all diplomatic and consular posts to form committees with representatives from consular, political, and other appropriate agencies to meet regularly to ensure that the names of suspected terrorists and others ineligible for a visa are identified and put into the lookout system. Of the nine posts we visited, all but Sydney and Johannesburg had terrorist lookout committees, and those two posts were represented by the lookout committees at their embassies in Canberra and Pretoria, respectively. Embassy officials at two of the nine posts we visited questioned the value of the committees, mainly because of the lack of cooperation from some agencies. Some agency representatives have been reluctant to provide to the consular sections the names of suspected terrorists, or others the U.S. government may want to keep out of the country, due to the sensitivity of the information and restrictions on sharing information. 
Officials from one of the law enforcement agencies contacted expressed concern that the information entered into CLASS could be traced to the originating agency and compromise its work. Only one of the agency officials we interviewed said that he had seen guidance from his agency on the extent to which it could share information. In addition, not all agencies are represented on these committees. For example, according to a consular official, the committee in Pretoria does not include representatives from the Federal Bureau of Investigation, the Customs Service, and the Drug Enforcement Administration. Consular officials have pointed out that the lookout committees are intended to augment rather than replace coordination activities at headquarters. Additionally, according to consular officials, they are (1) working closely with individual posts to resolve coordination problems, (2) maintaining close liaison with participating agencies at the headquarters level to ensure continued cooperation and commitment, and (3) soliciting increased participation from agencies whose contributions were limited in the past. State says that it has also taken steps to clarify terrorist reporting channels. The posts we visited did not routinely comply with State’s own internal control procedures. These procedures are described fully in the Department’s Management Control Handbook and summarized for consular officers in the Consular Management Handbook. One common shortcoming was the use of Foreign Service Nationals (FSNs) to check names through CLASS without the direct supervision of a U.S. officer. Other shortcomings were the lack of security over controlled equipment and supplies and the failure to report and reconcile daily activities and to follow cashiering procedures. According to the Consular Management Handbook, depending on the volume of visa fraud at a post, the embassy may assign the name check function to U.S. employees or assign a U.S. employee to monitor FSN staff doing name checks. Failure to check names could lead to the issuance of visas to individuals who are ineligible. In June and July of 1993, the Inspector General testified that an individual convicted of conspiracy to commit terrorist acts in the United States was able to obtain a visa even after his name was added to the lookout system because consular staff failed to do the required name check. The Inspector General further testified that adequate controls were not in place to ensure that name checks were done. FSNs were responsible for checking names at five of the posts we visited. Of those posts, Johannesburg, Sydney, and Tokyo were not equipped with the machine-readable visa system. The consular officers at these posts relied on the FSNs to notify them when an applicant’s name matched one in the CLASS database. FSNs in Johannesburg were not required to annotate the visa applications to show that the applicants’ names had been checked. Thus, the consular officers lacked any assurance that the FSNs actually checked the names or advised the consular officers of all matches. Consular officers in Tokyo and Sydney said they periodically reviewed the visa applications and observed FSNs. One of the officials acknowledged that consular officers rely more heavily on FSNs than strict adherence to State Department guidance might suggest. However, the officials did not believe the reliance on FSNs was a problem because of the low risk of fraud at their posts. Installation of the machine-readable visa system should help rectify this situation.
Unless an American officer overrides it, the system provides the results of the name check for the American officer’s review. Moreover, Bureau officials believe improved procedures and software enhancements taking effect on April 30, 1996, will make unsupervised name checks impossible. Consular officers will be required to certify in writing that they have checked the automated lookout system and that there is no basis for excluding the applicant. Three of the nine posts we visited demonstrated a lack of physical security over visa equipment and supplies. Without adequate controls, funds, equipment, and supplies can be misappropriated or misused. For example, during our fieldwork at the consulate in Johannesburg, access to the nonimmigrant visa processing area was not physically restricted, and personnel from other sections of the embassy were observed traversing the consular section to reach other parts of the embassy. In addition, the safe containing visa supplies was left unsecured on several occasions, and refused visa applications were not stored in a locked storage case as required. Two of the posts we visited reported problems with using required reports to reconcile their daily activities. State’s nonimmigrant visa reconciliation procedures require the posts to (1) maintain a log of visa numbers issued and spoiled, (2) inspect spoiled visas before entering them in the log, (3) ensure that each application was approved by an authorized officer, and (4) verify that each number in the visa number series is accounted for, as the sketch below illustrates. The failure to follow these procedures provides obvious opportunities for fraud. Consular officials in Seoul said they could not use the reports generated by the nonimmigrant visa processing system to reconcile the number of visas issued to the number of used foils. The consular officials believed this was because the system was designed for posts that accept, adjudicate, and issue visas on the same day, and posts as large as Seoul could not produce visas in one day. As a result, they said that they had developed their own system of accounting for visa foils. We also observed reconciliation problems in Sydney. Three of the posts we visited also failed to comply with established cashiering procedures, such as reconciling services rendered with collections received. Routine reconciliations are an essential tool in detecting employee malfeasance. In Nairobi, neither the accountable officer nor the budget and fiscal officer reconciled collections with services. They said they were unaware of the requirement. In Johannesburg, the accountable officer was reconciling fees collected with services rendered but was not conducting periodic unannounced cash audits as required by the Consular Management Handbook. The accountable officer for passport operations at the U.S. Embassy in Mexico City also had not conducted periodic cash audits. Automation upgrades and enhancements are the cornerstone of State’s strategy to reduce the vulnerability of passport systems to fraud. Planned efforts involve (1) installing a computer network to connect all domestic passport agencies and serve as a platform to allow State to verify the multiple issuance of passports, (2) enhancing its travel document issuance system so that the passport photo can be printed digitally, and (3) completing the upgrade of its travel document issuance system at all passport agencies. State had planned to have most of the improvements completed by December 1995.
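To make the series-accounting requirement concrete, the sketch below shows the arithmetic behind step (4) of the reconciliation procedures described above: every foil number in an assigned series must appear in either the issued log or the spoiled log, and any number that appears in neither, or in both, warrants investigation. This is a conceptual illustration only, not a depiction of State’s nonimmigrant visa processing system.

```python
# Conceptual sketch of the visa foil series reconciliation described
# above: every foil number must be accounted for as issued or spoiled.

def reconcile_foils(series_start: int, series_end: int,
                    issued: set[int], spoiled: set[int]) -> dict:
    """Flag any foil number that is unaccounted for or double-counted."""
    expected = set(range(series_start, series_end + 1))
    accounted = issued | spoiled
    return {
        "missing": sorted(expected - accounted),     # unaccounted: potential fraud
        "unexpected": sorted(accounted - expected),  # logged outside the series
        "double_counted": sorted(issued & spoiled),  # logged as both issued and spoiled
    }

# Example: foils 1000-1009, with one number unaccounted for.
result = reconcile_foils(1000, 1009,
                         issued={1000, 1001, 1003, 1004, 1005, 1006},
                         spoiled={1007, 1008, 1009})
print(result["missing"])  # [1002] -- must be investigated
```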
However, only one major improvement, installation of a wide-area network, had been completed by that date. The other improvements, in addition to being dependent on the wide-area network for telecommunications, are also dependent on the completion of the upgrades to the passport production system. State’s current goal is for full completion of these enhancements and upgrades by the end of 1996. State indicated that completion of these upgrades was dependent upon the availability of funds. State installed the wide-area network to connect the passport agencies with each other as the telecommunications platform for the photo digitization and the multiple issuance verification initiatives. The Multiple Issuance Verification system is expected to allow Passport Office employees to detect individuals applying at more than one office for multiple passports using the same identity—which State describes as one of the most prevalent forms of passport fraud. Without such a system, there is no way for one office to know before issuance what applications are being processed by any other office. State is also developing a system to print a digitized passport photograph. According to State, a digitized photograph will make it easier to detect a substitution—another prevalent form of passport fraud. State spent about $4.1 million for these improvements in fiscal year 1995 and plans to spend an additional $22 million through fiscal year 1998. State is using revenues from the machine-readable visa processing fees to fund these improvements. State has not completed the upgrade from the 1980 to the 1990 version of its Travel Document Issuance System, which is used to enter data, process, and track the actual production of passports. Systems in 9 of the 14 passport facilities have been upgraded. According to the Consular Bureau, the upgrade replaces an outdated minicomputer-based system with a more modern personal computer-based system, providing the interface needed to take advantage of the wide-area network and other new technologies. The conversion costs about $700,000 to $800,000 per office. Because of the high cost of the upgrade, the conversion had been proceeding at the rate of one passport agency per year, using appropriated funds. Conversion from the 1980 version to the 1990 version of the system is a prerequisite to implementing photo digitization and the Multiple Issuance Verification System. Therefore, the Consular Bureau plans to use machine-readable visa funds to pay for the conversion of the remaining five passport facilities. At those offices, the upgrades will be coupled with the installation of the photo digitization and the multiple issuance enhancements, which the Bureau believes will reduce costs. According to a Bureau official, depending on the availability of the funds, the Bureau plans to have all systems upgraded and enhanced by the end of calendar year 1996. However, the Bureau official acknowledged that this was an ambitious goal. He said variables such as the outcome of systems tests and the possibility that three of the passport offices may move could result in delays. Table 1 shows selected activities and corresponding milestone dates. In commenting orally on a draft of this report, State Department officials generally agreed with the report’s presentation; however, they asserted that many of the generic problems listed in the report are the result of inadequate staffing and resources. They also noted that some points needed clarification or correction.
We have incorporated these changes where appropriate. We conducted our review in Washington, D.C.; Canberra and Sydney, Australia; London, England; Guatemala City, Guatemala; Tokyo, Japan; Nairobi, Kenya; Seoul, Korea; Mexico City, Mexico; and Johannesburg, South Africa. We selected these posts to obtain a cross-section of large and small posts, posts with the machine-readable system, posts with the old visa-issuing system, and posts undergoing changes in their consular workloads. We obtained past State Department Inspector General reports, annual Financial Management Integrity Act reports, and other documents describing visa and passport operations; reviewed agency plans for correcting the previously identified weaknesses; and discussed the status of the corrections with Bureau of Consular Affairs officials. We observed operations at the Washington Passport Agency in Washington, D.C., and at the overseas posts we visited we observed visa and passport operations, examined passport and visa applications, and tested selected internal control procedures. We conducted our review intermittently from May 1994 to March 1996 in accordance with generally accepted government auditing standards. Copies of the report are being sent to the Secretary of State, the Director of the Office of Management and Budget, and interested congressional committees. We will also provide copies to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Other major contributors are listed in appendix I. Diana M. Glod Jose M. Pena, III Michael D. Rohrback Cherie M. Starck La Verne G. Tharpes Steven K. Westley Michael C. Zola
GAO reviewed the Department of State's plan to make its visa and passport operations more efficient and less vulnerable to fraud, focusing on: (1) the status of the plan's key initiatives; and (2) compliance with internal management controls by consular staff at selected posts overseas. GAO found that: (1) State's efforts to overcome the material weaknesses in visa and passport processing have had mixed results; (2) after initial delays, State has made steady progress in installing its machine-readable system (the primary initiative for eliminating visa fraud) and has provided all visa-issuing posts with automated access to its global database containing names of individuals ineligible for a visa; (3) operational problems have diminished the effectiveness of these efforts, including technical problems that have limited the availability and usefulness of the visa improvements, the limited usefulness of embassy lookout committees because of the reluctance of some agencies to share information and the lack of representation of key agencies, and the lack of compliance with management control procedures designed to decrease the vulnerability of consular operations to fraud; (4) State is behind schedule in its modernization and enhancement efforts to reduce passport fraud; (5) State originally planned to have installed a new wide-area network, developed a system to print a digitized passport photograph, and completed installation of a system to verify multiple issuance of passports by December 1995; however, only the installation of the wide-area network, upon which the other two projects depend, has been completed; (6) full implementation also depends on the completion of the modernization of the passport production system, which State indicates is dependent on the availability of funding; and (7) State's current goal is for full implementation by the end of calendar year 1996.
All levels of government share responsibility in the overall U.S. election system. At the federal level, Congress has authority under the Constitution to regulate presidential and congressional elections and to enforce prohibitions against specific discriminatory practices in all federal, state, and local elections. Congress has passed legislation that addresses voter registration, absentee voting, accessibility provisions for the elderly and persons with disabilities, and prohibitions against discriminatory practices. At the state level, individual states are responsible for the administration of both federal elections and their own elections. States regulate the election process, including, for example, the adoption of voluntary voting system guidelines, the state certification and acceptance testing of voting systems, ballot access, registration procedures, absentee voting requirements, the establishment of voting places, the provision of election day workers, and the counting and certification of the vote. In total, the overall U.S. election system can be seen as an assemblage of 55 distinct election systems—those of the 50 states, 4 U.S. territories, and the District of Columbia. Further, although election policy and procedures are legislated primarily at the state level, states typically have decentralized election systems, so that the details of administering elections are carried out at the city or county levels, and voting is done at the local level. As we reported in 2001, local election jurisdictions number more than 10,000, and their sizes vary enormously—from a rural county with about 200 voters to a large urban county, such as Los Angeles County, where the total number of registered voters for the 2000 elections exceeded the registered voter totals in 41 states. Further, these thousands of jurisdictions rely on many different types of voting methods that employ a wide range of voting system makes, models, and versions. Because of the prominent role played by electronic voting systems, testing these systems against national standards is critical to ensuring their security and reliability. Equally critical is ensuring that the laboratories that perform these tests are competent to carry out testing activities. In the United States today, most votes are cast and counted by electronic voting systems, and many states require use of systems that have been certified nationally or by state authorities. However, voting systems are but one facet of a multifaceted, continuous overall election system that involves the interplay of people, processes, and technology during the entire life of a system. All levels of government, as well as commercial voting system manufacturers and system testing laboratories, play key roles in ensuring that voting systems perform as intended. Electronic voting systems are typically developed by manufacturers, then purchased as commercial, off-the-shelf products and operated by state and local election administrators. Viewed at a high level, these activities make up three phases of a system life cycle: product development, acquisition, and operations. (See fig. 1.) Key processes that span these life cycle phases include managing the people, processes, and technologies within each phase and across phases, and testing the systems and components during and at the end of each phase. 
Additionally, voting system standards are important through all of the phases because they provide criteria for developing, testing, and acquiring voting systems, and they specify the necessary documentation for operating the systems. The product development phase includes activities such as establishing requirements for the system, designing a system architecture, developing software, and integrating components. Activities in this phase are performed by the system vendor. The acquisition phase includes activities such as publishing a solicitation, evaluating offers, choosing a voting technology and a vendor, and awarding and administering contracts. For voting systems, activities in this phase are primarily the responsibility of state and local governments but entail some responsibilities that are shared with the system vendor (e.g., entering into the contract). The operations phase consists of activities such as ballot design and programming, setup of systems before voting, pre-election testing, vote capture and counting during elections, recounts and system audits after elections, and storage of systems between elections. Responsibility for activities in this phase typically resides with local jurisdictions, whose officials may, in turn, rely on or obtain assistance from system vendors for aspects of these activities. Standards for voting systems, as will be discussed in a later section, were developed at the national level by the Federal Election Commission in 1990 and 2002 and were updated by EAC in 2005. In the product development phase, voting system standards serve as requirements that developers must meet in building systems. In the acquisition phase, they also provide a framework that state and local governments can use to evaluate systems. In the operations phase, they specify the necessary documentation for operating the systems. Testing processes are conducted throughout the life cycle of a voting system. Voting system vendors conduct product testing during development of the system and its components. Federal certification testing of products submitted by system vendors is conducted by national voting system testing laboratories (VSTLs). States may conduct evaluation testing before acquiring a system to determine how well products meet their state-specific specifications, or they may conduct certification testing to ensure that a system performs its functions as specified by state laws and requirements. Once a voting system is delivered by the system vendor, states and local jurisdictions may conduct acceptance testing to ensure that the system satisfies functional requirements. Finally, local jurisdictions typically conduct logic and accuracy tests related to each election and sometimes subject portions of the system to parallel testing during each election to ensure that the system components perform accurately; the sketch below illustrates the idea behind logic and accuracy testing. Management processes ensure that each life cycle phase produces a desirable outcome. Typical management activities that span the system life cycle include planning, configuration management, system performance review and evaluation, problem tracking and correction, human capital management, and user training. These activities are conducted by the responsible parties in each life cycle phase. In 2004, we reported that the performance of electronic voting systems, like any type of automated information system, can be judged on several bases, including their security, accuracy, ease of use, efficiency, and cost.
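On the logic and accuracy testing mentioned above: the underlying idea is to run a test deck of ballots whose correct totals are known in advance through the tabulation component and require an exact match. The sketch below is a generic illustration of that idea under assumed names; it does not represent any actual voting system's interfaces.

```python
# Illustrative sketch of pre-election logic and accuracy testing:
# tabulate a test deck with known contents and require the reported
# totals to match the expected totals exactly.

from collections import Counter
from typing import Callable, Iterable

def logic_and_accuracy_check(test_deck: list[str],
                             tabulate: Callable[[Iterable[str]], Counter]) -> bool:
    """Return True only if the tabulator reproduces the known totals."""
    expected = Counter(test_deck)   # correct totals, known by construction
    reported = tabulate(test_deck)  # totals the system under test reports
    return reported == expected

# Example with a stand-in tabulator; a real test would invoke the
# jurisdiction's tabulation component instead.
deck = ["Candidate A"] * 3 + ["Candidate B"] * 2 + ["undervote"]
assert logic_and_accuracy_check(deck, Counter)
```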
We also reported that voting system performance depends on how the system was designed, developed, and implemented. Since the passage of HAVA, the use of electronic voting systems has increased and become the predominant method of voting. However, concerns have been raised about the security and reliability of these systems. As we have previously reported, testing and certifying voting systems is one critical step in acquiring, deploying, operating, and administering voting systems that better ensures they perform securely and reliably. Among other things, rigorous execution and careful documentation of system testing is a proven way to help ensure that system problems are found before the systems are deployed and used in an election. To accomplish this, it is vital that the organizations that test the systems be qualified and competent to do so. For voting systems, a key testing organization is a federally accredited, national VSTL. In general, accreditation is the formal recognition that a laboratory is competent to carry out specific types of tests or calibrations. Federally accredited laboratories perform many different types of testing and related activities on various products, ranging from inspecting grain to certifying maritime cargo gear. The genesis of laboratory accreditation programs owes largely to agencies’ need to assure themselves of the competency of the organizations responsible for testing products or services that involve the use of federal funds. To provide national recognition for competent laboratories, the NIST Director established the National Voluntary Laboratory Accreditation Program (NVLAP) in 1976 at the request of the private sector. Under this program, which is based on internationally accepted standards, NIST accredits laboratories that it finds competent to perform specific types of tests or calibrations. In June 2004, NVLAP announced the establishment, in accordance with HAVA, of an accreditation program for laboratories that test voting systems using standards determined by EAC. Enacted in October 2002, HAVA affected nearly every aspect of the voting process, from voting technology to provisional ballots and from voter registration to poll worker training. In particular, the act authorized $3.86 billion in funding over several fiscal years to replace punch card and mechanical lever voting equipment, improve election administration and accessibility, train poll workers, and perform research and pilot studies. HAVA also established EAC, provided for the appointment of four commissioners, and specified the process for selecting an executive director. Generally speaking, EAC is to assist in the administration of federal elections and in administering certain federal election laws and programs. Since the passage of HAVA in 2002, the federal government has taken steps to implement the act’s provisions. For example, after beginning operations in January 2004, EAC updated the existing federal voluntary standards for voting systems, including strengthening provisions related to security and reliability. Additionally, EAC established an interim VSTL accreditation program that leveraged a predecessor program run by the National Association of State Elections Directors, and EAC and NIST then established companion accreditation programs that replaced the interim program. Federal standards for voting systems were first issued by the Federal Election Commission in 1990.
These federal standards identified minimum functional and performance requirements for electronic voting equipment, which states were free to adopt in whole, in part, or not at all, and specified test procedures to ensure that the equipment met those requirements. In 2002, the Federal Election Commission issued its Voting System Standards (VSS), which updated the 1990 standards to reflect more modern voting system technologies. In 2005, we reported that these standards identified minimum functional and performance requirements for voting systems but were not sufficient to ensure secure and reliable voting systems. As a result, we recommended that EAC work to define specific tasks, measurable outcomes, milestones, and resource needs to improve the voting system standards. Until such improvements were made, election administrators were at risk of relying on voting systems that were not developed, acquired, tested, operated, or managed in accordance with rigorous security and reliability standards—potentially affecting the reliability of future elections and voter confidence in the accuracy of the vote count. Following the enactment of HAVA in 2002 and the establishment of EAC in 2004, EAC adopted the Voluntary Voting System Guidelines (VVSG) in 2005. The VVSG specify the functional requirements, performance characteristics, documentation requirements, and test evaluation criteria for the national certification of voting systems. Accredited testing laboratories are to use the VVSG to develop test plans and procedures for the analysis and testing of systems in support of EAC’s voting system certification program. The VVSG are also used by voting system manufacturers as the basis for designing and deploying systems that can be federally certified. We reported in 2001 that the National Association of State Elections Directors was accrediting independent test authorities to test voting equipment against the Federal Election Commission standards. Under this program, three laboratories were accredited. Under HAVA, NIST is to recommend laboratories for EAC accreditation. In 2006, NIST notified EAC that its initial recommendations might not be available until sometime in 2007. As a result, EAC initiated an interim accreditation program and invited the three laboratories accredited by the state elections directors to apply. As part of the interim program, laboratories were required to attest to a set of EAC-required conditions and practices, including certifying the integrity of personnel, the absence of conflicts of interest, and the financial stability of the laboratory. In August and September 2006, EAC granted interim accreditation to two of the three laboratories invited to apply. EAC terminated its interim program in March 2007. HAVA assigned responsibilities for laboratory accreditation to both EAC and NIST. In general, to reach an accreditation decision, NIST is to focus on assessing laboratory technical qualifications, while EAC is to use those assessment results and recommendations and augment them with its own review of related laboratory capabilities. See table 1 for the two agencies’ HAVA responsibilities.
Additionally, the agreement states that the two agencies will coordinate to maintain continuity between their respective accreditation programs. in meeting HAVA’s requirements are The NIST and EAC accreditation programs can be viewed together as forming a federal VSTL accreditation process that consists of a series o complementary steps. These steps are depicted in figure 2, where the numbers correspond to a detailed narrative description below. As of May 2008, EAC has accredited four laboratories. These laboratories are SysTest Labs, LLC; Wyle Laboratories, Inc.; iBeta Quality Assurance; and InfoGard Laboratories, Inc. A fifth laboratory, CIBER Inc., has been granted NVLAP accreditation and has been recommended to, but not yet accredited by, EAC. InfoGard Laboratories, Inc., whose NVLAP accreditation expires in June 2008, has recently notified NIST and EAC that it would not apply to renew its accreditation, citing the volatility of the voting system environment as one reason. The timeline for each of these accreditations, and other accreditation program activities, is found in figure 3. NIST’s defined approach to accrediting voting system laboratories largely reflects applicable HAVA requirements and relevant international standards, both of which are necessary to an effective program. However, this approach is continuing to evolve based on issues realized during NIST’s implementation experience to date. In particular, because NIST’s defined program does not, for example, specify the nature and extent of assessment documentation to generate or retain or specify the version of the voting system standards to be used, our analysis of NIST’s efforts in accrediting four laboratories could not confirm that the agency has consistently followed its defined accreditation program. NIST officials stated that these limitations are due in part to the relative newness of the program and that they will be addressed by updating the accreditation program handbook. However, they said that they do not have documented plans to accomplish this. Until these limitations are addressed, NIST will be challenged in accrediting voting system laboratories in a consistent and verifiable manner. NIST has defined its voting system accreditation program to address relevant HAVA requirements. According to HAVA, NIST is to conduct reviews of independent, nonfederal voting system testing laboratories and submit to EAC a list of proposed voting system testing laboratories and monitor and review the performance of those proposed laboratories that EAC accredits, including making recommendations to EAC regarding accreditation continuance and revocation. NIST’s defined voting system accreditation program satisfies both of these requirements. With respect to the first, NIST announced in June 2004 the establishment of its voting system testing laboratory accreditation program as part of NVLAP, a statutorily created program for unbiased, third parties to establish the competence of national independent laboratories. As such, NIST adopted its NVLAP handbook as the basis for its defined approach to reviewing VSTLs and has supplemented it with a handbook that is specific to voting system testing. With respect to the second HAVA requirement, the supplemental handbook cited above states that the NIST Director will recommend NVLAP-accredited VSTLs to EAC for subsequent commission accreditation. 
Additionally, NIST’s handbooks provide both for monitoring accredited laboratories and for making recommendations regarding a laboratory’s continued accreditation. For example, the handbook states that a monitoring visit may occur at both scheduled and unscheduled times and the scope may be limited to a few items or include a full review. It also states that a reaccreditation review shall be conducted in accordance with the procedures used to initially accredit laboratories. Further, the handbook identifies accreditation or reaccreditation decision options, including granting, denying, or modifying the scope of an accreditation. According to NIST officials, these HAVA requirements are relevant and important to defining an effective voting system testing laboratory accreditation program. By incorporating them, NIST has reflected one key aspect of an effectively defined program. NIST’s VSTL accreditation program reflects internationally recognized standards for establishing and conducting accreditation activities. These standards are published by the International Organization for Standardization (ISO), and the two that are germane to this accreditation program are (1) ISO/IEC 17011, which establishes general requirements for accreditation bodies, and (2) ISO/IEC 17025, which establishes the general requirements for reviewing the competence of laboratories. According to NIST program documentation, adherence to these standards allows NVLAP both to operate as an unbiased, third-party accreditation body and to use a quality management system compliant with international standards. As a result, NIST has incorporated key aspects of an effective accreditation body into its voting system accreditation program. ISO/IEC 17011 requires that an accrediting body have, among other things, (1) a management system for accreditation activities, (2) a policy defining the types of records to be retained and how those records will be maintained, (3) a clear description of the accreditation process that covers the rights and responsibilities of those seeking accreditation, and (4) a clear description of the accreditation activities to be performed. NIST VSTL accreditation program-related documentation, including its program handbooks, satisfies each of these requirements. In fact, NIST has cross-referenced its documentation to each ISO/IEC 17011 requirement. Specifically, the first requirement is cross-referenced to the NVLAP Management System Manual, which describes the overall accreditation program’s management policies and control structure, and the second is cross-referenced to the program’s record-keeping policy, which specifies what types of records should be maintained and how they should be maintained. The third and fourth requirements are cross-referenced to the accreditation process descriptions in both the Management System Manual and the general handbook. Together, these documents contain, for example, (1) the rights of laboratories applying for accreditation and (2) the scope of accreditation activities to be performed, including a preassessment review, an on-site review, and a final on-site assessment report. ISO/IEC 17025 requires that accreditation reviews cover specific topics. These include (1) laboratory personnel independence and conflicts of interest; (2) a laboratory system for quality control (i.e., a framework for producing reliable results and continuous improvement to laboratory procedures); and (3) a laboratory mechanism for collecting and responding to customer complaints.
Additionally, the standard establishes basic technical requirements that a laboratory has to meet, and thus that reviews are to cover, including (1) competent laboratory personnel who are capable of executing the planned tests, (2) appropriate tests and test methods, and (3) clear and accurate test result documentation. NIST voting system testing laboratory accreditation program-related documents, including its program handbooks, satisfy these requirements. First, the general handbook defines the requirement for a laboratory to have personnel that are independent and free of any conflict of interest. Second, the handbook requires that a laboratory have a management quality control system and that this system provide for reliable results and continuous improvement to laboratory procedures. Third, the handbook requires that a laboratory have a mechanism for receiving and responding to customer complaints. Last, the handbook establishes certain technical requirements that a laboratory must meet, such as having competent laboratory personnel capable of executing the planned tests, using appropriate tests and test methods, and documenting test results in a clear and accurate manner. For several of these requirements, NIST’s voting-specific supplemental handbook augments the general handbook. For example, this supplemental handbook requires laboratories to submit a quality control manual, as well as information to demonstrate the competence of laboratory administrative and technical staff. Further, it requires that a laboratory’s training program be updated so that staff can be retrained as new versions of voting system standards are issued. NIST has reported on the importance of ensuring that those persons who perform accreditation assessments are sufficiently qualified and that the assessments themselves are based on explicitly defined criteria and are adequately documented. Nevertheless, NIST has not fully reflected key aspects of these findings in its defined approach to accrediting voting system testing laboratories. For example, it has not specified the basis for determining the qualifications of its accreditation assessors, and while a draft update to its handbook now includes the specific voting system standards to be used when performing an accreditation assessment, this handbook was only recently approved. According to NIST officials, these gaps are due to the newness of the accreditation program and will be addressed in the near future. Because these gaps have confused laboratories as to what standards they were to meet, and may have resulted in differences in how accreditations have been performed to date, it is important that the gaps be addressed. NIST has reported on the importance of having competent and qualified human resources to support accreditation programs. According to these findings, an accreditation program should, among other things, provide for (1) having experienced and qualified assessors to perform accreditation assessments; (2) demonstrating an assessor’s qualifications using defined documentation and explicit criteria that encompass the person’s education, experience, and training; and (3) training (initial and continuing) for assessors. NIST’s defined approach to VSTL accreditation does not provide for all of these requirements. To its credit, its program handbook identifies the need for experienced and qualified assessors in the execution of accreditation activities and provides for each assessor’s qualifications to be documented.
Further, it has defined generic training that applies to all of its accreditation assessors. For example, the NVLAP Assessor Training Syllabus includes training on ISO/IEC 17011 and 17025, as well as training on the NVLAP general handbook. In addition, the VSTL accreditation program manager stated that new assessors receive training on the 2002 VSS and 2005 VVSG and that periodic training seminars are provided to assessors on changes to either the general handbook or the 2005 VVSG. The program manager also told us that candidate assessors must submit some form of documentation (e.g., a resume) and that this documentation is used to evaluate, rank, and select the candidates who are best qualified. The NIST VSTL assessors that we interviewed confirmed that they were required to submit such documentation at NIST’s request. However, NIST’s defined approach does not cite the explicit capabilities and qualifications that an assessor must meet or the associated documentation needed to demonstrate these capabilities and qualifications. According to the program manager, this is because the field of potential assessors in the voting system arena is small and specialized and because they focused on defining other aspects of the program that were higher priorities. Further, NIST has not defined and documented the specific training requirements needed to be a VSTL lead assessor or a technical assessor for the VSTL program. According to the program manager, this is because these assessors receive all the training they need by working on the job with more experienced assessors. Not specifying criteria governing assessor qualifications and training is of concern because differences in assessors’ capabilities could cause inconsistencies in how assessments are performed. NIST recognizes the importance of specifying explicit criteria against which all candidate laboratories will be assessed and fully documenting the assessments that are performed. Specifically, the general handbook provides the criteria and requirements that will be used to evaluate basic laboratory capabilities. It also states that technical requirements specific to a given field of accreditation are published in program-specific handbooks. To that end, NIST published a supplemental program-specific handbook in December 2005 that provided the voting-specific requirements to be used to evaluate VSTLs, additional guidance, and related interpretive information. However, NIST’s 2005 supplemental handbook does not contain sufficient criteria against which to evaluate VSTLs. It identifies specific requirements that laboratories are to demonstrate relative to the 2002 VSS but not the 2005 VVSG. For example, the handbook states that laboratories are expected to develop, validate, and document test methods that meet the 2002 VSS. However, it does not refer to the 2005 VVSG. In addition, the program-specific checklist that accompanies this version of the handbook does not identify all the 2005 VVSG standards against which laboratories are evaluated. Specifically, this checklist makes reference to the VVSG in relation to just a few checklist requirements. According to the NIST program manager, the 2005 handbook did not refer to the 2005 VVSG requirements because only the 2002 VSS requirements were mandatory at the time it was published. He further stated that, despite the fact that the 2005 VVSG requirements were not included in that handbook, NIST assessors were expected to use them when performing the first laboratory assessments.
Representatives for two laboratories stated that because these requirements were not documented or identified in the NIST handbooks, they did not learn that they would be required to demonstrate 2005 VVSG-based capabilities until the NIST on-site assessment teams arrived. In December 2007, NIST released draft revisions of the voting program-specific handbook and checklist, stating that laboratories are expected to meet both the 2002 VSS and the 2005 VVSG. In addition, the 2007 draft handbook clearly specifies that laboratories must demonstrate how developed test methods and planned tests trace back to and satisfy both the 2002 VSS and the 2005 VVSG. Taken together, the new handbook and checklist should better identify the requirements and criteria used to evaluate a laboratory and document the results. According to NIST, the new handbook and checklist have recently been finalized, and both are now in use. NIST has found that reliable and accurate documentation provides assurance that laboratory accreditation activities have been effectively fulfilled. However, in its efforts to date in accrediting four VSTLs, documentation of the assessments does not show that NIST has fully followed its defined accreditation approach. While we could not determine whether this is due to incomplete documentation of the steps performed and the decisions made during an assessment or due to steps not being performed as defined, this absence of verifiable evidence raises questions about the consistency of the assessments and the resultant accreditations. Without adequately documenting each assessment, including all steps performed and the basis for any steps not performed, such questions may continue to be raised. To NIST’s credit, available documentation shows that it consistently followed some aspects of its defined approach in accrediting the four laboratories. For example, we verified that NIST received an application from each of the laboratories as required, and our review of completed checklists and summary reports shows that preassessment reviews and on-site assessments were performed for each laboratory, as was required. According to a lead assessor, the preassessment review usually focused on the laboratories’ quality assurance manuals. Moreover, the completed checklists identified, for each listed requirement, whether the requirement was met and, in some cases, included comments as to how a laboratory addressed a requirement. Also as required, NIST received laboratory responses describing how unmet requirements were addressed within specified time frames, used the responses in making accreditation decisions, and notified EAC of its decisions via letters of recommendation. Furthermore, NIST has recently begun reaccreditation reviews at two laboratories, as required. However, documentation does not show that NIST has consistently followed other aspects of its defined approach. Our analysis of the checklists that are to be used to both guide and document a given assessment, including identifying unmet requirements and capturing assessor comments and observations, shows some differences. For example: One type of checklist (the supplemental handbook checklist) was prepared for only two of the four laboratory assessments. According to the program manager, this is because even though a draft revision of this checklist was actually used to assess the other two laboratories, the assessment results were recorded on a different checklist (the general handbook checklist).
While this is indicated on one of the two checklists, it is not indicated on the other. On the checklist used for one laboratory, an assessor marked several sections as “TA” with no explanation as to what this means. Also, the checklist used for another laboratory did not identify whether most of the requirements were met or not met. Further, the checklist for a third laboratory had one section marked as “not applicable” but included no explanation as to why that section did not apply, while the checklist for a different laboratory marked the same section as “not applicable” but included a reason for doing so. Notwithstanding these differences, the program manager told us that each laboratory was assessed using the same requirements and all assessments to date were performed in a consistent manner. On the basis of available documentation, however, we could not verify that this is the case. As a result, it is not clear that NIST has consistently followed its defined approach. Available documentation also does not show that NIST followed other aspects of its approach. For example: The program handbook states that each laboratory is to identify the requested scope of accreditation in its application package. However, our analysis of the four application packages shows that two laboratories did not specify a requested scope of accreditation. According to the program manager, the scope of accreditation for all laboratories was the 2002 VSS and 2005 VVSG because, even though the latter standards were not yet in effect at the time, they were anticipated to be in effect in the near future. However, NIST did not have documentation that notified the laboratories of this scope of accreditation or that indicated whether this scope was established by EAC, NIST, or the laboratories. The program handbook states that after receiving a laboratory’s application package, NIST will acknowledge its receipt in writing and will inform the laboratory of the next steps in the accreditation process. However, NIST did not have documentation demonstrating that this was done. According to the program manager, this was handled via telephone conversations. However, representatives for several laboratories noted that these calls did not clearly establish expectations, adding that some expectations were not communicated until the NIST team assessors arrived to conduct the on-site assessment. The program manager stated that these deviations from the defined approach are attributable to the relative newness of the program, but despite these discrepancies, each laboratory was assessed consistently. However, we could not verify this, and thus it is not clear that NIST has consistently followed its defined approach. According to this official, future versions of the program handbook would address these limitations. However, documented plans for doing so have not been developed. EAC has recently defined its voting system laboratory accreditation approach in a draft program manual. However, this draft manual omits important content. While addressing relevant HAVA requirements, the draft manual does not adequately define key accreditation factors that NIST has identified, and a key accreditation feature that we have previously reported as being integral to an effective accreditation program. Moreover, not all factors and features that the draft manual does include have been defined to a level that would ensure thorough, consistent, and verifiable implementation. 
Because this manual was not available for EAC to use on the four laboratory accreditations that it has completed, the accreditations were performed using a largely undocumented series of steps. As a result, the thoroughness and consistency of these accreditations is not clear. According to EAC officials, these gaps are due to the agency’s limited resources being focused on other issues, and will be addressed as its accreditation program evolves. However, they said that they do not yet have documented plans to accomplish this. Until EAC fully defines a repeatable VSTL accreditation approach, it will be challenged in its ability to treat all laboratories consistently and produce verifiable results. In February 2008, EAC issued a draft version of a VSTL accreditation program manual for public comment. According to HAVA, EAC’s accreditation program is to meet certain requirements. Specifically, it is to provide for voting system hardware and software testing, certification, decertification, and recertification by accredited laboratories. Additionally, it is to base laboratory accreditation decisions, including decisions to revoke an accreditation, on a vote of the commissioners, and it is to provide for a published explanation of any commission decision to accredit any laboratory that was not first recommended for accreditation by NIST. To EAC’s credit, its draft accreditation program manual addresses each of these requirements. First, the manual defines the role that the laboratories are to play relative to voting system testing, certification, recertification, and decertification, and it incorporates by reference an EAC companion voting system certification manual that defines requirements and process steps for voting system testing and certification-related activities. With respect to the remaining three HAVA requirements, the draft EAC accreditation manual also requires (1) that the commissioners vote on the accreditation of laboratories recommended by NIST for accreditation, (2) that EAC publish an explanation for the accreditation of any laboratory not recommended by NIST for accreditation, and (3) that the commissioners vote on the proposed revocation of a laboratory’s accreditation. According to EAC officials, its draft approach incorporates HAVA requirements because the commission is focused on meeting its legal obligations in all aspects of its operations, including VSTL accreditation. In doing so, EAC has addressed one important aspect of having an effective accreditation program. Beyond addressing relevant HAVA requirements, EAC’s draft accreditation manual defines an accreditation process, including program phases, requirements, and certain evaluation criteria. However, it does not do so in a manner that fully satisfies factors that NIST has reported can affect the effectiveness of accreditation programs. Moreover, it does not adequately address a set of features that our research shows are common to federal accreditation programs and that can influence a program’s effectiveness. According to EAC officials, these factors and features are not fully addressed in the draft program manual because its accreditation program is still in its early stages of development and is still evolving. Until they are fully addressed, the effectiveness of EAC’s accreditation program will be limited.
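To make the three HAVA decision rules that the draft manual addresses concrete (a commissioner vote on accreditation, a published explanation when a laboratory is accredited without a NIST recommendation, and a commissioner vote on revocation), the minimal Python sketch below encodes them as executable checks. It is purely illustrative: the function and field names are ours, and the simple-majority rule stands in for EAC’s actual voting procedure, which the report does not specify.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Laboratory:
    """Illustrative record for a laboratory seeking EAC accreditation."""
    name: str
    nist_recommended: bool  # whether NIST recommended the lab to EAC

def decide_accreditation(lab: Laboratory, votes_for: int, votes_against: int,
                         published_explanation: Optional[str] = None) -> bool:
    """Apply the HAVA accreditation rules sketched above.

    A simple majority is assumed here for illustration only; HAVA
    requires a vote of the commissioners but this sketch does not
    model EAC's actual voting procedure.
    """
    approved = votes_for > votes_against
    if approved and not lab.nist_recommended and not published_explanation:
        # HAVA requires a published explanation when EAC accredits a
        # laboratory that NIST did not recommend.
        raise ValueError(f"{lab.name}: a published explanation is required "
                         "to accredit a laboratory not recommended by NIST")
    return approved

def decide_revocation(votes_for: int, votes_against: int) -> bool:
    """Revocation, like accreditation, is decided by commissioner vote."""
    return votes_for > votes_against

# Example: a NIST-recommended laboratory approved by a 3-1 vote.
print(decide_accreditation(Laboratory("Example Lab", True), 3, 1))  # True
```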
According to NIST, having confidence in and ensuring appropriate use of an accredited testing laboratory requires that accreditation stakeholders have an adequate understanding of the accreditation process, scope, and related criteria. NIST further reports that confidence in the accreditation process can be traced to a number of factors that will influence the thoroughness and competence of accreditation programs, and thus these factors can be viewed as essential accreditation program characteristics. They include having (1) published procedures governing how the accreditation program is to be executed, such as procedures for granting, maintaining, modifying, suspending, and withdrawing accreditation; (2) specific instructions, steps, and criteria for those who conduct an accreditation assessment (assessors) to follow, such as a test methodology that is acceptable to the accreditation program; (3) knowledgeable and experienced assessors to execute the instructions and steps and apply the related criteria; and (4) complete records on the data collected, results found, and reports prepared relative to each assessment performed. EAC’s draft accreditation program manual addresses one of these factors but does not fully address the other three. (See table 2.) For example, while the manual requires that EAC maintain records, it only addresses the retention of records associated with the testing of voting systems and not those associated with the accreditation of laboratories. EAC officials told us that testing records are meant to include accreditation records, although they added that this is not explicit in the manual and needs to be clarified. Further, the manual is silent on the steps to be followed and criteria to be applied in reviewing a laboratory’s application and on the qualifications required for accreditation reviewers. By not fully addressing these factors, EAC increases the risk that its accreditation reviews will not be performed consistently and comprehensively. As we have previously reported, the nature and focus of federal programs for accrediting laboratories vary, but these programs nevertheless include certain common features. In particular, these programs require laboratories to provide certain information to the accrediting body, and they provide for evaluation of this information by the accrediting body in making an accreditation determination. As we reported, the required information is to include, among other things, the laboratory’s (1) organizational information, (2) records and record-keeping policy, (3) test methods and procedures, (4) conflict of interest policy, and (5) financial stability. To its credit, EAC’s draft accreditation manual provides for laboratories to submit information relative to each of these features that are common to federal accreditation programs. For example, it provides for laboratories to submit organizational information, such as location(s), ownership, and organizational chart; a written policy for maintaining accreditation-related records for 5 years; conflict of interest policies and procedures; test-related policies and procedures, as well as system-specific test plans; and financial information needed to demonstrate stability. Moreover, for four of the five features, the manual identifies the specific types of information needed for accreditation and how the information is to be evaluated, including the criteria that are to be used in evaluating it.
However, for the financial stability feature, the manual does not describe what specific documents are required from the laboratory to satisfy this requirement, nor does the manual indicate how information provided by a laboratory will be evaluated. At the time of our review, EAC’s Director of Voting System Testing and Certification told us that the draft accreditation manual was to be submitted for approval and that this draft did not address all of the limitations cited above. For example, it would not contain the information needed, or the evaluation approach and criteria to be used, in making determinations about financial stability because this decision is to be based on what the director referred to as a “reasonableness” test that involves EAC evaluation of the information relative to that provided by other laboratories. Further, while EAC officials said that they plan to evolve their approach to VSTL accreditation and to address these gaps, EAC does not have documented plans for accomplishing this. Without clearly defining the information to be used and how it is to be used, EAC increases the risk that financial stability determinations will not be consistently and thoroughly made. As of May 2008, EAC has accredited four laboratories, but the documentation associated with each of these accreditations is not sufficient to recreate a meaningful understanding of how each evaluation was performed and how decisions were made, and thus the bases for each accreditation were not clear. Specifically, each of the accreditations occurred before EAC had defined its approach for conducting them. Because of this, EAC performed each one using a broadly defined process outlined in a letter to each laboratory and an associated checklist that only indicated whether certain documents were received. Our analysis of these letters showed that the correspondence sent to each laboratory was identical, identifying three basic review steps to be performed and citing a list of documents that the laboratories were to provide as part of their applications. However, the letters did not describe in any manner how EAC would review the submitted material, including the criteria to be used. According to EAC officials, the review steps were not documented. Instead, they were derived by a single reviewer using (1) the applications and accompanying documents submitted by the laboratories, (2) familiarity with the materials used by the accreditation program sponsored by the state election directors, and (3) the reviewer’s own judgment. Further, while the reviews were supported by a checklist that covered each of the items that was to be included in the laboratory applications and provided space for the reviewer(s) to make notes relative to each of these items, the checklists did not include any guidance or methodology, including criteria, for evaluating the submitted items. Rather, the EAC accreditation program director told us that he was the reviewer on all the accreditations and that he applied his own undocumented tests for reasonableness in deciding on the submissions’ adequacy and acceptability. Our analysis of the checklists for each laboratory accreditation showed that while the same checklist was used for each laboratory, the checklists did not provide a basis for evaluating the sufficiency of those documents or for documenting that evaluation. In some cases, additional communications occurred between the reviewer and the laboratory to obtain additional documents.
However, no documentation was available to demonstrate what standards or other criteria the laboratories were held to or how their submissions were otherwise reviewed. For example, each of the checklists indicated that each laboratory provided “a copy of the laboratory’s conflict of interest policy.” However, they did not specify, for example, whether the policy adequately addressed particular requirements. Nevertheless, for three of the four accredited laboratories, documentation shows that EAC sought clarification on or modification to the policies provided, thus suggesting that some form of review was performed against more detailed requirements. Similarly, while the checklists indicate that the laboratories disclosed their respective coverage limits for general liability insurance policies, and in one case EAC communicated to the laboratory that the limits appeared to be low, no documentation specifies the expected coverage limits. According to the EAC Director of Voting System Testing and Certification, this determination was made after comparing limits among the laboratories and was not based on any predetermined threshold. Further, while the checklists indicate that each laboratory provided audited financial statements, there is no documentation indicating how these statements were reviewed. According to the EAC program director, the lack of documentation demonstrating the basis for EAC’s laboratory accreditations is due to the need at the time to move quickly in accrediting the laboratories and the fact that using the same individual to conduct all of the reviews negated the need for greater documentation. Without such documentation, however, we could not fully establish how the accreditations were performed, including whether there was an adequate basis for the accreditation decisions reached and whether they were performed consistently. The effectiveness of our nation’s overall election system depends on many interrelated and interdependent variables, including the security and reliability of voting systems. Both NIST and EAC play critical roles in ensuring that the laboratories that test voting systems for these attributes have the capability, experience, and competence necessary to test a system against the relevant standards. NIST has recently established an accreditation program that largely accomplishes this, and while EAC is not as far along, it has a foundation upon which it can build. However, important elements are still missing from both programs. Specifically, the current NIST approach does not define requirements for assessor qualifications and training or ensure that assessments are fully documented. Additionally, EAC has not developed program management practices that are fully consistent with what NIST has found to be hallmarks of an effective accreditation program, nor has the agency adequately specified how evaluations are to be performed and documented. As a result, opportunities exist for NIST and EAC to further define and implement their respective programs in ways that promote greater consistency, repeatability, and transparency—and thus improve the results achieved. It is also important for NIST and EAC to follow through on their stated intentions to evolve their respective programs, building on what they have already accomplished through the development and execution of well-defined plans of action.
If they do not, both will be challenged in their ability to consistently provide the American people with adequate assurance that accredited laboratories are qualified to test the voting systems that will eventually be used in U.S. elections. To help NIST in evolving its VSTL accreditation program, we recommend that the Director of NIST ensure that the accreditation program manager develops and executes plans that specify tasks, milestones, resources, and performance measures that provide for the following two actions: (1) establish and implement transparent requirements for the technical qualifications and training of accreditation assessors and (2) ensure that each laboratory accreditation review is fully and consistently documented in accordance with NIST program requirements. To help EAC in evolving its VSTL accreditation program, we recommend that the Chair of the EAC ensure that the EAC Executive Director develops and executes plans that specify tasks, milestones, resources, and performance measures that provide for the following action: establish and implement practices for the VSTL accreditation program consistent with accreditation program management guidance published by NIST and GAO, including (1) documentation of specific accreditation steps and criteria to guide assessors in conducting each laboratory review; (2) transparent requirements for the qualifications of accreditation reviewers; (3) requirements for the adequate maintenance of records related to the VSTL accreditation program; and (4) requirements for determining laboratory financial stability. Both NIST and EAC provided written comments on a draft of this report, signed by the Deputy Director of NIST and the Executive Director of EAC, respectively. These comments are described below along with our response to them. In its comments, NIST stated that it appreciates our careful review of its VSTL program and generally concurs with our conclusions that its program must continue to evolve and improve. However, NIST also provided comments to clarify the current status of the program relative to three of our findings. With respect to our finding that NIST’s defined approach for accrediting VSTLs does not cite explicit qualifications for the persons who conduct the technical assessments, the institute stated that it does explicitly cite assessor qualifications for its overall national laboratory accreditation program, adding that this approach to specifying assessor qualifications has a proven record of success. It also stated that the overall program’s management manual requires all assessors to meet defined criteria in such areas as laboratory experience, assessment skills, and technical knowledge, and that candidate assessors must submit information addressing each of these areas as well as factors addressing technical competence in a given laboratory’s focus area (e.g., voting systems). Further, it stated that candidate assessors’ qualification ratings and rankings are captured in worksheets. In response, we do not disagree with any of these statements. However, our finding is that NIST’s defined approach for VSTL accreditation does not specify requirements for persons who assess those laboratories that specifically test voting systems. NIST’s own written comments confirm this finding, stating that specific requirements for assessors are not separately documented for each of its national laboratory accreditation programs, such as the VSTL program. Therefore, we have not modified this finding or the related recommendation.
Regarding our finding that NIST’s defined approach for accrediting VSTLs has not always cited the current voting system standards, the institute affirmed this finding in its comments, stating that the VSTL program handbook that it provided to us only cites the 2002 system standards, as these were the only standards in place when the handbook was published. However, NIST also noted that when the 2005 system guidelines were adopted in December 2005, it began the process of updating the handbook and associated assessment checklist, and that the handbook update was recently finalized for publication and is now being used. In response, we stand by our finding that NIST’s defined approach has not always cited the current voting system standards, which NIST acknowledges in its comments. However, we also recognize that NIST has recently addressed this inconsistency by finalizing its new handbook and the associated assessment checklist. In light of NIST’s recent actions, we have updated the report to acknowledge the finalization of the handbook and checklist, and removed the associated recommendation that was contained in our draft report for NIST to ensure that its defined approach addresses all required voting system standards. Regarding our finding that available documentation from completed accreditations does not show that NIST has consistently followed all aspects of its defined approach, the institute stated that, among other things, all required documents for its VSTL accreditation program are currently in use and reflect the recent update to its handbook and checklist, and that all these documents are securely maintained. In response, we do not question these statements; however, they are not pertinent to our finding. Specifically, our finding is that the four completed accreditations that we reviewed were not consistently documented. As we state in our report, we reviewed the documentation associated with the accreditation assessments for these four laboratories, and we found that the four assessments were not documented in a consistent manner, even though they were based on the same version of the program handbook. For example, neither the laboratory notifications of the scope of the assessment nor the next steps in the accreditation process were consistently documented. Therefore, we have not modified our finding, but have slightly modified our recommendation to make it clear that its intent is to ensure that all phases of the accreditation review are fully and consistently documented. In its comments, EAC described our review and report as being helpful to the commission as it works to fully develop and implement its VSTL program. It also stated that it agrees with the report’s conclusions that additional written internal procedures, standards, and documentation are needed to ensure more consistent and repeatable implementation of the program. The commission added that it generally accepts our recommendations and will work hard to implement them. To assist it in doing so, it sought clarification about two of our recommendations, as discussed below. EAC stated that the recommendation in our draft report for the commission to develop specific accreditation steps and criteria was broadly worded, and thus the recommendation’s intent was not clear.
EAC also stated that it interpreted the recommendation to mean that it should define internal instructions to guide assessors in performing an accreditation, and that the recommendation was not intended to have any impact on its published requirements and procedures governing, for example, granting, suspending, or withdrawing an accreditation. We agree with EAC’s interpretation, as it is in line with the intent of our recommendation. To avoid the potential for any future misunderstanding, we have modified the wording of the recommendation to clarify its intent. EAC stated that the recommendation in our draft report for the commission to develop transparent technical requirements for the qualifications of its assessors may be confusing because, as we state in our report, only NIST performs a technical accreditation review, as EAC’s review is administrative, non-technical in nature. To avoid the potential for any confusion, we have modified the wording of the recommendation to eliminate any reference to technical qualification requirements. We are sending copies of this report to the Ranking Member of the House Committee on House Administration, the Chairman and Ranking Member of the Senate Committee on Rules and Administration, the Chairmen and Ranking Members of the Subcommittees on Financial Services and General Government, Senate and House Committees on Appropriations, and the Chairman and Ranking Member of the House Committee on Oversight and Government Reform. We are also sending copies to the Chair and Executive Director of EAC, the Secretary of Commerce, the Deputy Director of NIST, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Website at http://www.gao.gov. Should you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to determine whether the National Institute of Standards and Technology (NIST) and the Election Assistance Commission (EAC) have defined effective voting system testing laboratory (VSTL) accreditation approaches, and whether each is following its defined approach. To determine whether NIST has defined an effective accreditation approach, we reviewed documentation from its VSTL accreditation program, such as handbooks and program manuals for the National Voluntary Laboratory Accreditation Program (NVLAP), of which the VSTL accreditation program is a part. In doing so, we compared these documents with applicable statute, guidance, and best practices, primarily the Help America Vote Act of 2002 (HAVA), internationally recognized standards from the International Organization for Standardization (ISO), and federal accreditation program management guidance published by NIST. We compared program documentation with HAVA’s NIST-specific accreditation requirements to determine the extent to which the agency was fulfilling its HAVA responsibilities. We also reviewed program documentation against ISO/IEC 17011, which establishes general requirements for accreditation bodies, and ISO/IEC 17025, which establishes the general requirements for assessing the competence of laboratories, to determine the extent to which NIST’s accreditation program was based on internationally recognized standards. 
We also compared the documentation against NIST publication NISTIR 6014, which contains sections that provide guidance for laboratory accreditation programs, to determine whether the VSTL accreditation program had defined other elements of effective accreditation programs. We also interviewed the voting accreditation program manager to determine how these documents were used to guide the program. To determine whether NIST has followed its defined approach, we examined artifacts from the accreditation assessments of five VSTLs, including one laboratory accredited by NVLAP, but not yet recommended to EAC. This material included completed assessment checklists derived from the accreditation program handbooks, additional documents supporting the assessments, and laboratory accreditation applications and supporting documentation. We compared artifacts from these assessments to program guidance to determine the extent to which the defined process was followed. In addition, we interviewed officials from NIST and NIST contract assessors and officials from EAC and the four EAC-accredited VSTLs to understand how the NIST process was implemented and how it related to the process managed by EAC. To determine whether EAC has defined an effective accreditation approach, we reviewed documentation from its VSTL accreditation program, such as the draft Voting System Test Laboratory Accreditation Program Manual. In doing so, we compared this document with applicable statute and best practices, primarily HAVA and federal accreditation program management guidance published by NIST. We compared the draft program manual with HAVA’s EAC-specific accreditation requirements to determine the extent to which the agency was fulfilling its HAVA responsibilities. We also compared the documentation against the accreditation guidance in NISTIR 6014 to determine whether the accreditation program had defined other elements of effective accreditation programs. We also interviewed the EAC voting program director and executive director to determine how these documents were used to guide the program and to understand EAC’s defined accreditation approach prior to the development of the draft manual. To determine whether EAC has followed its defined approach, we compared artifacts from the accreditation reviews of four VSTLs. We did not review a fifth laboratory, which had been accredited by NVLAP, but not yet recommended to EAC. The materials reviewed included checklists completed by EAC in the absence of an approved program manual. In doing so, we compared the review artifacts to accreditation program requirements, as communicated to the laboratories, to determine the extent to which the agency followed its process, as verbally described to us. We did not compare accreditation submissions or EAC review artifacts with the draft accreditation manual because agency officials stated that the draft manual had not been used in the review of any laboratory. In addition, we interviewed officials from NIST, EAC, and the four EAC- accredited VSTLs to understand how the EAC process was implemented and how it related to the process managed by NIST. To assess data reliability, we reviewed program documentation to substantiate data provided in interviews with knowledgeable agency officials. We have also made appropriate attribution indicating the data’s sources. 
We conducted this performance audit at EAC and NIST offices in Washington, D.C., and Gaithersburg, Maryland, respectively, from September 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Paula Moore, Assistant Director; Justin Booth; Timothy Case; Neil Doherty; Timothy Eagle; Nancy Glover; Dave Hinchman; Rebecca LaPaze; Freda Paintsil; Nik Rapelje; and Jeffrey Woodward made key contributions to this report.
The 2002 Help America Vote Act (HAVA) created the Election Assistance Commission (EAC) and assigned both it and the National Institute of Standards and Technology (NIST) responsibilities for accrediting laboratories that test voting systems. NIST assesses a laboratory's technical qualifications and makes recommendations to EAC, which makes a final accreditation decision. In view of the continuing concerns about voting systems and the important roles that NIST and EAC play in accrediting the laboratories that test these systems, GAO was asked to determine whether each organization has defined an effective approach for accrediting laboratories that test voting systems and whether each is following its defined approach. To accomplish this, GAO compared NIST and EAC policies, guidelines, and procedures against applicable legislation and guidance, and reviewed both agencies' efforts to implement them. NIST has largely defined and implemented an approach for accrediting voting system testing laboratories that incorporates many aspects of an effective program. In particular, its approach addresses relevant HAVA requirements and reflects relevant laboratory accreditation guidance, including standards accepted by the international standards community. However, NIST's defined approach does not, for example, cite explicit qualifications for the persons who conduct accreditation technical assessments, as called for in federal accreditation program guidance. Instead, NIST officials said that they rely on individuals who have prior experience in reviewing such laboratories. Further, even though the EAC requires that laboratory accreditation be based on demonstrated capabilities to test against the latest voting system standards, NIST's defined approach has not always cited these current standards. As a result, two of the four laboratories accredited to date were assessed using assessment tools that were not linked to the latest standards. Moreover, available documentation for the four laboratory assessments was not sufficient to determine how the checklists were applied and how decisions were reached. According to NIST officials, the four laboratories were consistently assessed. Moreover, they said that they intend to evolve NIST's accreditation approach to, for example, clearly provide for sufficient documentation of how accreditation reviews are conducted and decisions are reached. However, they had yet to develop specific plans for accomplishing this. EAC recently developed a draft laboratory accreditation program manual, but this draft manual does not adequately define all aspects of an effective approach, and it was not used in the four laboratory accreditations performed to date. Specifically, while this draft manual addresses relevant HAVA requirements, such as the requirement for the commissioners to vote on the accreditation of any laboratory that NIST recommends for accreditation, it does not include a methodology governing how laboratories are to be evaluated or criteria for granting accreditation. Because the manual was not approved at the time EAC accredited four laboratories, these accreditations were governed by a more broadly defined accreditation review process that was described in correspondence sent to each laboratory and a related document receipt checklist. As a result, these accreditations were based on review steps that were not sufficiently defined to permit them to be executed in a repeatable manner. 
According to EAC officials, including the official who conducted the accreditation reviews for the four laboratories, using the same person to conduct the reviews ensured that the steps performed on the first laboratory were repeated on the other three. However, given that neither the steps nor the results were documented, GAO could not verify this. EAC officials stated that they intend to evolve the program manual over time and apply it to future accreditations and reaccreditations. However, they did not have specific plans for accomplishing this. Further, although EAC very recently approved an initial version of its program manual, this did not occur until after EAC had commented on a draft of this report and GAO had finalized it.
Aliens applying for permanent residency and naturalization are required to submit completed fingerprint cards with their applications. INS is to send each fingerprint card to the FBI to determine if an alien has a criminal history. Aliens with criminal history records may be denied benefits depending on the severity of the offenses. During fiscal year 1993, the FBI ran 866,313 fingerprint checks at a cost to INS of $14.7 million. In addition to the aliens’ fingerprints, the fingerprint cards are to contain background information on the alien, such as name and date of birth. Aliens applying for permanent residency or naturalization are to be scheduled for hearings after they submit their applications. According to INS officials, the hearing dates are to be set to allow adequate time for the FBI to complete criminal history checks and to return the results (for aliens with arrest records) to INS. According to INS officials, aliens can have their fingerprints taken at several locations, including private businesses, the offices of voluntary organizations, police departments, and some INS district offices. INS officials said that prior to the enactment of the Immigration Reform and Control Act of 1986, all INS offices provided fingerprinting services for aliens requesting benefits. However, according to INS officials, most INS offices have discontinued fingerprinting services for a number of reasons, including a lack of staff. After INS accepts aliens’ applications, clerks in INS’ district offices are to separate the fingerprint cards from the applications and mail the cards to the FBI. According to the FBI, it checks the fingerprint cards to determine if data on the alien’s name, gender, date of birth, and the originating INS district office have been completed. If any of the information is missing, the FBI rejects the card and returns it to the originating INS office, if known, with an explanation for the rejection. If the background information on the fingerprint card is complete, the FBI checks the fingerprints against its criminal history record database, which contains the names of over 30 million people. If a match is found, the criminal history record is attached to the fingerprint card and mailed to the INS district office that requested the check. At the request of INS, the FBI does not notify INS if no criminal history record is found. The FBI rejects fingerprint cards if one or more of the prints are illegible and returns the rejected cards to INS offices with an explanation for their rejection. Even if the fingerprints are illegible, the FBI will run a name check comparing the alien’s name, including background information, to the names in its criminal history database. If no positive identification is found, the rejected fingerprint card is returned to the requesting INS district office. INS officials are to submit a new fingerprint card to the FBI if the original fingerprint card is rejected. According to the FBI, it takes about 10 to 14 days to complete a name and fingerprint check for INS (from its receipt of the fingerprint card to the mailing of the results to INS). According to INS officials, INS offices usually receive rejected fingerprint cards or criminal history reports in the mail room. The cards are then taken to the Examinations Branch or Records Department, where they are to be placed in the aliens’ files. Criminal history reports are to be placed in the aliens’ files before their hearings with INS examiners.
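The FBI intake steps described above amount to a simple dispositioning rule: reject incomplete cards with an explanation, run a name check when prints are illegible, and otherwise proceed to a full fingerprint check. The short Python sketch below models that rule for illustration only; the field names and return values are our own, not the FBI’s actual record layout or terminology.

```python
REQUIRED_FIELDS = ("name", "gender", "date_of_birth", "originating_office")

def review_card(card: dict) -> tuple:
    """Return (disposition, reasons) for a submitted fingerprint card.

    Mirrors the completeness check described above: cards with missing
    background information are rejected and returned with an explanation,
    and cards with illegible prints are rejected but still receive a
    name check against the criminal history database.
    """
    missing = [field for field in REQUIRED_FIELDS if not card.get(field)]
    if missing:
        return "rejected", ["missing field: " + field for field in missing]
    if not card.get("prints_legible", True):
        return "rejected, name check only", ["illegible fingerprints"]
    return "fingerprint check", []

# Example: a complete, legible card proceeds to a full fingerprint check.
card = {"name": "DOE, JANE", "gender": "F",
        "date_of_birth": "1960-01-01", "originating_office": "Baltimore",
        "prints_legible": True}
print(review_card(card))  # ('fingerprint check', [])
```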
INS offices are to allow at least 60 days from the date an alien submits an application until the scheduled hearing date, giving the FBI adequate time to complete a criminal records check and return any adverse results and giving INS time to place those results in the alien’s file. In commenting on a draft of this report, INS officials provided some perspective on the significance of failure to check aliens’ fingerprints. According to INS, the ideal situation would be to check the fingerprints of every applicant. Any fingerprint not checked potentially belongs to a criminal or terrorist. However, INS stated that the actual probability that a properly obtained and checked set of fingerprints will result in an alien’s application being denied and the alien being deported is very remote. INS pointed out that only 5.4 percent of fingerprint checks result in the FBI having a record on the alien and only a small portion of the 5.4 percent result in an alien’s application being denied. While INS recognized that not even a relatively small number of aliens should inappropriately receive benefits, it did not want to give the false impression that a criminal or terrorist receives a benefit every time a fingerprint check is not properly conducted. The February 1994 OIG report stated that INS did not verify that fingerprints submitted by applicants for naturalization and permanent residency actually belonged to the aliens who submitted them. The OIG report also pointed out that INS examiners had approved applications because they assumed that applicants had no criminal history records. According to the OIG report, this occurred because the FBI criminal history records were not in the aliens’ files when INS examiners adjudicated the cases. The OIG report also found that INS frequently did not submit new sets of fingerprints to the FBI when the original sets of prints were illegible. The OIG recommended that INS (1) institute procedures to verify that fingerprints submitted to INS by all applicants belong to the applicants and (2) instruct district directors to ensure that fingerprint cards are mailed promptly and criminal history reports are placed in the aliens’ files before final adjudication. INS concurred with the OIG findings and recommendations. In May 1994, INS formed a working group to address problems identified by the OIG. The group is composed of representatives from various INS service components and advisers from the FBI and OIG. To achieve our objectives, we (1) discussed the fingerprinting process with INS officials at INS headquarters in Washington, D.C., and INS’ Baltimore, Chicago, and Philadelphia District Offices and (2) reviewed INS records regarding changes to its fingerprinting process. We also observed fingerprinting procedures at these district offices. We selected the Baltimore and Chicago District Offices because they were included in the OIG report, and, therefore, we could evaluate INS’ responses. To provide perspective on the problems the OIG identified, we selected a district office not included in the OIG review. We selected the Philadelphia District Office because of its proximity to Washington, D.C. Specifically, we evaluated INS’ actions and plans in response to the problems identified in the OIG report, including the timely mailing of fingerprint cards to the FBI, the timely filing of FBI criminal history reports, and the procedures used to follow up on fingerprint cards rejected by the FBI.
Further, we discussed the future impact of automated fingerprint identification systems with INS and FBI officials. We discussed FBI processing procedures for alien fingerprints submitted by INS with FBI officials in Washington, D.C. We relied on information in the OIG report and did not verify data provided by INS and the FBI. We conducted our review from July 1994 to October 1994 in accordance with generally accepted government auditing standards. We obtained oral comments on a draft of this report from INS and the FBI. Their comments are discussed in the agency comments section of this report.

INS’ fingerprinting working group has recommended that INS implement a certification program that would increase control over fingerprint providers. INS headquarters is finalizing a new regulation to establish and implement the certification program. INS expects the regulation to be published by March 1995. According to INS, after a 6-month transition period following publication of the regulation, INS will accept only fingerprints taken by organizations it has certified. Under the proposed certification program, all organizations, except police departments and the U.S. military, that want to provide fingerprint services to aliens will have to apply for INS certification. Fingerprint providers will have to pay an application fee (currently estimated at $370). Under the certification process, INS will require that employees, volunteers, directors, and owners of the organizations providing fingerprint services undergo fingerprint checks to determine if they have criminal histories. Depending on the results of the fingerprint checks, an applicant may not be certified. If an application is accepted, INS will certify the provider for 3 years.

INS plans to require certified fingerprint providers to inspect aliens’ photo identification and have aliens sign their fingerprint cards at the time the fingerprints are taken. The proposed regulation also will require fingerprint providers to be trained in fingerprinting procedures by INS. All approved organizations are to be given a stamp, yet to be developed by INS, that will serve as a method of notifying INS that prints were taken by an approved provider. The stamp is also to allow INS to identify problematic providers, such as those producing large numbers of illegible prints. INS plans to monitor fingerprint providers by having district employees spot-check local certified providers to ensure that INS procedures are being followed. Under the current draft of the regulation, INS will have the authority to revoke fingerprinting privileges if the agency discovers that a provider is not following INS guidelines. INS plans to use the fees from organizations applying for certification to pay for the monitoring program. According to the draft regulation, INS will monitor one-third of all fingerprint providers each year.

INS considered other alternatives to the certification program. The working group rejected the option of having the district offices do the fingerprinting because of resource shortages and the potential for overcrowding in the district offices. Other options included using contractors, police departments, and voluntary groups. The use of contractors was rejected because of the potential difficulty in managing nationwide or regional contracts. Using police departments for fingerprinting was not considered feasible because many police departments do not provide fingerprinting services to the public, including aliens.
Also, according to INS, some police departments were believed to have a higher rate of rejections than other providers. INS decided not to depend on voluntary groups because there are not enough of them to do all the fingerprinting; however, these groups may apply for certification.

INS said that its long-term solution to the fingerprint processing problems will be the use of electronic fingerprinting. In this regard, the FBI is developing an Integrated Automated Fingerprint Identification System (IAFIS) that will allow the electronic submission and processing of fingerprints. IAFIS is expected to dramatically reduce turnaround time for fingerprint processing but is not expected to be fully operational before mid-1998. INS anticipates using IAFIS but will have to purchase hardware to enable the system to transmit information electronically to the FBI. According to INS, it is actively pursuing the use of its own automated fingerprint identification systems to reduce fingerprint fraud and processing time. Also, INS is closely coordinating its efforts with the FBI to ensure compatibility and reduced rejection rates.

The OIG review of four INS district offices found problems with the timely mailing of fingerprint cards to the FBI and the timely filing of criminal history reports returned by the FBI. Our review indicated that these problems existed in varying degrees in the three districts we visited. Also, we found that INS examiners assumed a fingerprint check had been completed if a criminal history record was not found in the district office. In the Chicago District Office, the OIG found that fingerprint cards were allowed to accumulate for 2 to 3 weeks before they were mailed to the FBI. As part of its review, the OIG examined only the files of aliens who had arrest records to determine if INS was properly filing FBI arrest reports in aliens’ files; the OIG used an FBI list of aliens who had criminal history records to identify which alien files to review. The OIG found that 29 percent of the 271 files it reviewed in the four district offices lacked arrest reports. In the Chicago District Office, the OIG found that 78 percent of the alien files it reviewed did not contain the criminal history reports at the time the cases were adjudicated.

In a March 1994 memorandum to all district directors, INS headquarters directed them to ensure that alien fingerprints are sent to the FBI daily. INS also instructed district directors to ensure that criminal history records received from the FBI are placed in the alien files immediately. Although INS headquarters instructed its districts to ensure both the timely mailing of fingerprints and the timely filing of criminal history reports, headquarters had not monitored the districts to ensure that its policies were being properly followed. An INS official said that in the past it had been necessary for headquarters to follow up on its directives to ensure that the policies were being followed.

The Baltimore and Chicago District Offices made some changes to improve the timely mailing of fingerprint cards. For example, Baltimore district officials said that they recently began separating fingerprint cards from naturalization applications and putting those cards in the mail on the same day that the applications were received. Chicago and Philadelphia district officials said that their fingerprint cards may not have been mailed for 1 or 2 weeks.
Chicago District Office officials said they planned to rearrange the routing of applications to expedite the mailing of fingerprint cards to the FBI. In August 1994, the Baltimore District Office began a prototype program in which aliens applying for permanent residency send their applications directly to INS’ Eastern Service Center in St. Albans, VT. The Baltimore District conducts the hearings, and the INS service center processes the fingerprint cards. Criminal history reports are sent to the Baltimore office before the aliens’ hearings. According to a Baltimore district official, although the program is new, all indications are that it has resulted in criminal history reports arriving before hearings.

In the three districts we visited, rejected fingerprint cards and criminal history records were received in the mail rooms and transferred to the Examinations Office for filing. However, district officials and examiners at these three districts said that criminal history reports were not always placed in aliens’ files before their hearings. The criminal history report filing systems varied at the three district offices. Chicago district officials said they were reorganizing their filing system, working toward a goal of filing all criminal history reports directly in aliens’ files. However, at the time of our review, Chicago was using two filing systems: criminal history reports were either filed directly in alien files or placed in a central file. As a result, Chicago examiners had to review both the alien’s file and the centralized file of criminal history reports before an alien’s hearing to determine if the alien had a criminal history record. Baltimore and Philadelphia District Office officials said that criminal history reports were typically filed directly in the aliens’ files within 3 working days after they were received so they would be available to the examiners during the aliens’ interviews.

Examiners at all three districts indicated that they had had incidents in which a criminal history record was not available when the examiner conducted a hearing and granted the benefit to the alien. If examiners become aware of an alien’s criminal history record after the initial hearing, the alien may be interviewed again, depending on the severity of the offense. This can occur after INS has granted the alien benefits; if the results of the fingerprint checks warrant, INS may rescind the previously granted benefit. The examiners said that the fingerprint checks are important and noted that about half of the time the information provided by the FBI criminal history report is the only information they have about an alien’s criminal activity.

Examiners have no means to determine the status of an FBI fingerprint check because, at INS’ request, the FBI does not return the results of all fingerprint checks; INS receives results only if an arrest record is found. At the time of an alien’s hearing, if INS examiners do not find a criminal history report in the alien’s file and it is 60 days after the application date, the examiners assume that a fingerprint check has been completed and that the alien does not have a criminal history record. According to an INS official, INS does not receive negative responses from the FBI because the district offices do not have enough staff to file FBI responses for all aliens.
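The examiners’ default assumption can be expressed as a simple decision rule, which also makes its weakness visible. The sketch below is a toy model under our own assumptions, not INS procedure code: it shows that an empty file 60 days after application is read as a completed, negative check, even though the same empty file could equally mean a rejected card that was never resubmitted or a misfiled report.

```python
from datetime import date, timedelta

def examiner_assumes_no_record(application_date, hearing_date, report_in_file):
    """The rule described above: no report in the file plus 60 elapsed days
    is read as 'check completed, no criminal history record.'"""
    return (not report_in_file) and (hearing_date - application_date >= timedelta(days=60))

application = date(1994, 3, 1)
hearing = application + timedelta(days=75)

# The rule returns True for an empty file regardless of the true cause:
# a completed negative check, a rejected card never resubmitted,
# a misfiled report, or a report sitting in a separate central file.
print(examiner_assumes_no_record(application, hearing, report_in_file=False))  # True
```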
According to the FBI, it could provide INS with the results of all records checks in other formats (e.g., electronically), including the results for aliens for whom it did not find criminal history records. During fiscal year 1993, the FBI rejected and returned to INS 91,827 fingerprint cards, or about 11 percent of all INS submissions, because one or more of the prints were illegible. The OIG determined that INS district offices frequently did not submit new fingerprint cards for those aliens whose fingerprint cards were rejected. Because INS failed to submit new fingerprint cards, in a number of cases applications were adjudicated on the basis of criminal history name checks but without the results of the FBI fingerprint checks. In April 1994, INS headquarters instructed district directors to ensure that new fingerprint cards are submitted if the initial card is rejected. However, according to INS officials at the three districts we visited, these district offices rarely submitted new fingerprint cards if the initial card was rejected.

INS’ decision to implement a certification program for fingerprint providers, with the proposed procedures for ensuring fingerprint integrity, should, if properly implemented, address the OIG’s first recommendation. The program should help ensure that the fingerprints aliens submit with applications are their own. Further, INS plans to periodically monitor the providers, which should help to maintain the integrity of the fingerprint process.

INS headquarters had directed its district offices to submit fingerprint cards to the FBI and to file FBI criminal history reports in aliens’ files in a timely manner. However, problems remained to varying degrees in the Chicago and Philadelphia Districts. Also, officials at the three district offices said they rarely submitted new fingerprint cards if the initial cards were rejected by the FBI. According to the OIG report and INS officials, some aliens’ applications had been approved because the examiners did not receive, and therefore were not aware of, the aliens’ criminal history records. They said that if the examiners had been aware of the information contained in the criminal history records, the applications could have been denied. INS had told the district offices to correct the problems but had not monitored the districts’ efforts to follow those instructions. Without some form of monitoring, INS cannot be certain that the district offices will correct the problems.

At INS’ request, the FBI returned information to districts only if an alien had a criminal history record or if the fingerprints were rejected. As a result, INS was not notified if a fingerprint check was successfully completed and no criminal record was found. If no information was in an alien’s file or in a central location, examiners assumed that the alien did not have a criminal history record. As noted earlier, this assumption can be incorrect.

We recommend that the Attorney General direct the Commissioner of INS to (1) monitor districts’ progress to ensure that they comply with INS headquarters’ directives to submit fingerprint cards to the FBI on a timely basis, file FBI arrest reports in aliens’ files immediately, and submit new fingerprint cards to replace those rejected by the FBI and (2) obtain from the FBI the results of all record and fingerprint checks, including the results for aliens who do not have criminal history records, and make those results available to the examiners before the aliens’ hearings.
On November 9, 1994, we obtained oral comments on a draft of this report separately from INS and FBI officials. We met with INS representatives, including the Acting Associate Commissioner for Examinations, who is responsible for INS’ adjudication of applications that require aliens to be fingerprinted. We also met with FBI officials, including the Deputy Assistant Director of the Criminal Justice Information Services Division, who responds to INS’ requests for criminal records checks of aliens. They agreed with our findings, conclusions, and recommendations and provided clarifications and technical corrections, which we incorporated in the report.

We are providing copies of this report to the Attorney General; the Commissioner of INS; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are James M. Blume, Assistant Director; Mark A. Tremba, Evaluator-in-Charge; and Jay Jennings, Assignment Manager. If you need any additional information or have any further questions, please contact me on (202) 512-8777. Laurie E. Ekstrand, Associate Director, Administration of Justice Issues
Pursuant to a congressional request, GAO reviewed the Immigration and Naturalization Service's (INS) fingerprinting procedures for aliens applying for immigration and naturalization benefits, focusing on: (1) INS efforts to ensure that the fingerprints aliens submit are their own; (2) options INS considered to improve the fingerprinting process; (3) the future impact of automated fingerprinting identification systems; (4) INS efforts to ensure timely mailing of fingerprint cards to the Federal Bureau of Investigation (FBI) and timely filing of FBI criminal history reports; and (5) INS actions to follow up on fingerprint cards rejected because of illegibility or incomplete information. GAO found that: (1) INS plans to implement a certification and training program in 1995 for fingerprint providers and establish fingerprinting procedures to improve control over the fingerprinting process; (2) INS plans to monitor fingerprint providers at least every 3 years to ensure that they follow established procedures; (3) INS has decided not to have district offices do the fingerprinting due to a lack of resources and potential overcrowding at the offices; (4) INS has also rejected the option of having contractors, police departments, and volunteer groups do the fingerprinting; (5) INS plans to use a FBI-developed automated fingerprint identification system to electronically transmit information and reduce processing time; (6) INS has instructed district directors to correct problems with the mailing of fingerprint cards to FBI, filing FBI criminal history reports, and resubmission of rejected fingerprint cards, but it has not monitored the districts' progress in correcting these problems; (7) INS examiners sometimes approve an alien's application without a criminal history check because they assume one has been done even if it is not in the alien's file; and (8) INS examiners sometimes cannot determine if FBI fingerprint checks have been completed because FBI only returns reports when criminal histories are found.
While the term “disaster assistance” brings to mind the aid provided to communities and individuals after a disaster has struck, the scope of federal disaster assistance is broader. Disaster assistance involves aid provided both before and after disasters, and it involves many federal agencies besides FEMA, including the U.S. Army Corps of Engineers (the Corps), the Small Business Administration (SBA), and the Departments of Agriculture, Transportation, the Interior, Commerce, and Housing and Urban Development. Moreover, these and other agencies may provide assistance under a number of different statutory authorities. Because of the numerous agencies and programs involved in providing disaster assistance, controlling federal disaster assistance costs is a difficult challenge.

FEMA is an independent agency charged with helping states and localities address natural disasters. Under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act), FEMA provides financial and technical assistance to communities and individuals. In its role as coordinator of federal assistance, FEMA may request that other federal agencies provide a specific type of assistance. FEMA’s “blueprint” for the federal response to disasters, the Federal Response Plan, is a cooperative agreement signed by 26 federal agencies and the American Red Cross.

Under the Comprehensive Emergency Management concept—a concept that assumes all disasters, regardless of their size, require the same basic government strategies—disaster management is viewed as consisting of four phases, of which the first two occur before a disaster strikes. Preparedness activities are designed to help communities and governments prepare for dealing with natural disasters; examples include developing response plans, identifying the location and availability of needed resources, planning for the evacuation of residents, and training emergency officials. Mitigation activities are undertaken to reduce the losses from disasters or prevent losses from occurring; examples include constructing dams and flood control projects, retrofitting structures to withstand earthquakes, and developing land-use plans and zoning ordinances to discourage development of hazardous areas. Response activities are accomplished during or immediately following a disaster; examples include providing temporary shelter, food, and medical supplies and meeting other urgent needs of victims. Recovery activities are those that help individuals and communities rebuild following a disaster, for example, the repair or reconstruction of public facilities such as roads, water distribution systems, government buildings, and parks.

Traditionally, the role of the federal government has been to supplement the emergency management efforts of state and local governments, voluntary organizations, and private citizens; federal policy generally assumes that states (and units of local government) maintain primary responsibility. The Stafford Act contains several statements explicitly acknowledging the primary role of states. Under the act, postdisaster assistance may be provided only if the President, at the request of a state governor, declares that an emergency or disaster exists and that federal resources are required to supplement state and local resources. For a number of reasons, including a sequence of unusually large and costly disasters, federal disaster assistance costs have increased in recent years.
Much of this spending is overseen by FEMA—obligations from FEMA’s Disaster Relief Fund totaled about $3.6 billion in fiscal year 1996 and about $4.3 billion in fiscal year 1997—but many other federal agencies are involved as well. In our work for the Senate Task Force, we compiled financial data from many federal agencies concerning their disaster assistance programs and activities—which encompass all phases of emergency management—for fiscal years 1977 through 1993. (Fiscal year 1993 was the latest complete fiscal year at the time we did our work.) However, with limited exceptions, we have not done work over the past few years that would have provided us with similar data for fiscal years 1994 forward, and thus we do not know how overall costs, or their distribution among emergency management phases, may have changed.

According to data compiled for the Senate Task Force, postdisaster recovery accounted for by far the largest portion of federal disaster assistance (in constant 1993 dollars)—about $87 billion, almost three-quarters of the $119.7 billion in total federal disaster assistance for fiscal years 1977 through 1993. Of the $87 billion, about $55.3 billion consisted of various disaster recovery loans made primarily by SBA and the U.S. Department of Agriculture (USDA); because some portion of the loans will be repaid, the entire loan amount is not necessarily a federal cost. Of the remaining $31.7 billion, FEMA accounted for about one-third—$10.2 billion. Other significant amounts of disaster assistance included the nearly $4.1 billion obligated by the Department of Transportation for repairs to federal-aid highways and the $16 billion obligated by USDA to compensate farmers for production losses from disasters.

Disaster mitigation accounted for the second-largest category of federal disaster assistance obligations—about $27 billion, or 22 percent. As we noted in our statement for this Subcommittee in late January, FEMA provides mitigation assistance under several programs and authorities and has taken a strategic approach to mitigation. However, the large majority—about $25 billion—of federal mitigation obligations during fiscal years 1977 through 1993 was made by the Corps of Engineers for the design, construction, operation, and maintenance of flood control and coastal erosion control facilities. Other federal disaster mitigation efforts include (1) establishing floodplain management and building standards required by FEMA’s National Flood Insurance Program and (2) conducting earthquake research and related activities under the National Earthquake Hazards Reduction Program, jointly administered by FEMA, the U.S. Geological Survey, the National Institute of Standards and Technology, and the National Science Foundation.

The remainder of the total federal disaster assistance reported to the Senate Task Force was obligated for immediate responses to disasters (about $3.4 billion) and for preparedness activities (about $2.3 billion). In both cases, FEMA accounted for the majority of the costs.
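The reported category totals are internally consistent, as the following quick check of the figures above shows (amounts in billions of constant 1993 dollars; the percentage breakdown is ours, rounded).

```python
# Consistency check of the obligations reported to the Senate Task Force,
# fiscal years 1977-1993, in billions of constant 1993 dollars.
obligations = {
    "recovery": 87.0,
    "mitigation": 27.0,
    "response": 3.4,
    "preparedness": 2.3,
}
total = sum(obligations.values())
print(f"total: {total:.1f}")  # 119.7, matching the reported overall figure
for phase, amount in obligations.items():
    print(f"{phase}: {amount / total:.1%}")
# recovery: 72.7% ("almost three-quarters"); mitigation: 22.6% ("22 percent")
```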
The occurrence of large disaster assistance costs in the 1990s has been attributed to a number of factors. Since 1989, the United States has experienced a sequence of unusually large and costly disasters, including Hurricane Hugo, the Loma Prieta earthquake, Hurricane Andrew, Hurricane Iniki, the 1993 Midwest floods, and the Northridge earthquake. The close occurrence of such costly disasters in the United States is unprecedented. Furthermore, increases in population and development, especially in hazard-prone areas, increase the potential losses associated with these disaster events. For example, FEMA expects that by the year 2010 the number of people living in the most hurricane-prone counties (36 million in 1995) will double.

For several of these large disasters, the federal government has borne a larger-than-usual share of the costs. The Stafford Act provides that many disaster relief costs are to be shared by the federal government with the affected states and localities. For example, the federal share of funding is at least 75 percent for public assistance projects (to repair or replace disaster-damaged public and nonprofit facilities). Following several more recent disasters, the President has raised the federal share for some of these costs, for example, to 90 percent for the Northridge earthquake and to 100 percent for Hurricane Andrew.

There has also been an upward trend in the annual number of presidential disaster declarations. The Stafford Act authorizes the President to issue major disaster or emergency declarations and specifies the types of assistance the President may direct federal agencies to provide. For fiscal years 1984 through 1988, the average number of such declarations was 26 per year; for fiscal years 1989 through 1993 and fiscal years 1994 through 1997, the averages were nearly 42 and 49 per year, respectively.

Additionally, more facilities have become eligible for disaster assistance. Over the years, the Congress has generally increased eligibility through legislation that expanded the categories of assistance and/or specified the persons or organizations eligible to receive assistance. For example, 1988 legislation expanded the categories of private nonprofit organizations that are eligible for FEMA’s public assistance program. FEMA can influence program costs by establishing and enforcing procedures and criteria for assistance within the eligibility parameters established in statutes. FEMA’s Inspector General reported in 1995 that the agency’s administrative decisions on eligibility for disaster assistance—such as the threshold for determining whether to repair or replace a damaged public facility—may have expanded federal disaster assistance costs. We have recommended that FEMA improve program guidance and eligibility criteria in part to help control these costs.

According to the Senate Task Force report, federal budgeting procedures for disaster assistance may have influenced the amounts appropriated for disaster assistance, because disaster relief appropriations have often been designated as “emergency” spending. If the Congress and the President agree to designate appropriations as emergencies, the appropriations are excluded from the strict budget disciplines that apply to other spending—specifically, the discretionary spending limits under the Balanced Budget and Emergency Deficit Control Act of 1985, as amended by the Budget Enforcement Act of 1990. As noted in the task force report, funds for natural disasters and other emergencies will undoubtedly be needed from time to time in amounts that are impossible to predict and thus difficult to budget for. On the other hand, one criticism of the procedures for emergency spending is that the assistance provided is more “generous” than would be the case if it had to compete with other spending priorities.
Approaches for lowering federal disaster assistance costs include (1) establishing more explicit and/or stringent criteria for providing federal disaster assistance, (2) emphasizing hazard mitigation through various incentives, and (3) relying more on insurance. Within these approaches, specific proposals—made by various entities, including the National Research Council, the National Performance Review, and FEMA’s Inspector General—vary. The extent to which the implementation of these approaches would lower the costs of federal disaster assistance is unknown.

One approach to lowering disaster assistance costs is to establish more explicit and/or stringent criteria for providing federal disaster assistance. Currently, much assistance is contingent on the President’s declaration of an emergency or major disaster under the Stafford Act, 42 U.S.C. 5170, which provides that requests for declarations (and therefore federal assistance) “shall be based on a finding that the disaster is of such severity and magnitude that effective response is beyond the capabilities of the State and the affected local governments and that federal assistance is necessary.” State governors request such declarations; FEMA gathers and analyzes facts and makes a recommendation to the President. However, the Stafford Act does not prescribe specific criteria to guide FEMA’s recommendation or the President’s decision. FEMA considers a number of factors, such as the number of homes destroyed or sustaining major damage, but there is no formula for applying them quantitatively. The flexibility and generally subjective nature of FEMA’s criteria have raised questions about the consistency and clarity of the disaster declaration process.

FEMA’s Inspector General reported in 1994 that (1) neither a governor’s findings nor FEMA’s analysis of capability is supported by standard factual data or related to published criteria and (2) FEMA’s process does not ensure equity in disaster declarations because it does not always review requests for declarations in the context of previous declarations. In response to specific congressional concerns about the process, we have reviewed and reported on the potential effects of two factors—political party affiliation and the nature of the affected area. In 1989, we reported that, for disaster declaration requests made in fiscal year 1988 and a portion of fiscal year 1989, we found no indication that political party affiliation affected the President’s decisions. In 1995, we reported that FEMA’s disaster declaration policies and procedures do not differ with respect to whether the affected area is considered rural or urban.

More explicit criteria for disaster declarations could provide a number of potential benefits. A 1993 National Performance Review report concluded that “clear criteria need to be developed for disaster declarations to help conserve federal resources.” Additionally, we previously reported that disclosing the process for evaluating requests would help state and local governments decide whether they had a valid request to make, enable them to provide more complete and uniform information, and minimize doubts as to whether their requests were treated fairly and equitably.

A second approach to reducing costs is to emphasize hazard mitigation through incentives. Mitigation consists of taking measures to prevent future losses or to reduce the losses that might otherwise occur from disasters.
For example, building codes that incorporate seismic design provisions can reduce earthquake damage. In hearings before the U.S. Senate, the Director of the California Office of Emergency Services testified that structures designed and built to the seismic design provisions of the state’s Uniform Building Code withstood the forces of the Loma Prieta earthquake with little or no damage, while structures built to lesser code provisions suffered extensive damage. Additionally, the floodplain management and building standards required by the National Flood Insurance Program may reduce future costs from flooding. For example, FEMA estimates that the building standards that apply to floodplain structures prevent more than $500 million in flood losses annually. At a September 1993 congressional hearing, the FEMA Director said that structures built after communities join the program suffer 83 percent less damage than those built before the standards were in place.

There are a number of approaches that can provide federal incentives to encourage hazard mitigation. Our March 1995 testimony discussed recommendations by FEMA, the National Research Council, and the National Performance Review promoting the use of federal incentives to encourage hazard mitigation. For example, specific initiatives for improving earthquake mitigation included linking mitigation actions with the receipt of federal disaster and other assistance and providing federal income tax credits for investments to improve the performance of existing facilities. Furthermore, to the extent that the availability of federal relief inhibits mitigation, amending postdisaster federal financial assistance could help prompt cost-effective mitigation. The National Performance Review, for example, recommended providing relatively more disaster assistance to states that had adopted mitigation measures than to states that had not. These and other proposals would require analysis to determine their relative costs and effectiveness.

FEMA’s September 1997 strategic plan, entitled “Partnership for a Safer Future,” states that the agency is concentrating its activities on reducing disaster costs through mitigation because “no other approach is as effective over the long term.” The agency’s hazard mitigation efforts include grants and training for state and local governments; funding for mitigating damage to public facilities and for purchasing and converting flood-prone properties to open space; federal flood insurance; and programs targeted at reducing the loss of life and property from earthquakes and fires. However, as we noted in our previous testimony for the Subcommittee, quantifying the effects of mitigation efforts can be difficult. Specifically, the extent to which cost-effective mitigation projects will result in federal dollar savings is uncertain, as it depends on the actual incidence of future disaster events and the extent to which the federal government would bear the resulting losses.

A third approach to reducing disaster assistance costs is to rely more on insurance. Insurance provides a way of “prefunding” disaster recovery because premiums provide a source of funds for compensating the victims of disaster losses. Like other forms of disaster relief, insurance spreads the burden of the losses borne by disaster victims over a large number of individuals, potentially reducing the effect of the disaster on the victims without substantially increasing the burden borne by those who are otherwise unaffected.
Some studies of disaster assistance programs have concluded that providing assistance through insurance can be more efficient and more equitable than providing it through other means. As early as 1980, we reported that the combination of insurance and mitigation measures can be a fairer and more efficient means of providing federal disaster assistance than other forms, such as loans and grants. Over the years the Congress has considered all-risk insurance programs, under which homeowners would purchase a single, comprehensive natural hazard policy and would be able to file claims for damage to their property whenever the damage was caused by any type of natural hazard. Such an insurance program—whether operated by the private insurance industry, the government, or both—would have to be structured and priced carefully to avoid increasing federal liabilities. In previous testimony, we expressed concerns about the ability of proposed primary insurance and reinsurance programs to fairly and efficiently spread insurance risks among policyholders, insurance companies, and the government.

In summary, Mr. Chairman, the growth in the size and number of federally declared disasters in recent years is unprecedented, and there is the potential for continuing increases in disaster assistance costs. We look forward to working with the Subcommittee as you consider the various proposals to help contain these costs. This concludes my prepared remarks. We will be pleased to respond to any questions that you or other Members of the Subcommittee might have.

Related GAO Products:
Disaster Assistance: Information on Federal Disaster Mitigation Efforts (GAO/T-RCED-98-67, Jan. 28, 1998).
Disaster Assistance: Guidance Needed for FEMA’s “Fast Track” Housing Assistance Process (GAO/RCED-98-1, Oct. 17, 1997).
Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance (GAO/RCED-96-113, May 23, 1996).
Natural Disaster Insurance: Federal Government’s Interests Insufficiently Protected Given Its Potential Financial Exposure (GAO/T-GGD-96-41, Dec. 5, 1995).
Disaster Assistance: Information on Declarations for Urban and Rural Areas (GAO/RCED-95-242, Sept. 14, 1995).
Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs (GAO/T-RCED-95-140, Mar. 16, 1995).
GAO Work on Disaster Assistance (GAO/RCED-94-293R, Aug. 31, 1994).
Federal Disaster Insurance: Goals Are Good, But Insurance Programs Would Expose the Federal Government to Large Potential Losses (GAO/T-GGD-94-153, May 26, 1994).
Pursuant to a congressional request, GAO discussed several approaches for lowering the costs of federal disaster assistance, focusing on: (1) the components and magnitude of federal disaster assistance costs; and (2) approaches that could potentially lower those costs in the future. GAO noted that: (1) federal disaster assistance costs billions of dollars annually; (2) according to data compiled for the Senate Bipartisan Task Force on Funding Disaster Relief, federal agencies obligated about $119.7 billion (in constant 1993 dollars) for disaster assistance during fiscal years (FY) 1977 through 1993, the majority of which was for post-disaster assistance; (3) the Federal Emergency Management Agency accounted for about 22 percent of this amount, with the remainder spread across many federal agencies, including the Small Business Administration, the Army Corps of Engineers, and the Department of Agriculture; (4) the federal government provided assistance for an average of nearly 37 disasters or emergencies annually from FY 1977 through FY 1997; (5) the growth in disaster assistance costs in the 1990s has been attributed to a number of factors, including: (a) a sequence of unusually large and costly disasters, for which the federal government has occasionally borne a larger-than-usual share of the costs; (b) a general increase per year in the number of presidential disaster declarations; and (c) a gradual expansion of eligibility for assistance, through legislation and administrative decisions; (6) approaches for lowering federal disaster assistance costs include: (a) establishing more explicit or stringent criteria for providing federal disaster assistance; (b) emphasizing hazard mitigation through various incentives, and (c) relying more on insurance; (7) within these approaches, specific proposals vary; and (8) the extent to which implementation of these proposals would lower the costs of federal disaster assistance is unknown.
Under the SAFE Port Act, DNDO is required, among other functions, to develop, in coordination with other federal agencies, an enhanced Global Nuclear Detection Architecture. DNDO serves as the primary entity in the United States to develop programs and initiatives related to the Global Nuclear Detection Architecture, identify any gaps in it, and improve radiological and nuclear detection capabilities. DNDO also assists DHS agencies with implementing the domestic portion of the Global Nuclear Detection Architecture, including deployment of radiation detection equipment at ports of entry along the U.S. border. Accordingly, DNDO acquires and deploys RPMs and provides for associated scientific and technical expertise, with assistance from the Department of Energy’s national laboratories, including PNNL and Los Alamos National Laboratory. CBP operates the RPMs and maintains them after their first year of deployment. As of August 2016, CBP operates approximately 1,300 RPMs, which can detect radiation but cannot identify the type of material causing an alarm, as well as almost 2,700 handheld radiation detectors, which can identify the sources of radiation.

RPMs are the primary means by which CBP scans cargo and vehicles at U.S. ports of entry for nuclear and radiological material. Before leaving a port of entry, most cargo containers and vehicles first travel through an RPM. (See fig. 1.) If an alarm is triggered, the cargo container or vehicle is directed to a secondary inspection area for further inspection and clearance by a CBP officer using a handheld radiation detector that can identify the source of the radiation. (See fig. 2.) RPM alarms can result from naturally occurring radioactive materials (NORM), which are often found in certain consumer and trade goods, such as ceramics, fertilizers, and granite tile. RPM alarms from NORM are termed “nuisance” alarms by DHS and require CBP officers to spend time determining that the source of the alarm is NORM and not nuclear or radiological threat materials before the cargo container or vehicle can leave the port. According to DHS, although fewer than 2 percent of cargo containers have historically set off an RPM alarm, with more than 20 million cargo containers and more than 100 million vehicles passing through the nation’s ports annually, nuisance alarms still number in the hundreds of thousands per year.

To reduce nuisance alarms and decrease the need for secondary scanning by CBP officers, in 2014 and 2015, CBP developed and deployed a new set of RPM alarm threshold settings, with support from DNDO and scientists at PNNL. This upgrade, which is referred to as revised operational settings, is implemented during calibration. It optimizes RPM effectiveness by tuning the threshold settings of individual RPMs to account for local background radiation and common NORM passing through the RPMs. These new threshold settings result in a similar sensitivity to materials that pose a threat but significantly reduce nuisance alarms from NORM. According to CBP, as of the end of fiscal year 2015, DNDO and CBP had upgraded RPMs at 28 seaports and 15 land border crossings, which has reduced nuisance alarms by more than 75 percent, on average, at these sites.
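The workload implied by these figures can be approximated as follows. This back-of-the-envelope sketch uses only the container traffic and alarm-rate figures above; the assumption that the alarm rate applies uniformly, and that the 75 percent reduction applies across the board, is ours and is for illustration only.

```python
# Rough scale of the nuisance-alarm workload implied by the figures above.
containers_per_year = 20_000_000   # "more than 20 million" containers
alarm_rate = 0.02                  # "fewer than 2 percent" trigger an alarm

alarms = containers_per_year * alarm_rate
print(f"container alarms per year: ~{alarms:,.0f}")        # ~400,000

# Applying the average 75% nuisance-alarm reduction at upgraded sites:
print(f"after revised operational settings: ~{alarms * 0.25:,.0f}")  # ~100,000
```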
Before fiscal year 2015, DHS had acquired 1,706 RPMs, including 384 from Ludlum Measurements, Inc. (Ludlum) and 1,322 from Leidos Holdings, Inc. (Leidos). The Ludlum RPMs were the first to be acquired and were deployed beginning in fiscal year 2003, and the Leidos RPMs were first acquired and deployed in fiscal year 2004. (See table 1.) Ludlum and Leidos RPMs have comparable designs—both use specialized detection panels made of plastic as well as helium-filled tubes—and provide similar technical capacity to detect nuclear and radiological materials. However, because of limitations in the design of the plastic panels in the Ludlum RPMs that DHS acquired, these RPMs cannot be upgraded with the revised operational settings and thus do not have threat discrimination capabilities equal to those of the upgraded Leidos RPMs. Threat discrimination refers to the ability of the RPM to distinguish between radiation emitted from NORM and radiation emitted from radioactive materials that could be used in a nuclear device or a dirty bomb.

As of August 5, 2016, DHS had 1,386 RPMs deployed—including 277 Ludlum RPMs and 1,109 Leidos RPMs—with the remaining RPMs in storage. Of the deployed RPMs, 885 were at northern and southern border crossings; 320 were at seaports; and 181 were at airports, ferry terminals, or other facilities. (See table 2.) According to DHS officials, the actual number of deployed RPMs varies on a day-to-day basis in response to port reconfigurations and expansions and because DHS periodically decommissions RPMs that are infrequently used.

As of August 2016, 100 percent of cargo containers and vehicles entering land border crossings and nearly 100 percent of cargo containers passing through seaports are scanned by RPMs, according to DNDO. However, as we found in 2012, significant technological and logistical challenges exist for scanning cargo at airports and international rail ports of entry, and as of August 2016, some cargo containers that enter through these pathways are not being scanned by RPMs. Scanning of air cargo is primarily carried out with handheld radiation detectors, and scanning of international rail cargo is mainly conducted with such detectors and radiographic imaging systems. Radiographic imaging systems use gamma rays or X-rays to produce an image of cargo to detect anomalies, such as high-density material or hidden cargo, but they do not detect radioactivity.

DHS’s assessment of its RPM fleet shifted over time, and as a result DHS has changed the focus of its RPM replacement strategy. More specifically, in fiscal years 2014 and 2015, DHS began planning to replace the full fleet based on a conservatively estimated 13-year service life. However, recent DNDO studies indicate the fleet can remain operational until at least 2030, with proactive maintenance and sufficient availability of spare parts, so DHS has refocused its strategy on selective replacements to improve efficiency.

Before the RPMs began to reach the end of their estimated service life, DNDO commissioned a field study assessing how well the RPM systems were aging and a separate study specifically assessing the aging of the RPM’s key component—the plastic detection panels. These studies were published in 2011, and both concluded that the original 10-year RPM service life estimate should be updated. One of the studies concluded that the plastic detection panels could last up to 20 years—with 13 years as a conservative estimate of the panels’ life spans. The other study concluded that the panels would last between 15 and 20 years and that the RPMs could be sustained for an extended period with routine inspections, maintenance, and repairs as needed.
According to DNDO officials, after these studies were published, DNDO began using a 13-year service life estimate for RPM life-cycle planning and budgeting purposes. Echoing one of the studies, a DNDO management official told us that the 13-year RPM service life was considered a conservative estimate that resulted in an acceptable level of risk to the program. The official further stated that although the 13-year estimate contained a great deal of uncertainty, DHS was trying to ensure that the necessary steps would be taken to keep the fleet fully functional. DNDO used the 13-year service life estimate for its RPM program planning and budget justifications throughout fiscal years 2014 and 2015, including in the following cases:

- DNDO’s RPM Program Management Plan for fiscal years 2014 through 2019, published in January 2014, stated that more than 500 of the then-deployed RPMs would reach the end of their service lives by 2019 and would require either refurbishment or replacement. The plan outlined DNDO’s efforts to extend the service life of the RPMs while working on a strategy for RPM replacement.

- DHS’s budget justification for fiscal year 2015 referred to RPMs exceeding the end of their estimated service lives as part of a discussion of RPM program needs.

- DHS’s June 2014 Global Nuclear Detection Architecture Strategic Plan of Investments highlighted a need for significant funding increases to replace RPMs as they reached the end of their estimated service lives. The plan projected significant decreases in RPM scanning coverage—the percentage of vehicles or cargo containers scanned—beginning in fiscal year 2016 based on the service life estimate, anticipated port expansions and reconfigurations, and projected budget levels. Specifically, the plan projected that RPM scanning coverage at seaports would fall from 100 percent in fiscal year 2014 to 69 percent in fiscal year 2019. For land border crossings, it projected a decrease from 98 percent scanning coverage to 39 percent over the same period.

- In its February 2015 update to its RPM Program Management Plan, DHS again emphasized the need for RPM replacements because RPMs would begin reaching the end of their estimated 13-year service lives. In addition, the plan stated that projected budget levels through fiscal year 2019 were sufficient for the program to begin to replace the RPMs reaching the end of their service life. The plan also highlighted the importance of efforts to extend RPM service life to retain the ability to perform required scanning.

- DHS’s budget justification for fiscal year 2016 referred to the 13-year RPM service life as it called for increased funding for replacements based on service life concerns. The budget justification stated that funding increases would address the sustainability of aging RPMs and ensure compliance with the SAFE Port Act as DHS formulates a long-term strategy for the replacement of RPMs at the end of their life cycle.

Furthermore, in October 2015, a senior DNDO official told us that DNDO was using 13 years as a conservative estimate for RPM service life. Fiscal year 2016 House and Senate appropriations committee reports discussed the need for RPM replacement and specifically referenced RPM aging issues. The Consolidated Appropriations Act, 2016, increased funding for the acquisition and deployment of radiological detection systems, including RPMs, to $113 million from $73 million the previous year. This money can be used through fiscal year 2018.
According to DHS data, as of August 2016, DHS’s RPM fleet remains almost 100 percent operational, even though almost 20 percent of the RPMs have reached the end of their original estimated 13-year service life and another 40 percent are within 2 years of that date. DNDO and PNNL officials we interviewed told us that, based on more recent studies and analysis, they believe the RPM fleet can last at least 20 years longer if it is properly maintained and spare parts remain available.

Specifically, a January 2015 study by CBP’s Data Analysis Center that examined 8 years of RPM performance data concluded that the fleet is in acceptable condition and will operate effectively for several years. The study also noted that the functionality of the plastic detector panels does not degrade significantly over time. A May 2015 DHS study—carried out expressly to determine the best alternatives for replacing RPMs as they began to reach the end of their service lives—concluded that, assuming proper maintenance and parts availability continue, the concept of RPM life span is not useful, in part because there has been no measurable operational degradation of the RPM fleet. The study noted that CBP intends to continue to maintain RPMs at nearly 100 percent operability, resulting in no loss of functionality as they age. The study found no reason to believe that parts would become obsolete or unavailable and noted that, under RPM maintenance contracts, parts are replaced or repaired as soon as they are observed to have failed. The study concluded that the RPMs could operate until 2030 at current levels of functionality and stated that any decision to replace the systems should be predicated on the need for improved functionality rather than on concerns over aging. Underlying this conclusion was an assumption that maintenance costs would not increase appreciably over the period evaluated.

DNDO officials we interviewed explained that CBP has maintenance contracts with the RPM vendors that ensure the fleet can remain nearly 100 percent operational for many years. Furthermore, CBP and PNNL track maintenance data for trends in component failure rates that might indicate problems with the fleet or significant maintenance cost increases, and officials told us that no troubling trends exist. In March 2016, Leidos confirmed to CBP in writing that it is capable of and committed to supplying parts for its RPMs until at least 2021 and expects to be able to do so through 2026. In addition, officials told us that they have not faced any barriers to replacing RPM components to date. For instance, CBP has replaced more than 1,100 computer interface boards and more than 1,200 vehicle presence sensors since 2007. CBP officials we interviewed indicated that they have no reason to believe that parts will not remain available as long as the RPMs are still in use.

Based on the new conclusions about RPM service life, in 2016 DNDO changed the focus of its strategy from replacing the RPM fleet because of aging to selectively replacing RPMs at specific sites to gain operational efficiencies, as discussed later in this report. This change is reflected in DHS’s budget justification for fiscal year 2017, in which DHS focused its funding request on the need to replace some upgraded Leidos RPMs to gain further operational efficiencies. Consistent with the stated planning assumptions, DNDO has not used funds received in fiscal year 2016 for RPM acquisitions.
DHS plans to replace legacy RPMs at selected ports of entry with RPMs that have greater threat discrimination capabilities to gain operational efficiencies and reduce labor needs while continuing to meet detection requirements. Specifically, from fiscal year 2016 through fiscal year 2018, DHS is planning to replace more than 120 Ludlum RPMs at northern U.S. land border crossings with upgraded Leidos RPMs from existing inventory. The Ludlum RPMs are among the oldest in the fleet, with most acquired in fiscal years 2002 and 2003. Replacing them with upgraded Leidos RPMs would allow for improved threat discrimination—an RPM’s ability to distinguish between radiation emitted from NORM and radiation emitted from materials that pose a threat—which, according to DHS officials we interviewed, is expected to minimize CBP officer time spent responding to nuisance alarms. DHS therefore expects to be able to redirect some CBP officers to other critical law enforcement duties, such as interdiction of smuggled currency, illicit drugs, or other contraband, at border crossings where upgraded Leidos RPMs are installed. CBP officials told us in May 2016 that DHS will study the operations at each land border crossing before deciding on RPM replacement to ensure that the benefits outweigh the costs.

DNDO and CBP replaced Ludlum RPMs with upgraded Leidos RPMs at two sites—one seaport and one land border crossing—in fiscal year 2015. In total, 22 Ludlum RPMs were replaced with 20 upgraded Leidos RPMs. DNDO did not carry out a cost-benefit analysis before these RPM replacements. CBP officials explained that one site was the last remaining seaport where Ludlum systems were deployed. These officials explained that the second site resulted from a public-private partnership agreement initiated by a port authority to address operational concerns, in which the private entity paid for the majority of the labor costs associated with the replacement. DNDO provided us with documentation of a cost-benefit analysis for a third site, a land border crossing where CBP is considering replacing Ludlum RPMs with upgraded Leidos RPMs, and indicated that DNDO is in the final stages of completing an analysis addressing the remaining northern land border crossings.

GAO Recommended That the Department of Homeland Security (DHS) Examine Use of Optimization Techniques to Maximize Radiation Portal Monitor (RPM) Potential

In 2009, we examined DHS’s Domestic Nuclear Detection Office’s (DNDO) development and testing of a new type of RPM and, among other things, found that DNDO had not completed efforts to fine-tune the current fleet of RPMs to provide greater sensitivity to threat materials. We recommended that DNDO do so before spending billions of dollars acquiring new RPMs. (See GAO-09-655.) Beginning in 2014, DHS’s U.S. Customs and Border Protection (CBP) took action to upgrade some of its RPMs by optimizing RPM threshold settings. CBP estimates that the upgraded RPMs prevent more than 200,000 alarms from naturally occurring radioactive materials per year at the sites where the upgrades have been implemented, allowing 88 CBP officers to be redirected to other high-priority mission areas, according to CBP officials. In addition, 70 percent of the sites where the upgrades were implemented reported safety improvements, according to an agency survey. These improvements were attributed to such things as reduced congestion and better traffic flow.
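To give a sense of the scale involved, the sketch below converts prevented alarms into officer time. The 200,000 figure comes from CBP’s estimate above; the minutes-per-alarm value and the conversion are our own assumptions for illustration and do not represent CBP’s staffing methodology.

```python
# Back-of-the-envelope conversion of prevented nuisance alarms into
# recovered officer time. The handling time per alarm is an assumed value.
alarms_prevented_per_year = 200_000   # CBP estimate at upgraded sites
minutes_per_alarm = 15                # assumption: secondary-inspection time

hours_recovered = alarms_prevented_per_year * minutes_per_alarm / 60
officer_year = 2_080                  # hours in one full-time officer-year
print(f"~{hours_recovered:,.0f} hours, ~{hours_recovered / officer_year:.0f} officer-years")
# ~50,000 hours, ~24 officer-years under these assumed inputs; actual
# staffing effects depend on when and where the alarms occur.
```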
During fiscal years 2018 through 2020, DHS is planning to replace upgraded Leidos RPMs at selected high-volume ports of entry with between 150 and 250 enhanced, commercially available RPMs that have even greater ability to discriminate between NORM and materials that pose a threat. According to DNDO and CBP officials, the improved threat discrimination offered by these new, enhanced RPMs will further reduce nuisance alarms and may enable high-volume ports of entry to implement remote RPM operations. Under remote operations, RPM scanning lanes would be monitored from a centralized location at each port using video cameras and traffic control devices, with CBP officers dispatched to inspection areas only in response to an RPM alarm. The staff formerly stationed at each RPM scanning site would be reassigned to other mission needs within a port of entry. CBP officials told us that, although CBP has yet to make a final determination, implementing remote operations would require a reduction of nuisance alarms to about one or two alarms per lane per day on average. Currently, upgraded Leidos RPMs—which have reduced nuisance alarms by more than 75 percent, on average, across the sites with the upgrades—provide alarm levels under one per day at many sites, according to CBP data. However, the data indicate that some high-volume ports of entry have lanes with higher nuisance alarm rates. According to a DHS analysis, the new, enhanced RPMs will provide nuisance alarm levels up to 99 percent lower than the legacy RPMs without upgrades, which is expected to be low enough to implement remote operations at these high-volume sites. (See fig. 3.) According to DNDO and CBP officials, RPM acquisition decisions will be informed by several factors, including available budget levels, performance of the upgraded Leidos RPMs, performance of new, enhanced RPMs as they are deployed, and the status of port expansions and reconfigurations. DNDO and CBP have conducted, or are planning, studies of nuisance alarm rates at sites with upgraded Leidos RPMs and new, enhanced RPMs. For example, DNDO and CBP collaborated on a preliminary study of nuisance alarm rates at ports of entry where upgraded Leidos RPMs are operating. Officials told us that further studies are necessary before DNDO and CBP determine how many of these sites will need new, enhanced RPMs to achieve nuisance alarm rates low enough to implement remote operations. According to CBP officials, CBP is also planning to test remote operations using new, enhanced RPMs. Specifically, according to officials, CBP has begun planning a pilot project at a seaport in Savannah, Georgia, to demonstrate the feasibility of remote operations using a test lane outfitted with a new, enhanced RPM. In addition to RPMs deployed as replacements, DHS estimates that, over the next several years, it will need to deploy approximately 200 RPMs because of port expansions and reconfigurations across the country. According to DHS officials, some of these will be Leidos RPMs out of existing inventory and some will be newly acquired, enhanced RPMs, depending on the scanning requirements at each individual site and the availability of the Leidos RPMs in inventory. According to DHS data, the agency had 143 upgraded Leidos RPMs in its inventory as of September 30, 2015, as well as approximately 85 low-use or no-use RPMs (75 Leidos and 10 Ludlum) expected to be available.
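To make the alarm-rate thresholds discussed above concrete, the short sketch below works through the arithmetic with an illustrative baseline. The 75 percent and 99 percent reduction figures and the one-to-two-alarm threshold come from the discussion above; the legacy baseline of 100 nuisance alarms per lane per day is a hypothetical value chosen only for illustration, not CBP data.

```python
# Illustrative arithmetic for the nuisance alarm reductions discussed above.
# The legacy baseline of 100 alarms/lane/day is hypothetical; the 75 percent
# (upgraded Leidos) and 99 percent (enhanced RPM) reductions and the remote
# operations threshold of about 2 alarms/lane/day are taken from the report.

legacy_alarms = 100.0  # hypothetical nuisance alarms per lane per day

upgraded_alarms = legacy_alarms * (1 - 0.75)  # upgraded Leidos RPM
enhanced_alarms = legacy_alarms * (1 - 0.99)  # new, enhanced RPM

remote_ops_threshold = 2.0  # about one or two alarms per lane per day

print(f"Upgraded Leidos RPM: {upgraded_alarms:.1f} alarms/lane/day")
print(f"Enhanced RPM:        {enhanced_alarms:.1f} alarms/lane/day")
print(f"Remote ops feasible with upgrade alone? {upgraded_alarms <= remote_ops_threshold}")
print(f"Remote ops feasible with enhanced RPM?  {enhanced_alarms <= remote_ops_threshold}")
```

On this hypothetical high-volume lane, the 75 percent reduction still leaves 25 alarms per day, well above the threshold, while the 99 percent reduction brings the rate down to 1 alarm per day, consistent with the report's observation that only the enhanced RPMs are expected to make remote operations feasible at the busiest sites.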
DHS inventory data indicate that this existing RPM inventory would be adequate to support the planned replacements at the northern land border and the added RPM deployments for port expansions and reconfigurations through the end of fiscal year 2017. According to DNDO, starting in fiscal year 2018, when DHS plans to begin acquiring the enhanced RPMs, the upgraded Leidos RPMs that the new, enhanced RPMs replace will be used for the northern land border replacements and the anticipated port expansions and reconfigurations. According to DNDO officials, DNDO is following the DHS acquisition directive as it plans for acquisition of the enhanced RPMs. This document directs DNDO to follow a four-phase acquisition life-cycle framework. The directive's implementing instruction outlines a series of "acquisition decision events" that include certain milestones. For example, prior to approval of an acquisition, DNDO must, among other things, develop a mission need statement outlining the capability need or gap that the acquisition is intended to address; an analysis of alternatives that explores the alternatives for addressing the identified need; a life-cycle cost estimate for the assets being acquired; and a test and evaluation master plan outlining how the program will ensure that the acquisition will deliver the capabilities needed by the program. As of August 2016, DNDO has completed an analysis of alternatives for the planned RPM acquisition, and officials told us that other required documentation is in process or awaiting management approval. Officials stated that DNDO plans to issue a request for proposals to industry before the end of the current calendar year and that the initial acquisition of the enhanced RPMs is planned for fiscal year 2018. DNDO and CBP have also recognized that the enhanced RPMs may be suitable for scanning of international rail crossings. DNDO has identified the lack of RPM scanning of international rail crossings as a concern for at least a decade and as a Global Nuclear Detection Architecture capability gap since 2014. In 2007, Congress directed the Secretary of Homeland Security to develop a system to detect both undeclared passengers and contraband, with a primary focus on the detection of nuclear and radiological materials entering the United States by railroad. In 2012, DNDO carried out an analysis of alternatives to identify solutions for international rail RPM scanning. Agency officials have cited technological and logistical challenges as key factors preventing RPM scanning of rail cars crossing the border. Specifically, according to DHS officials, international rail traffic represents one of the most difficult challenges for radiation detection systems, in part because of the length of the trains (up to 2 miles), the distance required to stop moving trains, and the difficulties in separating individual cars for further examination. In addition, the gamma ray or X-ray scans used to detect rail cargo anomalies, such as high-density material or hidden cargo, can interfere with RPM scanning if the two are in close proximity, which can cause nuisance alarms from the RPMs. In June 2015, CBP conducted a successful cargo-scanning demonstration project at an international rail port of entry using systems that integrate X-rays and enhanced RPM scanning technologies. This demonstration project showed the feasibility of adding RPM scanning to international rail crossings.
DNDO and CBP are jointly planning an acquisition of these integrated systems and plan to deploy them at international rail crossings beginning as early as fiscal year 2018. We provided a draft of this report to DHS for review and comment. DHS provided technical comments that we incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I. In addition to the contact named above, Ned Woodward (Assistant Director), Rodney Bacigalupo, Antoinette Capaccio, Michael Krafve, Cynthia Norris, Steven Putansu, and Kevin Tarmann made key contributions to this report.
Preventing terrorists from smuggling nuclear or radiological materials to carry out an attack in the United States is a top national priority. Terrorists could use these materials to make an improvised nuclear device that could cause hundreds of thousands of deaths and devastate buildings and other infrastructure. DHS's fleet of almost 1,400 RPMs helps secure the nation's borders by scanning incoming cargo and vehicles for radiological and nuclear materials. DHS began deploying RPMs to seaports and border crossings in fiscal year 2003. As RPMs began to approach the end of their expected 13-year service lives, DHS raised concerns over the sustainability of the fleet, the ability to maintain current scanning coverage, and the need for fleet recapitalization. GAO was asked to report on the sustainability of the RPM fleet. This report provides information on (1) DHS's assessment of the condition of its RPM fleet and how, if at all, that assessment has changed over time; and (2) DHS's plans for meeting detection requirements in the future. GAO reviewed agency documentation, analyzed data on RPM age and condition, and reviewed budget justifications. GAO interviewed DHS officials and officials from a national laboratory on the current status of the RPM fleet and DHS's plans for future RPM acquisitions. GAO is not making recommendations in this report. DHS provided technical comments on a draft of this report. These comments are incorporated as appropriate in the final report. The Department of Homeland Security's (DHS) assessment of its fleet of radiation portal monitors (RPM)—large, stationary radiation detectors through which vehicles and cargo containers pass at ports of entry—shifted over time and, as a result, DHS has changed the focus of its RPM replacement strategy. During fiscal years 2014 and 2015, as some RPMs began to reach the end of their estimated 13-year service life, DHS began planning for replacing the entire fleet of almost 1,400 RPMs. However, as of September 2016, the fleet remains nearly 100 percent operational and recent studies indicate that the fleet can remain operational until at least 2030 so long as proactive maintenance is carried out and RPM spare parts remain available. As a result, in 2016, DHS changed the focus of its RPM replacement strategy to selective replacement of RPMs—using existing RPMs that have been upgraded with new alarm threshold settings or purchasing enhanced, commercially available RPMs—to gain operational efficiencies and reduce labor requirements at some ports. During fiscal years 2016 through 2018, DHS plans to replace approximately 120 RPMs along the northern U.S. border with upgraded RPMs and, during fiscal years 2018 through 2020, plans to replace between 150 and 250 RPMs at select high-volume ports with enhanced, commercially available RPMs. Specifically, DHS plans to replace some legacy RPMs—those that cannot be upgraded with the new alarm thresholds—at northern U.S. land border crossings with RPMs from existing inventory that have been upgraded. This upgrade enables improved threat discrimination and minimizes “nuisance” alarms created by naturally occurring radioactive materials (NORM) in commonly shipped cargo such as ceramics, fertilizers, and granite tile. Improved discrimination between NORM and threat material will create efficiencies for the movement of cargo through ports and minimize time that DHS's Customs and Border Protection (CBP) officers spend adjudicating the nuisance alarms. 
DHS is also planning limited replacement of upgraded RPMs at select high-volume ports with enhanced, commercially available RPMs that offer nuisance alarm levels significantly lower than even the upgraded RPMs. Currently, upgraded RPMs at some high-volume ports do not reduce nuisance alarm rates enough to implement remote RPM operations—which would allow CBP officers to carry out other duties at the ports when not responding to an RPM alarm—because of the high number of vehicles and cargo containers passing through the ports daily.
During the Cold War, the United States and the Soviet Union built a total of 27 nuclear reactors to produce weapons-grade plutonium for nuclear weapons. Although all nuclear reactors produce plutonium as a byproduct of their operation, plutonium production reactors are specifically designed to produce a concentrated isotope of plutonium that is more readily used in nuclear weapons. (See app. III for additional information about the plutonium production nuclear fuel cycle.) The United States constructed 14 plutonium production reactors, of which only one, the N-reactor at Hanford, Washington, produced electricity in addition to weapons-grade plutonium. This reactor was shut down in 1987 for safety upgrades following the Chernobyl accident and never resumed operation. The United States had shut down all of its plutonium production reactors by 1989. The Soviet Union built 13 plutonium production reactors, and all but 3 have been shut down. (For a time line showing the history of these reactors and efforts to bring about their closure, see app. IV.) The three remaining reactors began operating between 1964 and 1968, and U.S. and Russian nuclear experts told us that these reactors are among the most dangerous in the world due to their age and poor design. In addition, the reactors lack safety features such as a containment structure, which is generally a steel-lined concrete, dome-like structure that serves as a barrier to the release of radioactive material during an accident. The lack of containment presents a greater risk for the two reactors at Seversk because, unlike the reactor at Zheleznogorsk, which is located inside a mountain, the Seversk reactors are above ground. Figure 1 shows the location of the Russian reactors. According to Russian officials in Seversk, the two reactors currently provide about 70 percent of the heat and electricity for the city's residents. However, the reactors have the capacity to produce more heat and electricity than is needed to meet the demands of Seversk's residents, and both heat and electricity have been sold to the nearby city of Tomsk since 1973. Officials in Zheleznogorsk told us that the reactor there provides 60 percent of the city's heat and 98 percent of its electricity. The amounts of replacement heat and electricity that the United States and Russia agreed to in the March 2003 reactor shutdown agreement are less than what is currently provided by the reactors, but Russian officials from both cities told us the agreed-upon amounts would be sufficient to meet their needs once the reactors are shut down. Commissioned in the mid-1960s, the three reactors have continued to operate although, according to Russian officials, they were originally designed to have an operating life of 20 years. Officials from Russia's nuclear regulatory agency, Gosatomnadzor, told us that since the 1960s, there have been at least three serious accidents and several minor incidents at one of the Seversk reactors. For example, in 1966, a coolant pipe ruptured, resulting in the release of contaminants into the atmosphere near the reactor site. Subsequently, the same reactor experienced a partial meltdown that damaged part of the core. Finally, in 1999, the reactor experienced another serious incident when spent fuel was ejected onto the top of the reactor. Since the program was transferred from DOD to DOE in December 2002, DOE (1) has developed an overall program plan to manage the construction of the fossil fuel plants, (2) has selected two U.S.
contractors to oversee work on the replacement fossil fuel plants, and (3) is working with its U.S. contractors to review design and construction plans for the plants. DOE plans to complete refurbishment of the plant in Seversk by 2008 and construction of the plant in Zheleznogorsk by 2011. However, U.S. and Russian officials expressed concern that the large number of U.S. and Russian organizations involved in the overall management of the program (17 in all) makes coordination difficult and has led to delays. Additionally, DOE and U.S. contractor officials told us that the primary Russian contractor, Rosatomstroi, has not previously worked with U.S. contractors on large-scale construction projects and currently lacks enough staff to effectively implement its part of the program (overseeing the Russian subcontractors), which could lead to delays. DOE has developed an overall management plan for its program that (1) emphasizes detailed project planning, (2) seeks to identify project risks, and (3) develops alternative strategies to reduce risks. The program management elements in DOE's plan are detailed in DOE order 413.3, which the department uses for construction projects and the acquisition of capital assets in the United States. Under DOE order 413.3, the program will move through five critical decision points, the major stages of design and construction, upon the approval of DOE's Deputy Secretary. These critical decisions are formal determinations that allow the project to proceed to the next phase and commit additional resources. Critical decisions are required during the planning and execution of a project, for example, before beginning conceptual design, before starting construction, and when beginning operations. (For more detailed information about DOE's management plan, see app. V.) DOE has also selected two U.S. contractors to oversee work on the two plants. In mid-2003, DOE awarded contracts to (1) Washington Group International (WGI) to oversee Russia's refurbishment of an existing fossil fuel plant at Seversk and (2) Raytheon Technical Services (Raytheon) to oversee Russia's construction of a new fossil fuel plant at Zheleznogorsk. These contracts cover the preliminary design phase of the projects. DOE plans to evaluate the performance of both contractors at the conclusion of the preliminary design phase. According to DOE, an extension or new contract would be required to cover the final design, construction, and closeout phases. In addition, DOE employs the National Energy Technology Laboratory, a DOE national laboratory that has historically focused on the development of advanced technologies related to coal and natural gas, to accomplish various management support tasks. Finally, DOE, together with its contractors, is reviewing the detailed design and construction plans that Russian subcontractors are developing for the fossil fuel plants at Seversk and Zheleznogorsk. At Seversk, DOE plans to refurbish an existing fossil fuel plant, which was built in 1953. To meet the heat and electricity production levels specified in the March 2003 agreement, DOE plans to replace one boiler (boilers burn coal to produce heat and steam); upgrade the plant's 12 existing boilers to improve their efficiency and performance; and replace three turbine-generators, which use the steam produced by the boilers to generate electricity. (See app. VI for more information about the operation of coal-fired power plants.)
In addition, DOE plans to improve the infrastructure at the plant by, among other things, enhancing the coal-handling system and improving the plant's water chemistry system. DOE plans to complete the refurbishment of the fossil fuel plant at Seversk by 2008. At Zheleznogorsk, DOE plans to construct a new coal-fired fossil fuel power plant to meet the heat and electricity production levels specified in the March 2003 agreement. This new plant is scheduled for completion in 2011. Since the plants are being built to Russian standards, DOE plans to use Russian environmental, safety, and health standards in the construction of the fossil fuel plants rather than U.S. standards. However, in addition to satisfying all Russian regulations, DOE's contractors are responsible for identifying potential environmental concerns resulting from emissions at the plants and comparing the Russian environmental standards with applicable international standards. We identified 17 U.S. and Russian organizations that are participating in the program. In total, these organizations have a variety of roles and responsibilities, including setting policy and direction, providing technical assistance, and managing and overseeing the program. In addition, there are numerous Russian subcontractors who will be responsible for supplying, manufacturing, or installing equipment for the replacement fossil fuel plants. Specifically, in addition to DOE, the U.S. organizations participating in the program include the following: the National Nuclear Security Administration, a separately organized agency within DOE, which oversees the program; Washington Group International, DOE's primary integrating contractor for refurbishing the Seversk replacement fossil fuel plant; Raytheon Technical Services, DOE's primary integrating contractor for building the Zheleznogorsk plant, which has subcontracted some of its work to the U.S. construction firm Fluor; the National Energy Technology Laboratory, which performs various management support tasks for DOE and has two primary subcontractors, Energy and Environmental Solutions and Concurrent Technologies Corporation, that provide management support to DOE's program (Concurrent Technologies Corporation, in turn, subcontracts some work on the program to Parsons); and PNNL, which had been the lead contractor for DOE's planned Nuclear Safety Upgrades Project and, though this project was cancelled in February 2004, will still have limited participation in developing a reactor shutdown plan. In addition to MINATOM, the numerous Russian participants in the program include the following: Rosatomstroi, the primary Russian contractor working for MINATOM on building the replacement fossil fuel plants; Tvel-Finance, which supports WGI on the Seversk fossil fuel plant project and is a subcontractor to Rosatomstroi; the Siberian Chemical Combine in Seversk, which operates the two reactors there and owns the fossil fuel plant that DOE plans to refurbish; Tomsk Teploelectroproekt, a subcontractor to Rosatomstroi responsible for developing the refurbishment design for the replacement fossil fuel plant at Seversk; the Mining and Chemical Combine, which operates the reactor in Zheleznogorsk; and the Experimental Design Bureau for Machine Building (OKBM), which was involved in the development of many of DOE's planned safety upgrades for the reactors and is involved in developing the reactor shutdown plan. Figure 2 shows the relationships between key program participants.
DOE officials told us that the numerous organizations involved in managing this complex program make coordination difficult and have led to delays. For example, at Zheleznogorsk, the acquisition of the proposed site to build the replacement fossil fuel plant was delayed for 9 months because of a dispute over the value of the land among MINATOM; the Mining and Chemical Combine, which is responsible for operating the reactor; and a local Siberian power utility. Raytheon officials told us that the project experienced a "day-to-day" slippage while the land acquisition issue remained unresolved. To improve program management, DOE plans to hire a resident officer in charge of construction who will reside in Russia for the duration of the program. Specifically, the resident officer's responsibilities will include (1) ensuring that contractual work is carried out, (2) providing daily reviews of contractor progress, (3) monitoring the quality of work being performed, and (4) assisting in early identification and resolution of construction problems. DOE and U.S. contractor officials also told us that the primary Russian contractor, Rosatomstroi, has not previously worked with U.S. contractors on large-scale construction projects and does not currently have sufficient staff to effectively implement its part of the program, which may lead to additional program delays. Rosatomstroi was created in 2002 and has a limited budget and little authority to make decisions on behalf of the Russian government without the approval of MINATOM. Because MINATOM designated Rosatomstroi as the primary Russian integrating contractor, DOE must rely on Rosatomstroi to manage Russia's part of the program, which includes overseeing the numerous Russian subcontractors. Rosatomstroi officials told us in September 2003 that they had 8 employees dedicated to the program but that they plan to add about 40 additional staff as the Seversk and Zheleznogorsk fossil fuel plant projects progress from the design phase to construction. Officials from both U.S. contractors said that one of their most difficult initial tasks has been to mentor Rosatomstroi personnel on project management and Western business practices. WGI officials told us that this task has taken much-needed time away from other planning aspects of the Seversk project. For their part, Rosatomstroi officials expressed concern that DOE's use of two U.S. integrating contractors to provide day-to-day project oversight is burdensome because it forces them to adapt to different management systems and reporting requirements. Despite DOE's efforts to develop a sound management structure, department officials told us that successful program implementation ultimately depends on Russia's commitment and cooperation. A recent assessment of DOE's program by the Office of Management and Budget (OMB) reinforces the need for Russia's cooperation to improve the program's chances for success. OMB pointed out that DOE must rely on Russia to create conditions that will not limit the effectiveness and efficiency of the program to shut down the reactors. Furthermore, OMB stated that Russia's creation of these conditions is largely out of DOE's control and is a potential flaw in the structure of the program. However, a Department of State official told us that he believes Russia has every incentive to cooperate in the program because shutting down the reactors and obtaining replacement heat and electricity sources is in Russia's interest.
Final shutdown of Russia’s three plutonium production reactors is uncertain because DOE faces challenges in implementing its program. Perhaps the most important of these challenges is ensuring Russia’s commitment to key aspects of the program. Russia’s recent rejection of DOE’s initiatives to reduce the amount of plutonium being produced by the reactors and to improve the safety of the reactors prior to their shutdown raises serious questions about Russia’s commitment to the fundamental nonproliferation and safety goals of the program. A second challenge DOE faces is that the existing reactor shutdown agreement does not specify the steps needed to complete the shutdown of the reactors and the specific requirements that must be met to license and commission the replacement fossil fuel plants. Furthermore, the agreement contains shutdown dates that are not realistic. Finally, thousands of Russian nuclear workers who are currently employed at the reactors and related facilities will be displaced when the reactors are closed. Although DOE officials told us that a failure to find jobs for these workers could threaten the success of the program, DOE has not developed a plan to coordinate the shutdown of the reactors with other DOE and Department of State efforts designed to find employment for Russian nuclear workers. The main nonproliferation goal of DOE’s program is to stop Russia’s production of weapons-grade plutonium. Because closure of the reactors will not occur until the fossil fuel plants are built and suitable heat and electricity sources are provided, DOE and MINATOM discussed interim measures to reduce the amount of plutonium produced by the reactors before they are shut down, as well as measures to accelerate the reactors’ shutdown. According to DOE officials, Russia’s support for this initiative would have clearly signaled a commitment to the nonproliferation goal of the program. In July 2003, DOE and Russian officials identified three options to reduce the reactors’ output of plutonium while the replacement fossil fuel plants are being built: (1) extending the period during the summer when the reactors are shut down for maintenance and refueling, (2) shutting down one of the two reactors at Seversk once the refurbishment of the fossil fuel plant reaches an agreed-upon level of completion, and (3) shutting down the reactor at Zheleznogorsk before the fossil fuel plant is completed but after it is able to supply an adequate amount of heat to the city. DOE believed that pursuing all of the reduction options could reduce the amount of weapons-grade plutonium produced by the reactors before their planned shutdown dates by up to 25 percent, or one-third metric ton, annually. DOE officials told us that the first option, extending summer outage periods, held the greatest promise for reducing plutonium production at the earliest possible date, which DOE believed could occur in the summer of 2004. Russian reactor officials in Zheleznogorsk told us that extending summer outage periods would be the easiest option to reduce the production of plutonium. Because the initiative to reduce the production of plutonium is outside the scope of DOE’s program to build replacement fossil fuel plants, DOE obtained funding from the Department of State’s Nonproliferation and Disarmament Fund to support the estimated $380,000 cost of studying the three plutonium production reduction options. DOE also planned to solicit participation from other countries to help fund these efforts. 
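As a rough consistency check on the figures above: if a one-third metric ton annual reduction corresponds to up to 25 percent of output, the three reactors together must produce roughly 1.3 metric tons of weapons-grade plutonium per year. The minimal sketch below derives that implied baseline using only the two figures cited in this report; the derived total is an inference, not a number stated in the report.

```python
# Back-of-the-envelope check on the plutonium reduction figures cited above.
# Both inputs come from the report; the implied annual output is derived.

reduction_fraction = 0.25          # up to 25 percent of annual production
reduction_metric_tons = 1.0 / 3.0  # one-third metric ton per year

implied_annual_output = reduction_metric_tons / reduction_fraction
print(f"Implied annual output: {implied_annual_output:.2f} metric tons")  # ~1.33
```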
In November 2003, the First Deputy Minister of MINATOM stated in a letter to DOE that Russia no longer wanted to explore the possibility of reducing the amount of plutonium produced while the reactors continue to operate and that pursuing such options could affect the time frame for closing the reactors. According to the letter, Russia did "not find it worthwhile to waste efforts on a project for reducing plutonium production prior to the permanent shutdown of the reactors." The letter also stated that Russia's main objective was to shut down the reactors as soon as possible. In response to the letter, DOE is no longer pursuing extended summer outages at the reactors as an option for reducing the amount of plutonium produced. A Department of State official told us that Russia's decision to reject this proposal was likely based on its security concerns about providing U.S. personnel with access to the reactors for the purpose of monitoring and verifying the reduced amount of plutonium that would be produced. In December 2003, MINATOM requested that DOE fund a study to examine the possibility of shutting down one of the Seversk reactors prior to the completion of the replacement fossil fuel plant. To achieve the early closure of one of the reactors, MINATOM proposed that the refurbishment of the Seversk plant be accelerated through the advanced procurement of certain major components, such as the boiler. However, unlike extending the summer outage periods, this option could not be implemented in mid-2004. As part of the reactor shutdown agreement, DOE pledged to improve the safe operation of the reactors; to accomplish this goal, DOE planned to fund a $21 million effort, consisting of 28 safety upgrade projects—such as fire safety system improvements, enhancements to emergency electrical power systems, and risk assessments. DOE selected PNNL to oversee the installation of the safety projects. DOE's original plan called for work on the upgrade projects, including design work and contracting activities, to take place during a 24-month period—beginning in mid-2003 and ending by mid-2005—in order to maximize the benefits of the safety enhancements before the reactors are shut down. (See app. VII for a summary of DOE's planned safety upgrade projects.) However, the start of the program was delayed for several months because the United States and Russia were unable to agree on the amount of background information that Russia required U.S. workers to submit for Russia's national security review purposes before they would be granted access to the reactors. In February 2004, the failure to resolve this issue led MINATOM to reject DOE's planned assistance to improve the safety of the reactors and instead to state that it would undertake the necessary safety improvements on its own. As a result, DOE officials told us they were canceling the safety upgrade project and are considering several options for transferring the remaining unspent project funds to other program areas, including accelerating the completion of the replacement fossil fuel plant at Zheleznogorsk. DOE's Assistant Deputy Administrator, Office of International Nuclear Safety and Cooperation, told us that he was very pessimistic that Russia would perform the safety upgrades. Additionally, he noted that even if Russia decides to install the upgrades, they may not be of sufficient quality or quantity to reduce the risk posed by the reactors' continued operation.
A PNNL program official also expressed doubt that Russia would pursue upgrading the reactors. He noted that without DOE's planned safety upgrades, the reactors would continue to deteriorate until they are finally shut down. None of the reactors would be licensed for operation in the United States or in Western countries because they lack modern safety controls, and at least one reactor has experienced structural damage causing obstructions in the channels where control rods are inserted in case the reactor must be shut down in an emergency. The control rods are devices used to control the rate of nuclear reactions in a reactor. In the view of the PNNL official, it is likely that all three reactors have experienced such damage. The deteriorating safety conditions present a greater danger at the two Seversk reactors than at Zheleznogorsk because, unlike the Zheleznogorsk reactor, the reactors at Seversk are located above ground. Furthermore, one of the Seversk reactors has experienced multiple accidents, including one that resulted in the expulsion of fuel elements onto the top of the reactor in 1999. Based on our analysis, the reactors are showing the wear of having been run for a very long time at a high output. The danger that these reactors present is the risk of a catastrophic reactor failure—such as a loss of coolant accident—which would result in a fire expelling the highly enriched uranium fuel and radioactive byproducts such as plutonium and strontium-90, all of which are highly toxic and carcinogenic. The danger from such a fire is that radioactive particles would be dispersed and breathed into the body, causing either kidney damage from particles of uranium or cancer from particles of strontium-90 and plutonium. (For our technical analysis of the safety problems posed by the reactors, see app. VIII.) Regardless of the safety condition of the reactors, Russian officials stated that they plan to run the reactors until replacement energy is provided to the residents of Seversk and Zheleznogorsk. Because winter temperatures in the region of the cities can reach -40 degrees Fahrenheit, officials from Gosatomnadzor told us that they would continue issuing operating licenses to the reactors each year unless a "calamity" occurred. Although the current agreement calls on Russia to shut down the reactors when the replacement fossil fuel plants produce a certain amount of heat and electricity, it does not specify what steps are needed to shut down the reactors; how long it will take to shut down the reactors; or the process for and time required to license and commission the replacement fossil fuel plants. DOE indicated that agreeing on these issues and developing a specific plan of action to complete the program is critical to success. As a result, DOE initiated discussions with Russia to develop a reactor shutdown plan that will detail the activities needed to shut down the reactors and commission the fossil fuel plants. Additionally, the reactor shutdown plan will analyze expenses associated with shutting down the reactors. Further, the current agreement contains shutdown dates that are unrealistic and do not reflect DOE's planned completion dates for the replacement fossil fuel plants. Under the March 2003 agreement, the United States and Russia agreed that the two reactors in Seversk and the reactor in Zheleznogorsk would stop producing plutonium by December 31, 2005, and December 31, 2006, respectively.
However, according to DOE, Department of State, and Russian officials, these dates are no longer realistic because DOE does not plan to complete the replacement fossil fuel plant in Seversk until 2008 or the plant in Zheleznogorsk until 2011. Russian officials have reiterated that they will not shut down the reactors until the agreed-upon replacement power and heat generating capacity are provided by the United States. DOE and Department of State officials told us that the current agreement would be amended to reflect DOE’s planned schedule for the completion of the fossil fuel plants once project designs are completed. Failure to secure specific agreement on these changes could put program success at risk as it has for other U.S. nonproliferation efforts. Specifically, in the past, some U.S. nonproliferation efforts that were dependent on Russian cooperation have been canceled or adversely affected in part because of a lack of specific agreements and coordination between relevant U.S. and Russian organizations. Notable examples include two large-scale construction projects in Russia that were managed by DOD under the Cooperative Threat Reduction (CTR) program—a facility to dispose of liquid propellant used to fuel Russian ballistic missiles at Krasnoyarsk and the Fissile Material Storage Facility at Mayak. In both cases, DOD did not secure specific provisions in the agreements that addressed all program risks to the projects. In 1993, DOD agreed to help Russia dispose of liquid propellant used to fuel Russian ballistic missiles and eventually agreed to finance the construction of a disposal facility. In February 2002, after $96 million had been spent on the project, DOD officials learned that Russia had used the liquid propellant in its space program but had failed to notify DOD. As a result, DOD canceled construction of the facility and terminated the project. The DOD Inspector General found that Russia used the rocket fuel without DOD’s knowledge because the agreements with Russia did not require it to provide the fuel to DOD for disposal and did not provide DOD with access rights over the fuel’s storage. In another case, the United States agreed to build a storage facility in Mayak, Russia, for fissile materials, including highly enriched uranium and plutonium. However, the agreement did not provide DOD with rights to verify the source of the fissile material to be stored in the facility, nor did it specify the amount or type of fissile material Russia was required to deposit in the facility. By July 2003, DOD had spent $372.8 million on fissile material containers and the design and construction of the facility. However, in July 2003, MINATOM notified DOD that Russia would store only 25 metric tons of plutonium at the facility, while converting its highly enriched uranium into low enriched uranium to sell to the United States for use in civilian nuclear power plants. As a result, only one-fourth of the facility’s storage capacity will be used. The DOD Inspector General concluded that for future CTR projects, implementing agreements should be negotiated that would “require Russia to provide the United States with all the necessary resources to assure that assistance is used for intended purposes.” As a result of congressional concern and in response to recommendations from the DOD Inspector General, the CTR program has taken several steps to protect the investment of U.S. 
funds and improve program oversight, including replacing good faith obligations from Russia with specific legal commitments before proceeding with any current or future CTR projects. DOE officials told us that worker transition issues at Seversk and Zheleznogorsk have the potential to undermine efforts to shut down the reactors and present major challenges for the program. In July 2002, Russia's First Deputy Minister of Atomic Energy said that the most "acute" problem in downsizing Russia's nuclear weapons complex was at Zheleznogorsk, where the closure of the reactor would lead to the loss of 5,000 to 7,000 jobs in a city where other employment opportunities are limited. He also predicted that the closure of the two reactors in Seversk would lead to the loss of 5,000 to 6,000 additional jobs. Russian officials from both Seversk and Zheleznogorsk told us that finding jobs for displaced workers is their highest priority. Although these officials recognize that Russia is primarily responsible for employing these workers, they are seeking assistance from the United States to help address this problem. Since many Russian nuclear workers have highly specialized experience manufacturing and processing weapons-grade nuclear material, their unemployment poses a significant proliferation risk because they might sell sensitive nuclear information to terrorists or countries of concern. Specifically, many nuclear workers in Seversk and Zheleznogorsk possess knowledge and skills in machining nuclear material and manufacturing nuclear weapons. Since 2001, Congress has appropriated about $40 million each year to support DOE's efforts to assist Russia in finding employment for its displaced nuclear workers through the Russian Transition Initiatives (RTI) program. The RTI program comprises two nonproliferation programs: the Nuclear Cities Initiative (NCI), which currently has some projects in Zheleznogorsk, and the Initiatives for Proliferation Prevention (IPP), which has a few projects in both cities. Both the NCI and IPP programs seek to prevent the proliferation of nuclear weapons knowledge from unemployed Russian nuclear weapons scientists—a problem known as "brain drain." As directed by the Congress, the NCI program works in 3 of Russia's 10 closed nuclear cities: Snezhinsk, Sarov, and Zheleznogorsk. The IPP program can work in all of the closed nuclear cities. From 1999 to 2003, the NCI program spent about $15.7 million on 23 projects in Zheleznogorsk. During the same period, the IPP program sponsored one project in Zheleznogorsk costing about $1.8 million and one project in Seversk that cost $1.2 million. However, NCI has not initiated any new projects since September 2003 because the government-to-government agreement guiding the program expired. The agreement has not been renewed because the United States and Russia have not agreed upon legal protections regarding liability claims that could be brought against the United States, its contractors, and their employees. DOE's office that administers the reactor shutdown program (Office of International Nuclear Safety and Cooperation) and the DOE office that is responsible for the RTI program (Office of Nonproliferation and International Security) have begun to coordinate their efforts, which include attending regular meetings and planning for joint trips to the cities.
However, as of April 2004, DOE had not developed a plan to formally coordinate the department's program to facilitate the shutdown of the reactors with the ongoing DOE efforts to help Russia find employment for its displaced nuclear workers. DOE officials from both program offices told us they are starting to draft a joint action plan to address Russian workforce transition issues related to the shutdown of the plutonium production reactors. In addition, DOE is working with Swiss officials to organize an international conference to discuss potential employment projects at Seversk and Zheleznogorsk. Additionally, the United States and several other countries fund the International Science and Technology Center (ISTC) program. This program supports science centers in Russia and Ukraine and focuses on paying nuclear, chemical, and biological weapons scientists to conduct peaceful research in a variety of areas, such as developing new anticancer drugs, improving nuclear safety, and enhancing environmental cleanup techniques. The Department of State is responsible for implementing the program on behalf of the U.S. government and chairs an interagency group that conducts a policy review of all project proposals submitted for funding. As of March 2004, ISTC had three active projects in Seversk and Zheleznogorsk. According to DOE officials, DOE has not coordinated with the ISTC program on workforce issues related to the shutdown of the plutonium production reactors. They noted that DOE views the shutdown effort as a departmental initiative, although DOE plans to seek support from other countries in its efforts to find employment opportunities for displaced workers. Department of State officials told us that clearer agreement on the problem and a coordinated U.S. government approach were needed before the ISTC could be used to address worker displacement issues at Seversk and Zheleznogorsk. They also stated that they are prepared to use the ISTC program in coordination with other U.S. efforts to address the problem. As of December 31, 2003, DOE had spent $7.8 million, about 4 percent of the funds available, to begin work on planning and developing the program. In addition, DOE officials told us that they expect the final cost of the program to be significantly higher than their initial estimate. DOE's slow rate of spending on program activities has led to about $179.1 million in unobligated and unspent funds. Furthermore, the cost to build the replacement fossil fuel plants, which DOE had projected to be $466 million, is uncertain because the estimate is based on Russian cost projections that DOE has not yet validated. According to DOE officials, the actual construction costs for the plants are likely to be significantly higher than the original estimate, possibly as much as $1 billion. DOE and its contractors are currently revising the preliminary estimate to reflect changes in the projects' schedule and scope. As of December 31, 2003, DOE had unobligated funds totaling $137.9 million and an additional $41.2 million that had been obligated but not yet spent. Together, these funds represent DOE's total carryover balance of $179.1 million, which represents about 96 percent of the funds available for the program. As table 1 shows, through December 31, 2003, DOE had received $186.9 million in funding for the program but had spent only about $7.8 million of these available funds to begin work on planning and developing the program.
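The carryover figures reported above are internally consistent, as the short check below shows; all dollar amounts are taken directly from the report, and only the sum and the percentages are computed.

```python
# Consistency check on the carryover balance figures reported above.
# All amounts are in millions of dollars, as of December 31, 2003.

received = 186.9          # total program funding received
spent = 7.8               # spent on planning and developing the program
unobligated = 137.9       # funds not yet obligated
obligated_unspent = 41.2  # funds obligated but not yet spent

carryover = unobligated + obligated_unspent
print(f"Total carryover: ${carryover:.1f} million")    # 179.1
print(f"Carryover share: {carryover / received:.0%}")  # 96%
print(f"Spent share:     {spent / received:.0%}")      # 4%
```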
Specifically, DOE indicated that these funds were mainly spent on planning and developing the program, including travel, overhead, project administration, and document translation costs. According to DOE officials, three major factors account for DOE's current carryover balances. First, after management of the program was transferred from DOD to DOE in December 2002, DOE received $74 million in unspent program funding from DOD; these funds were in addition to DOE's appropriations for the program. Second, according to DOE and U.S. industry officials, large-scale construction projects require "front end" funding because construction projects are executed over several years; DOE officials expect the program's obligations and expenditures to increase significantly when the Seversk project moves from the design phase to the construction phase near the end of fiscal year 2004 and the Zheleznogorsk project moves into the construction phase in the second quarter of fiscal year 2005. Third, difficulties and unforeseen delays are frequently associated with doing work in Russia. Large carryover balances are not uncommon for DOE nonproliferation programs in Russia. In March 2003, DOE reported that its nuclear nonproliferation programs had a total carryover balance of almost $460 million. DOE indicated that the large carryover amounts were due to difficulties in negotiating and executing contracts with Russia and the multiyear nature of these programs. Despite the program's large carryover balance, DOE has requested an additional $50.1 million for the program in fiscal year 2005. Specifically, the request includes $39.5 million for the Seversk fossil fuel plant construction, about $9.6 million for the Zheleznogorsk plant, and $1 million for technical support activities. In addition, DOE's fiscal year 2005 budget projects the annual budget requests for fiscal years 2006 through 2009 to be between $56 million and $66.9 million. In April 2003, DOE estimated that it would cost $466 million to build the replacement fossil fuel plants. DOE estimated that the plant at Seversk would cost about $172 million and the Zheleznogorsk plant would cost approximately $295 million. However, because DOE's estimates are based on Russian cost projections developed between 2000 and 2001, which DOE has not validated, the final cost to build the replacement fossil fuel plants is uncertain. According to DOE officials, revised cost estimates are currently being developed by DOE's contractors and are likely to be significantly higher than the original estimate, possibly totaling as much as $1 billion. For example, the original estimate did not include the costs of U.S. and Russian integrating contractors. Several other factors contributing to the projected cost increase include the high rate of inflation in Russia, higher than expected Russian labor and overhead rates, and unanticipated problems with the design plans for both plants. For example, DOE officials told us that the initial cost estimates for the Seversk plant were based on an existing Russian design for the refurbishment, which DOE believed to be at an advanced stage. However, after DOE and WGI began examining the design documents, they found that much of the design was incomplete. As a result, Russian contractors will perform additional design work, which will contribute to increased project costs. As more of the design work is completed, refined overall cost and schedule estimates will be developed for the plants.
According to DOE, firm cost estimates will be provided to the Congress by the end of 2004. DOE plans to fund the entire cost of the replacement fossil fuel plants, which will be based on a Russian design and constructed by Russian contractors. DOE, Department of State, and National Security Council (NSC) officials told us that the United States did not insist that Russia commit resources to building the plants when the March 2003 reactor shutdown agreement was signed. NSC and Department of State officials noted that the United States was concerned that Russia would not be able to fund its part of the effort, and it did not want the program to be subject to the unpredictability of the Russian budgetary process, which could delay the program. A Department of State official also noted that the U.S. government decided that the U.S. interest in pursuing the objective of the earliest possible shutdown of the reactors overrode its interest in a potentially fairer allocation of costs for building the replacement fossil fuel plants. DOE officials pointed out that Russia will be responsible for the maintenance and operation of the plants once they are completed and that Russia is sacrificing some electricity production capacity because the replacement fossil fuel plants will not produce as much electricity as the reactors. DOE considers these Russian efforts to be "in-kind" contributions. Cost increases and schedule delays are not uncommon for U.S. nonproliferation programs in Russia. For example, the United States has had difficulties with past major construction projects in Russia, such as the Chemical Weapons Disposal Facility at Shchuch'ye, and many of these projects have experienced dramatic cost increases, significant delays, or other major setbacks. At Shchuch'ye, DOD is assisting Russia by building a chemical weapons destruction facility. As a result of changes in the project's scope and other factors, the estimated cost for the project increased from about $750 million to about $1.04 billion. Congressional concern over Russia's financial commitment to the project led to a congressional mandate that Russia commit at least $25 million annually toward its chemical weapons destruction activities. DOE's effort to secure the shutdown of Russia's three plutonium production reactors is a critical nonproliferation program because it seeks to eliminate the production of weapons-grade plutonium. However, implementing this complex and technically challenging program is becoming an increasingly risky undertaking for DOE. Some actions that Russia has taken raise serious questions about its commitment to the nonproliferation and safety-related goals of DOE's program. We believe, as do some DOE officials, that Russia could have demonstrated good faith by reducing the amount of plutonium produced by the reactors in the period before they are shut down. This could have been accomplished by extending the amount of time the reactors are shut down for maintenance during the summer months—a proposal that Russian officials told us could be easily accomplished. However, Russia informed DOE that it had no interest in pursuing this opportunity. While Russia's unwillingness to consider this proposal represents a setback, we believe that extending the summer outage periods for the reactors would further U.S. nonproliferation objectives and meet an important national security goal.
In addition, DOE was willing to spend over $20 million to improve the safety of these reactors, which have been characterized as being among the most unsafe reactors operating today. In this case, Russia also rejected DOE's planned assistance to improve the reactors' safety and claims that it will make its own safety improvements. We believe that the continued operation of these reactors, given their current age and condition, presents a significant and growing safety risk. Without DOE's proposed safety upgrades, the safety risks posed by the reactors will continue to increase dramatically. Although the existing agreement between DOE and Russia's Ministry of Atomic Energy governing the shutdown of Russia's plutonium production reactors provides a general framework for cooperation, there are no guarantees that the reactors will be shut down within DOE's projected time frames. Furthermore, the agreement does not specify what steps must be taken to shut down the reactors and what specific requirements must be met to certify the completion of the replacement fossil fuel plants. Without defining these steps and specific requirements, DOE will be unable to develop accurate estimates of the true scope and cost of its program or to determine more precisely when the reactors will be shut down. The history of U.S.-Russian nonproliferation activities has demonstrated that some well-intentioned programs have had limited success because the agreements governing them lacked specificity or oversight was inadequate. The lessons of the past should be carefully considered as DOE moves forward with its program. Furthermore, the existing time frames for shutting down the reactors reflected in the agreement are neither accurate nor achievable. DOE, Department of State, and Russian officials recognize that the shutdown dates in the agreement are unrealistic and will need to be revised to reflect DOE's schedule for the completion of the fossil fuel plants. Because of the history of failed efforts to secure the reactors' closure and the inability to achieve previously agreed upon shutdown dates for these reactors, we believe it would be in the best interests of the United States to revise the agreement in order to have increased assurances that the reactors will be permanently shut down. A major consequence of DOE's program to assist Russia's closure of the reactors will be the displacement of thousands of Russian nuclear workers who are currently employed at the reactors and related facilities. Many of these workers have received specialized training in the manufacture and reprocessing of weapons-grade nuclear material and could pose a serious proliferation risk if unemployed because they might sell sensitive nuclear information to terrorists or countries of concern. This looming problem, if left unaddressed, has the potential to undermine the program. Although DOE has started to coordinate the reactor shutdown program with the department's other efforts to employ Russian nuclear workers, specifically the Russian Transition Initiatives, it has not developed a plan to coordinate these two nonproliferation programs. Moreover, there is no overall U.S. government strategy that would integrate the Department of State's International Science and Technology Center program with DOE's programs to employ Russian weapons scientists, particularly in the cities where the reactors will be shut down. A jointly planned effort could strengthen U.S.
nonproliferation efforts by leveraging resources and expertise between these programs. Such a plan could also identify other options to support employment opportunities in the two cities, including seeking financial support from other countries. Estimated costs to construct the replacement fossil fuel plants are expected to increase dramatically. With the total cost for the program expected to be as much as $1 billion, DOE’s program has taken on greater financial risk and will require a more substantial investment of resources. Because the United States has agreed to fully fund the costs of the replacement plants, Russia has little incentive to control construction costs. Russia would be more likely to show fiscal restraint if it were responsible for funding a portion of the construction projects. In the final analysis, this program will provide Russia with significant capital assets that Russia would have had to finance itself if not for the assistance of the United States. Additionally, DOE’s approximately $179 million balance of unobligated and unspent program funding raises concerns, especially in light of the department’s request for an additional $50.1 million in fiscal year 2005. Although DOE officials believe that these carryover balances are justified, it is highly unlikely that DOE will be able to spend its entire available program funding by the end of fiscal year 2004 because construction at both plants is not expected to begin until at least fiscal year 2005. To help achieve important U.S. nonproliferation objectives, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration continue efforts to reduce the amount of plutonium produced by the reactors as an interim measure before they are permanently shut down. Specifically, the Secretary and the Administrator should continue to pursue the option of extending summer outage periods at the reactors as a way to realize the immediate nonproliferation benefits of reduced plutonium production in Russia. To increase the chances for program success by clarifying the existing reactor shutdown agreement, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration and Secretary of State, do the following: reach agreement with Russia on the steps that must be taken to permanently shut down the reactors and the specific requirements that must be met to complete the replacement fossil fuel plants; identify any additional costs that may surface as a result of refining the scope of work associated with shutting down the reactors and completing the replacement fossil fuel plants and revise cost and schedule estimates for the program accordingly; and amend the March 2003 reactor shutdown agreement as soon as practicable to accurately reflect DOE’s more realistic shutdown dates for Russia’s three plutonium production reactors. To maximize the benefits of related U.S. nonproliferation efforts, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration do the following: Create a specific plan and take steps to formally coordinate DOE’s program to assist Russia’s closure of the three plutonium production reactors with the department’s efforts to find jobs for displaced Russian nuclear workers through the Russian Transition Initiatives. 
Such a plan should be coordinated with Russia and should include strategies for obtaining assistance from other countries in finding employment for these workers. Take the lead in developing a comprehensive plan that focuses on integrating U.S. efforts to employ Russian nuclear workers in the cities of Seversk and Zheleznogorsk. The plan should be developed in conjunction with the Secretary of State. Such a plan should consider ways to better ensure that future projects funded by DOE and the Department of State in Seversk and Zheleznogorsk are clearly focused on finding jobs for Russian workers who will be displaced once the plutonium production reactors and related facilities are closed. To help defray the escalating costs of DOE's program to shut down Russia's plutonium production reactors, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration consider seeking financial support from Russia to construct the replacement fossil fuel plants. To the extent possible, these contributions should not be limited to in-kind contributions such as building materials, labor, or the value of land. To address concerns about large carryover balances of program funding, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration monitor funding requirements to ensure that funds are obligated in a timely manner and determine whether future funding requirements need to be reduced in light of the slow rate of spending to date on the program. We provided the Departments of Energy and State with draft copies of this report for their review and comments. DOE's and State's written comments are presented as appendixes IX and X, respectively. DOE's National Nuclear Security Administration said the draft report provided a balanced evaluation of its program to shut down Russia's three plutonium production reactors. DOE agreed to implement our recommendations. The Department of State agreed with all of our recommendations except the recommendation that DOE consider seeking financial support from Russia to construct the replacement fossil fuel plants. DOE also expressed concern with our conclusion regarding this matter. Both agencies stated that relying on Russia to fund critical program elements would delay the program, something they did not want to risk. These concerns notwithstanding, we continue to believe that DOE should look for opportunities to have Russia fund a portion of the construction projects as a way to contain costs, which are expected to increase dramatically. DOE plans to pursue obtaining financial support from Russia provided that doing so does not delay the program. We agree with this approach. Both agencies also disagreed with our conclusion that Russia's rejection of key initiatives to reduce the amount of plutonium produced by the reactors and to improve their safety before they are shut down signals Russia's lack of commitment to the nonproliferation and safety goals of the program. Both agencies stated that Russia rejected these initiatives primarily due to its security concerns about granting U.S. officials access to the reactors. In our report, we recognized that Russia's security concerns may have played a role in rejecting the extension of summer outages at the reactors as an option for reducing plutonium production. However, in a November 2003 letter from MINATOM to DOE, Russia did not cite security concerns as a reason for rejecting the proposal.
In fact, as we noted in the report, MINATOM stated that it did "not find it worthwhile to waste efforts on a project for reducing plutonium production prior to the permanent shutdown of the reactors." Instead, MINATOM claimed that it wanted to focus on the earliest possible shutdown of the reactors. As we noted in our report, both U.S. and Russian officials told us that extending summer outages to reduce the current production of weapons-grade plutonium held great promise and would be an easy option to implement. Furthermore, in its comments, DOE stated that it was disappointed in Russia's rejection of the proposal to study ways to reduce the amount of plutonium produced by the reactors as an interim step before they are shut down. Regardless of Russia's basis for rejecting the proposal, it should be noted that the long-standing and ultimate U.S. goal of this program is to reduce and eliminate the production of weapons-grade plutonium in Russia as quickly as possible. From the U.S. perspective, shutting down these reactors is a major nonproliferation objective, and the United States is committing significant resources to this effort. Thus, it seems reasonable to us that Russia should reciprocate and show its commitment to the fundamental nonproliferation tenets of this program. Finally, although DOE and State objected to our characterization of the implications of Russia's decision to reject key DOE initiatives, both agencies agreed with our recommendation that seeking summer outages as a way to reduce plutonium production should continue to be pursued. With regard to the reactors' safety, we noted in the report that they are among the most unsafe in the world and that DOE was prepared to provide a substantial amount of assistance to improve their safety. Russia's rejection of the assistance, regardless of the reasons, raises serious concerns about its commitment to ensuring the reactors' safe operation until they can be shut down. As we noted in the report, DOE and national laboratory officials expressed doubt about whether Russia would perform sufficient safety upgrades on its own. State also objected to what it believed to be our conclusion that final shutdown of the reactors is uncertain because the reactor shutdown and implementing agreements are insufficiently clear regarding the steps to permanently and irreversibly shut down the reactors. We believe that State, in its written response to our draft report, has mischaracterized our conclusion. Specifically, our report cites the lack of clarity in the agreement as one of several challenges that DOE faces that could affect final shutdown. While State disagreed with our conclusion, it agreed with our recommendation that DOE should reach agreement with Russia on the steps that must be taken to shut down the reactors and the specific requirements needed to certify the completion of the fossil fuel plants. State also believes that we overstated the implications of the agreement's lack of accurate shutdown dates. However, State acknowledged that the deadlines for reactor shutdown in the agreement are no longer consistent with current plans and agreed with our recommendation to revise the dates. State also disagreed with our conclusion that worker transition issues have the potential to undermine the program. However, as we noted in our report, Russian officials we spoke with considered the employment of displaced workers their highest priority, and DOE officials acknowledged this as a major concern.
Furthermore, DOE and State agreed with our recommendations to address this problem. DOE and State also provided technical comments, which we incorporated in the report where appropriate. We are sending copies of this report to the Secretary of Energy; the Administrator, National Nuclear Security Administration; the Secretary of State; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, I can be reached at 202-512-3841 or [email protected]. Major contributors to this report are included in appendix XI. This appendix provides information about the cities of Seversk and Zheleznogorsk, the two cities in Russia where the plutonium production reactors are located. Formerly known as Tomsk-7, the closed city of Seversk is located approximately 2,000 miles east of Moscow and about 9 miles northwest of Tomsk, a major industrial city in western Siberia. As of January 2000, the city had approximately 119,000 residents. In addition to plutonium production, a number of other nuclear-related activities have been carried out in Seversk, including the fabrication of uranium and plutonium weapons components. Recently, Seversk was selected to be the site for a planned facility that will dispose of 34 metric tons of Russia's weapons-grade plutonium by converting it into mixed oxide fuel. The planned conversion facility will be built with Department of Energy funding. Nonnuclear activities at the city include an oil refinery operation. Seversk is the location of the Siberian Chemical Combine, which is responsible for operating the plutonium production reactors. Construction of the Siberian Chemical Combine facilities began in 1949, and on July 26, 1953, the first output of enriched uranium-235 was produced. Since its inception, the Siberian Chemical Combine has housed the Siberian Atomic Power Station; a chemical separation plant; facilities for plutonium processing, blending, and pit fabrication; an enrichment plant; and nuclear waste management facilities. The first of the plutonium production reactors at Seversk came online in 1955, and by the 1970s five such reactors were operating at the site. Three of the reactors were shut down in the early 1990s. The city's two remaining weapons-grade plutonium production reactors began operation in 1965 and 1968 and continue to provide heat and electricity to Seversk and the neighboring city of Tomsk. Currently, the Siberian Chemical Combine employs about 15,000 workers, most of whom are highly skilled nuclear experts. Zheleznogorsk is located approximately 2,500 miles east of Moscow and about 35 miles north of the city of Krasnoyarsk. As of early 2000, the city had a population of 103,000. Formerly known as Krasnoyarsk-26, the city was built to house the employees of the Mining and Chemical Combine, a complex engaged in producing and processing weapons-grade plutonium. Both the city and the Mining and Chemical Combine are located on the east bank of the Yenisei River in Siberia. In 1996, the residents of Zheleznogorsk voted to remain a closed city in an attempt to maintain the clean, village-like quality of their community amid harsher, more environmentally damaged towns. Since the end of the Cold War, the technical workforce in Zheleznogorsk has dropped, and the city is now in difficult financial straits.
Zheleznogorsk has tried to diversify its economy through forays into satellite building and television assembly. The three plutonium production reactors operated by the Mining and Chemical Combine were built in a huge cavern approximately 250 meters beneath a mountain. Over 60,000 prisoners were forced to excavate the chambers containing the reactors when work began in 1950, but in 1953 over 100,000 military construction workers replaced these prisoners. Two of the reactors began operating in 1958 and 1961 but were both shut down in 1992. A third plutonium production reactor, active since 1964, still functions to provide heat and electricity to the city. The Mining and Chemical Combine currently employs about 9,500 workers who, in addition to plutonium production, are involved in other nuclear-related activities, including the stockpiling of plutonium. We performed our review of DOE's Elimination of Weapons-Grade Plutonium Production program at DOE's offices in Germantown, Maryland; DOE headquarters in Washington, D.C.; the Defense Threat Reduction Agency in Ft. Belvoir, Virginia; the Department of State (State) in Washington, D.C.; the National Security Council in Washington, D.C.; the Nuclear Regulatory Commission in Rockville, Maryland; and Moscow, Seversk, and Zheleznogorsk, Russia. To assess the progress of DOE's recent efforts to shut down Russia's three remaining plutonium production reactors, we reviewed documents and had discussions with officials from the Department of Defense (DOD); DOE; State; the National Security Council; the Nuclear Regulatory Commission; the Pacific Northwest National Laboratory (PNNL); the National Energy Technology Laboratory; the U.S. Trade and Development Agency; DOE's U.S. contractors—Washington Group International (WGI) and Raytheon Technical Services (Raytheon); and a number of nongovernmental entities, including nonproliferation and fossil fuel experts. In September 2003, we visited Russia to interview Russian officials and to see the sites for the replacement fossil fuel plants DOE plans to fund. While in Moscow, we spoke with officials from the Ministry of Atomic Energy of the Russian Federation; Rosatomstroi; the Kurchatov Institute, a leading Russian nuclear design institute; and Gosatomnadzor, the Russian nuclear regulatory agency. These officials provided Russia's views of DOE's program to build replacement fossil fuel plants and its efforts to shut down the reactors. We visited Zheleznogorsk and spoke with officials from the Mining and Chemical Combine, the city government, the planned fossil fuel plant, and the operators of the reactor. We toured the site of the planned fossil fuel plant and observed the current condition of the buildings at the site. We visited Seversk and interviewed officials from the Siberian Chemical Combine, the city government, operators of the reactors, and operators of the existing fossil fuel plant that DOE plans to refurbish. We toured the site of the existing fossil fuel plant and observed its current condition. To assess DOE's management of the program, we examined documents from DOE and DOE's U.S. contractors—WGI and Raytheon. We interviewed officials from DOE's Office of Engineering and Construction Management and from the Elimination of Weapons-Grade Plutonium Production program.
In addition, while in Russia, we obtained views on DOE's management of the program from a number of Russian officials from the Ministry of Atomic Energy of the Russian Federation; Rosatomstroi; the Kurchatov Institute; Gosatomnadzor; the Mining and Chemical Combine; the Siberian Chemical Combine; and the city governments of Zheleznogorsk and Seversk. To identify challenges DOE faces in implementing its program, we examined documents from DOE, the Nuclear Regulatory Commission, PNNL, the National Energy Technology Laboratory, DOE's U.S. contractors—WGI and Raytheon—and several nongovernmental entities, including nonproliferation and fossil fuel experts. To describe the proposed upgrades DOE planned to fund to improve the safety of the reactors while replacement fossil fuel plants were being built, we reviewed documents from DOE, the Nuclear Regulatory Commission, and PNNL. We also interviewed nuclear safety officials from DOE, the Nuclear Regulatory Commission, the Department of State, and PNNL. To determine the amount of money spent on U.S. efforts to eliminate weapons-grade plutonium production in Russia prior to the program's transfer from DOD to DOE in December 2002, we analyzed documents and spoke with officials from DOE, DOD, the Department of State's Nonproliferation and Disarmament Fund, the U.S. Trade and Development Agency, PNNL, and the Nuclear Regulatory Commission. Dollar amounts for the historical spending on these efforts were adjusted to constant fiscal year 2003 dollars to reflect trends in inflation over time. Because they are being used for background purposes only, we did not assess the reliability of these historical data. To determine how much DOE had spent through December 31, 2003, on its efforts to eliminate weapons-grade plutonium production in Russia and DOE's projected costs to implement the program, we reviewed DOE's cost and schedule estimates for the replacement fossil fuel plants, interviewed appropriate agency officials, and posed a number of questions to DOE to determine the reliability of the financial data provided to us. We determined that the data were sufficiently reliable for the purposes of this report based on work we performed to assure the data's reliability. Specifically, we (1) met numerous times with program officials to discuss these data in detail; (2) obtained from key database officials responses to a series of questions focused on data reliability covering issues such as data entry access, internal control procedures, and the accuracy and completeness of the data; and (3) added follow-up questions whenever necessary. We conducted our review between June 2003 and April 2004 in accordance with generally accepted government auditing standards. Plutonium is a byproduct of the nuclear fuel cycle and is produced by all nuclear reactors. Weapons-grade plutonium, however, contains a high content of plutonium-239, which is the most suitable isotope for use in nuclear weapons. Plutonium of this type is formed in the Russian production reactors as a component of highly radioactive spent reactor fuel. Although at this point the plutonium is relatively protected against proliferation because it is diluted and surrounded by the highly radioactive spent fuel, it cannot be safely stored in this form for long periods in the "wet storage" areas at the reactors because of the risk of corrosion and cracking in the aluminum fuel cladding.
The plutonium is taken to another facility where it is chemically separated from the spent fuel in an operation called "reprocessing." There is also an optimal time to reprocess the spent nuclear fuel: reprocess too soon, and the fuel is highly radioactive; reprocess too late, and the fuel can contaminate the spent fuel pool. Although the reprocessed fuel requires containment and is easily incorporated into weapons, it is also easier and less expensive to store than spent fuel. Figure 3 illustrates the plutonium production cycle. Figure 5 shows the project acquisition process and critical decision (CD) points used in the DOE order 413.3 program management structure, which DOE has adopted for the program. As figure 5 shows, the five CD points are: (1) CD-0, approve mission need; (2) CD-1, approve preliminary baseline range; (3) CD-2, approve performance baseline; (4) CD-3, approve start of construction; and (5) CD-4, approve start of operation or project closeout. Figure 5 also shows the prerequisite documentation and project milestones, such as the acquisition and project execution plans, which must be provided before critical decision approval can be granted. DOE officials believe that this management approach will help improve program oversight and accountability. The fossil fuel plant construction projects at Seversk and Zheleznogorsk gained approval of mission need (CD-0) from the Deputy Secretary of Energy in December 2002. DOE officials told us that the Seversk project would proceed to CD-1 in April/May 2004 and to CD-2 near the end of fiscal year 2004. The Zheleznogorsk project is expected to move to CD-1 in August 2004 and to CD-2 in the second quarter of fiscal year 2005. This appendix describes how electricity and heat are produced by a coal-fired power plant. Although the plant described is not identical to the ones that will be constructed in Russia, the description is generic and can generally be applied to all coal-fired power plants. Figure 6 shows how electricity is produced by a coal-fired power plant. Coal is pulverized into a fine powder as it leaves the coal bin. That powder is blown into a boiler, where it is ignited. The walls of the boiler contain miles of tubing, through which water is circulated. Heat from the burning coal turns the water into steam. The steam passes through piping to a turbine. The steam is directed against the blades of the turbine, causing it to spin. The turbine shaft turns, rotating the generator, which creates electricity. After the steam is directed against the blades, it goes to a condenser beneath the turbine. Cool water in the condenser turns the steam back into water. The water is pumped back into the boiler tubes to be heated into steam again. Large fans blow air into the boiler to support the combustion of coal. Some of the air is directed to the pulverizer, where it helps dry the coal and carry it to the boiler. Coal ash drops to the bottom of the boiler for disposal. Hot gases escape from inside the boiler. Impurities are removed from these gases through scrubbing systems before they are released through the stack. As part of the March 2003 reactor shutdown agreement signed by DOE and the Ministry of Atomic Energy of the Russian Federation, DOE pledged to improve the safe operation of Russia's three remaining plutonium production reactors until they can be shut down.
Prior to Russia's decision to reject DOE assistance to improve the safety of the reactors, DOE had allocated $21 million to support 28 safety upgrade projects, including fire safety system improvements, enhancements to emergency electrical power systems, and risk assessments. DOE planned to complete the safety upgrades within 24 months in order to improve the safety of the reactors during their remaining lifetime. To oversee the safety upgrade projects, DOE selected PNNL, which had managed prior efforts under DOD to modify the reactors' cores and was thus familiar with the reactors and their design and safety problems. DOE selected the safety upgrade projects after determining that none of them would extend the operating life of the reactors. DOE chose the 28 projects out of a larger list of 40 projects that Russian reactor officials submitted. According to DOE and PNNL officials, some of the reactor upgrades that Russia initially proposed were rejected because they were potentially life extending or would require too much time to implement. For example, DOE rejected Russian upgrades to improve the primary coolant pipes of the reactors due to concerns that such improvements would be life extending. Table 2 provides information about each of DOE's planned upgrades to improve the safety of Russia's three plutonium production reactors. Russia's three remaining weapons-grade plutonium production reactors are among the most dangerous reactors currently operating in the world. All three of the reactors were built using old designs derived from the original reactor run by Enrico Fermi in the 1940s. According to officials from MINATOM, the Russian nuclear regulatory agency Gosatomnadzor, and the Kurchatov Institute—the leading civilian nuclear research institute in Russia—the reactors must be shut down by 2010. However, the reactor managers at Seversk believed that continuous repairs to the reactors over the years have extended their operating life and that further safety upgrades could allow the reactors to operate until 2014. In our view, all three reactors are showing the wear of having been run for a very long time at very high output, and all have had accidents—some as recently as 5 years ago. The safety risks posed by these reactors are a function of three factors: (1) all three reactors have been running at a very high output, producing both high temperatures and high neutron flux (the number of neutrons passing through a sphere one square centimeter in cross-section during a unit of time) for their entire lives; (2) all three reactors have run approximately twice as long as they were originally designed to operate; and (3) none of these reactors meets current reactor safety standards. The danger that these reactors present is the risk of a catastrophic reactor failure—such as a loss-of-coolant accident—which would result in a fire expelling the highly enriched uranium fuel and its fission byproducts such as plutonium and strontium-90, all of which are highly toxic and carcinogenic. The danger from such a fire is that radioactive particles would be dispersed and breathed into the body, causing either kidney damage from particles of uranium or cancer from particles of strontium-90 and plutonium. All three reactors are designed to run at rated power, which is the original power output level of a reactor in terms of thermal output (t) and electrical output (e). According to Gosatomnadzor, the rating for the reactors is 800 megawatts (t) each.
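To put these power levels in perspective, the figures cited in this appendix (the Gosatomnadzor rating above and the estimates discussed below) imply the following simple arithmetic. This is a rough back-of-the-envelope check, not an engineering analysis:

```latex
% Rated thermal power per reactor, per Gosatomnadzor: P_rated = 800 MW(t)
% A 20 percent uprate gives the elevated level Gosatomnadzor reported:
\[
P_{\text{uprated}} = 1.2 \times P_{\text{rated}} = 1.2 \times 800~\text{MW(t)} = 960~\text{MW(t)}
\]
% Our estimate of the maximum sustained operating level implies roughly
% three times the original rating:
\[
\frac{P_{\text{estimated}}}{P_{\text{rated}}} \approx \frac{2{,}500~\text{MW(t)}}{800~\text{MW(t)}} \approx 3.1
\]
```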
A Gosatomnadzor official informed us that the reactors could run at 20 percent higher than their original rating, or at 960 megawatts (t), and that the reactors had run at this elevated level during a 20-year period. In our opinion, Gosatomnadzor's estimates are probably conservative because, based on the amount of fuel that can be used by the reactors and the fuel type that has been used, each reactor has the ability to run at a power level of up to 2,500 megawatts (t) and has been run at a power level of at least 2,000 megawatts (t). If these reactors were originally designed to run at 800 megawatts (t), then they may have run at three times their original design rating. This will significantly shorten the operating life of the reactors, which makes their continued operation risky. Russian officials we met disagreed over both the original life spans of the reactors and how much longer the reactors can be operated before the risk of a catastrophic failure becomes too high. According to officials from MINATOM, Gosatomnadzor, and the Kurchatov Institute, the original life span for each of the reactors was 25 years. However, according to the operators of the three Russian reactors, the original reactor life spans were 20 years. All three reactors have operated for approximately 40 years, or roughly twice as long as originally designed. Gosatomnadzor confirmed that the design life is a function of (1) the graphite cladding, which forms the outer, protective layer of the fuel elements, and (2) the steel containment that surrounds the core. If it is assumed that the reactors have operated at 2,500 megawatts (t) for 20 years, then the original design life of 20 years could be reduced by up to 10 years, since the superheating of the graphite and the high neutron flux at the core center will cause much higher degradation than if the reactors were run at their rated power levels for their entire lives. These two factors will make the integrity of both the graphite cladding and the steel containment highly questionable and will increase the risk of a catastrophic failure of the reactors. None of the reactors meets current Russian, U.S., or international safety standards because they lack modern safety controls and are therefore dependent on direct operator intervention for both monitoring and safety. MINATOM, Gosatomnadzor, and the Kurchatov Institute told us that the personnel working at the reactors pose a safety threat because the quality of the reactor staff is weakening due to attrition and old age. According to Gosatomnadzor, the average age of reactor workers is 50, and the reactors are experiencing an increased number of temporary emergency shutdowns due to operator error, not the technology itself. Conversely, reactor officials at Seversk were concerned that the attrition of older workers will result in the loss of the knowledge and abilities of people who have become familiar with the reactors over many years. Although the Russian organizations disagree over the cause of the increased rate of accidents, there is a consensus that workforce attrition will have an impact on the safe operation of the reactors because the reactor workers are likely the first and last line of defense against reactor accidents. Jim Shafer, 202-512-3841. In addition to the individual listed above, R. Stockton Butler, Nancy Crothers, Glen Levis, Steve Rossman, and Keith Rhodes, GAO's Chief Technologist, made key contributions to this report.
Russia's continued operation of three plutonium production reactors poses a serious proliferation threat. The Department of Energy's (DOE) Elimination of Weapons-Grade Plutonium Production program seeks to facilitate the reactors' closure by building or refurbishing replacement fossil fuel plants. This report (1) describes DOE's efforts to manage and implement the program, (2) assesses the challenges DOE faces in achieving its goal of shutting down the reactors, and (3) identifies DOE's current expenditures and projected program costs. DOE is financing and managing the construction of two fossil fuel plants in Russia that will replace the heat and electricity that will be lost with the shutdown of Russia's three plutonium production reactors. DOE (1) has developed an overall plan to manage its program, (2) has selected two U.S. contractors to oversee the construction of replacement fossil fuel plants, and (3) is working with its U.S. contractors to review specific design and construction plans for the plants. DOE officials expressed concern that the number of organizations involved in the program, 17, makes coordination difficult and has led to delays. Additionally, DOE and U.S. contractor officials said that the primary Russian contractor may not have adequate experience and currently lacks enough staff to implement its part of the program. Final shutdown of the reactors is uncertain because DOE faces a number of challenges in implementing its program, including (1) ensuring Russia's commitment to the nonproliferation and safety goals of the program, (2) clarifying the existing reactor shutdown agreement, and (3) working with Russia to find employment for thousands of Russian nuclear workers who will lose their jobs when the reactors are closed. Russia's rejection of DOE's proposals to reduce the amount of plutonium produced by the reactors and to improve the safety of the reactors before they are shut down raises serious questions about Russia's commitment to key program goals. Furthermore, the existing reactor shutdown agreement contains shutdown dates that do not reflect DOE's planned program schedule. Finally, the challenge of finding employment for Russian nuclear workers could undermine the program by creating the potential for Russia to continue operating the reactors longer than necessary to ensure jobs for the workers. DOE has not developed a plan to address this issue. As of December 31, 2003, DOE had spent $7.8 million--about 4 percent of available funds--on planning and developing the program, including travel, overhead, project administration, and document translation costs. Regarding future program costs, DOE officials told us that they expect the projected costs to build the replacement fossil fuel plants to be significantly higher than their original estimate of $466 million, possibly as much as $1 billion.
Afghanistan is a unique country, with development, security, and infrastructure issues and needs that differ from Iraq's. As a result, CERP efforts in Afghanistan are frequently focused on development and construction, whereas in Iraq the focus of CERP is reconstruction of neglected or damaged infrastructure. The program has evolved over time in terms of the cost and complexity of projects, and the number of projects costing more than $500,000 in Afghanistan has reportedly increased from 9 in fiscal year 2004 to 129 in fiscal year 2008. As the program has matured, projects have become more complex, evolving from small-scale projects such as wells costing several thousand dollars, to a boys' dormitory construction project costing several hundred thousand dollars, to roads costing several million dollars. For example, of the $486 million that DOD obligated on CERP projects in fiscal year 2008, about $281 million was for transportation, which was largely for roads. CJTF-101 guidance identifies the individuals authorized to approve CERP projects based on the estimated cost of the project (see table 1). As shown in the table, 90 percent of the CERP projects executed in Afghanistan in fiscal year 2008 cost $200,000 or less. Management and execution of the CERP program are the responsibility of officials at CJTF-101 headquarters, the brigades, and the PRTs. CJTF-101 personnel include the CERP manager, who has primary day-to-day responsibility for the program; a staff attorney responsible for reviewing all projects with a value of $200,000 or more; and a resource manager responsible for, among other things, maintaining CERP training records and tracking CERP obligations and expenditures. In addition, CJTF-101 guidance assigns responsibilities to the various staff sections, such as engineering, medical, and contracting, when specific projects require it. For example, the command engineering section is tasked with reviewing construction projects over $200,000, including reviewing plans for construction and project quality-assurance plans, and with participating in the CERP review boards. Similarly, the command's surgeon general is responsible for coordinating all plans for construction, refurbishment, or equipping of health facilities with the Afghanistan Minister of Health and evaluating all project nominations over $200,000 that relate directly to healthcare or the healthcare field. Brigade commanders are responsible for the overall execution of CERP in their areas of responsibility and are tasked with a number of responsibilities, including identifying and approving CERP projects, appointing project purchasing officers and paying agents, and ensuring that proper management, reporting, and fiscal controls are established to account for CERP funds. In addition, the brigade commander is responsible for ensuring that project purchasing officers and paying agents receive training and ensuring that all personnel comply with CERP guidance. Additional personnel in the brigade are tasked with specific day-to-day management of the CERP program for the brigade commander. Table 2 details the activities of key individuals tasked with executing and managing CERP at the brigade level. In addition to those tasked with day-to-day responsibility, others at the brigade have a role in the CERP process.
For example, the brigade attorney is responsible for reviewing project nominations to ensure that they are legally sufficient and in compliance with CERP guidelines, and the brigade engineer is tasked with providing engineering expertise, including reviewing projects and assisting with oversight. DOD is statutorily required to provide Congress with quarterly reports on the source, allocation, and use of CERP funds. The reports are compiled based on information about the projects that is entered by unit officials into the Combined Information Data Network Exchange, a classified DOD database that not only captures operations and intelligence information but also tracks information on CERP projects such as project status, project start and completion dates, and dollars committed, obligated, and disbursed. This database is the third that DOD has used since 2006 to track CERP projects in Afghanistan. According to a military official, some historical data on past projects were lost during the transfer of this information from previous database systems. CERP information is now available in an unclassified format to members of PRTs and others who have access to a network that can be used to share sensitive but unclassified information. U.S. efforts to enhance Afghanistan's development are costly and require some complex projects, underscoring the need to effectively manage and oversee the CERP program, including effectively managing and overseeing contracting as well as contractor efforts. During our review, we identified problems with the availability of personnel to manage and oversee CERP, as well as with the sufficiency of training on CERP. Although DOD has used CERP funds to construct roads, schools, and other projects that commanders believe have provided benefits to the Afghan people, DOD faces significant challenges in providing adequate management and oversight of CERP because of an insufficient number of trained personnel to execute and manage the program. We have frequently reported on several long-standing problems facing DOD as it uses contractors in contingency operations, including inadequate numbers of trained management and oversight personnel. Our previous work has shown that high-performing organizations routinely use current, valid, and reliable data to make informed decisions about current and future workforce needs, including data on the appropriate number of employees, key competencies, and skill mix needed for mission accomplishment, and appropriate deployment of staff across the organization. DOD has not conducted a workforce assessment of CERP to identify how many military personnel are needed to effectively and efficiently execute and oversee the program. Rather, commanders determine how many personnel will manage and execute CERP. Personnel at all levels, including headquarters and unit personnel we interviewed after they returned from Afghanistan or who were in Afghanistan in November 2008, expressed a need for more personnel to perform CERP program management and oversight functions. Due to a lack of personnel, key duties, such as performing headquarters staff assistance visits to help units improve contracting procedures and site visits to monitor project status and contractor performance, were either not performed or not performed consistently. At the headquarters level, at the time of our review, CJTF-101 had designated one person to manage the day-to-day operations of CERP.
Among many other tasks outlined in the CJTF-101 CERP guidance, the CJTF-101 CERP manager was responsible for conducting training for PPOs and PAs, providing oversight of all projects, ensuring proper coordination for all projects with the government of Afghanistan, validating performance metrics, ensuring that all project information is updated monthly in the command's electronic database, and conducting staff assistance visits semiannually or as requested by brigades. Staff assistance visits are conducted to assist units by identifying any additional training or guidance that may be required to ensure consistency in program execution. According to documents we reviewed, staff assistance visits conducted in the past have uncovered problems with project documentation, adherence to project guidelines, and project tracking, among others. The CJTF-101 CERP manager we interviewed during our visit to Afghanistan stated that he spent most of his time managing the headquarters review process for projects costing more than $200,000 and was unable to carry out his full spectrum of responsibilities, including conducting staff assistance visits. After our November 2008 visit to Afghanistan, CJTF-101 added additional personnel to manage CERP on a full-time basis. Headquarters and brigade-level personnel responsible for CERP also expressed a need for additional personnel at brigades to perform essential functions from program management to project execution. For example: CJTF-101 guidance assigns a number of responsibilities for executing CERP, including project monitoring and oversight, to military personnel; however, according to unit officials we spoke with, tasks such as completing project oversight and collecting metrics on completed projects are often not accomplished due to a lack of personnel. In a July 2008 memorandum to CENTCOM, the CJTF-101 commanding general noted that in some provinces, units have repositioned or are unable to do quality-assurance and quality-control checks due to competing missions and security risks. Furthermore, according to military officials from units that had deployed to Afghanistan, project oversight is frequently not provided because units lack the personnel needed to conduct site visits and ensure compliance with CERP contracts. For example, according to one CERP manager we spoke with, his unit was not able to provide oversight of 20 of the 27 CERP projects because it was often difficult to put together a team to conduct site visits due to competing demands for forces. Similarly, the competing demands for forces made it difficult for units to visit completed projects and determine the effectiveness of the projects as required by CERP guidance. CJTF-101 guidance also requires units to consult subject-matter experts, such as engineers, when required. However, military officials stated that there is a lack of subject-matter experts to consult on some projects. For example, military personnel stated that agriculture experts are needed to assist on agriculture projects, and more public health officials are needed. A commander from one task force stated that his soldiers were not qualified to monitor and assess clinics because they did not have the proper training.
Furthermore, several officials we spoke with, including officials at the CJTF-101 headquarters, noted that they needed additional civil/military affairs personnel to do project assessments both before projects are selected, to determine which projects would be most appropriate, and after projects are completed, to measure the effectiveness of those projects. We recently reported that the lack of subject-matter experts puts DOD at risk of being unable to identify and correct poor contractor performance, which could affect the cost, completion, and sustainability of CERP projects. According to DOD policy, members of the Department of Defense shall receive, to the maximum extent possible, timely and effective individual, collective, and staff training, conducted in a safe manner, to enable performance to standard during operations. CERP familiarization training may be provided to Army personnel before deployment; however, according to several Army officials, units frequently do not know who will be responsible for managing the CERP program until after they arrive in Afghanistan, so task-specific training is generally not included in predeployment training. Others, such as PPOs, receive training after they arrive in Afghanistan. However, personnel assigned to manage and execute CERP had little or no training on their duties and responsibilities, and personnel we spoke with in Afghanistan and those who had recently returned from Afghanistan believed they needed more quality training in order to perform their missions effectively. For example: One of the attorneys responsible for reviewing and approving CERP projects received no CERP training before deploying. Unsure of how to interpret the guidance, the attorney sought clarification from higher headquarters, which delayed project approval. Personnel from a U.S. Marine Corps unit that deployed to Afghanistan reported that they received no training on CERP prior to deployment and believed that such training would have been helpful to ensure that projects they selected would provide long-term benefits to the population in their area of operation. Army training on CERP consisted of briefing slides that focused on the authorized and unauthorized uses of CERP but did not discuss how to complete specific CERP responsibilities such as project selection, developing a statement of work, selecting the appropriate contract type, or providing the appropriate types and levels of contract oversight. Additionally, according to officials from brigades we spoke with in Afghanistan, they received little or no training on their CERP responsibilities after arriving in-theater. Military officials from PRTs also noted that they received little training on CERP prior to deploying to Afghanistan and felt that additional training was needed so that they could more easily perform their CERP duties. In some cases, personnel told us that working with their predecessors during unit rotations provided them with sufficient training. However, not all personnel have that opportunity. Our reports, as well as recent reports from others, have highlighted the difficulties associated with contracting in contingency operations, particularly for personnel with little contracting experience. DOD's Financial Management Regulation allows contracting officers to delegate to PPOs the authority to obligate funds for CERP contracts for projects valued at less than $500,000.
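To make the interaction of these dollar thresholds concrete, the minimal sketch below routes a hypothetical project nomination using only the thresholds cited in this report: the $200,000 headquarters review triggers in CJTF-101 guidance and the $500,000 ceiling on PPO obligation authority in DOD's Financial Management Regulation. The function and names are illustrative, not an actual DOD system:

```python
# Minimal sketch of the dollar-threshold routing described in this report.
# Thresholds come from CJTF-101 guidance and DOD's Financial Management
# Regulation as cited above; the function and names are hypothetical.

PPO_OBLIGATION_LIMIT = 500_000   # PPOs may obligate funds only below this
HQ_LEGAL_REVIEW_FLOOR = 200_000  # staff attorney reviews projects at/above this

def reviews_required(estimated_cost: int, is_construction: bool = False) -> list[str]:
    """Return the headquarters reviews a CERP project nomination would trigger."""
    reviews = []
    if estimated_cost >= HQ_LEGAL_REVIEW_FLOOR:
        reviews.append("CJTF-101 staff attorney legal review")
    if is_construction and estimated_cost > 200_000:
        reviews.append("command engineering review of construction and QA plans")
    if estimated_cost >= PPO_OBLIGATION_LIMIT:
        reviews.append("obligation by a warranted contracting officer (beyond PPO authority)")
    return reviews

# A $150,000 well triggers none of these reviews; a $350,000 construction
# project triggers both legal and engineering review.
print(reviews_required(150_000))
print(reviews_required(350_000, is_construction=True))
```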
Additionally, PPOs are involved in other activities such as writing the statement of work for each project, ensuring that the project is completed to contract specifications, and completing contract closeout. During our visit to Afghanistan, we observed PPO training provided by the principal assistant responsible for contracting in Afghanistan. The training consisted of a 1-hour briefing, which included a detailed discussion of CERP guidance but did not provide detailed information on the duties of the PPO. For example, according to CJTF-101 guidance, contracts are to be supported by accurate cost estimates; however, the PPO briefing does not provide training on how to develop these estimates. All of the contracting officers we spoke with believe that the training brief provided is insufficient and noted that, unlike PPOs, who have less training but more authority under CERP, warranted contracting officers have at least 1 year of experience and are required to take a significant amount of classroom training before they are allowed to award any contracts. Moreover, some PPOs we spoke with stated that they needed more training. Military officials at both the brigade and CJTF-101 levels told us that inadequate training has led to some common mistakes in CERP contracts and CERP project files. For example, officials from PRTs, brigades, and the CJTF-101 level noted that statements of work often are missing key contract clauses or include clauses that are not appropriate and require revision. A training document provided by the principal assistant responsible for contracting identified several important clauses that are commonly omitted by PPOs, including termination clauses, progress schedule clauses, and supervision and quality control clauses. As we have reported in the past, poorly written contracts and statements of work can increase the department's cost risk and could result in the department paying for projects that do not meet project goals or objectives. Additionally, several officials at CJTF-101 with responsibilities for CERP also noted that project packages sent to the headquarters for review were often incomplete or incorrect, thereby slowing down the CERP project approval process and increasing the workload of the CERP staff at both the headquarters and unit levels. For example, the CJTF-101 official responsible for reviewing all projects valued at $200,000 or more noted that most of the project packets he reviewed had to be returned to the brigades because the packets lacked key documents, signatures, or other required information. Finally, the lack of training affects the quality of the oversight provided and can increase the risk of fraud. To illustrate, the Principal Deputy Inspector General of the Department of Defense testified in February 2009 that contingency contracting, specifically the Commander's Emergency Response Program, is highly vulnerable to fraud and corruption due to a lack of oversight. He went on to state that "it would appear that even a small amount of contract training provided through command channels and some basic ground-level oversight that does not impinge on the CERP's objective would lower the risk in this susceptible area." DOD and USAID participate in various mechanisms to facilitate coordination but lack information that would provide greater visibility over all U.S. government development projects in Afghanistan. Teams have been formed in Afghanistan that integrate U.S. government civilians and military personnel to enhance coordination among U.S.
agencies executing development projects in Afghanistan. For example, for projects involving roads, DOD and USAID officials have set up working groups to coordinate road construction, and both agencies agreed that coordination on roads was generally occurring. Additionally, a USAID member is part of the PRT and sits regularly with military colleagues to coordinate and plan programming, according to USAID officials. Those same officials stated that this has resulted in joint programming and unity of effort, marrying CERP and USAID resources. Military officials we spoke with from several brigades also stated that coordination with the PRTs was good. Further, a USAID representative is located at the CJTF-101 headquarters and acts as a liaison to help coordinate projects costing $200,000 or more. Also, in November 2008, the Integrated Civilian-Military Action Group, which consists of representatives from the Department of State, USAID, and U.S. Forces-Afghanistan, was established at the U.S. Embassy in Kabul to help unify U.S. efforts in Afghanistan through coordinated planning and execution, according to a document provided by USAID. The role of the Integrated Civilian-Military Action Group, which is expected to meet every 3 weeks, is to establish priorities and identify roles and responsibilities for both long-term and short-term development. Any decisions made by this group are then presented to the Executive Working Group (a group of senior military, State Department, and USAID officials) for approval. According to USAID officials, the Executive Working Group is empowered by the participating organizations to engage in coordinated planning and execution, provide guidance that synchronizes civilian and military efforts, convene interagency groups as appropriate, monitor and assess the implementation and impact of integrated efforts, and recommend course changes to achieve U.S. government goals in support of the Government of the Islamic Republic of Afghanistan and of achieving stability in Afghanistan. Despite these interagency teams, military and USAID officials lack a common database that would promote information sharing and facilitate greater visibility of all development projects in Afghanistan. At the time of our review, development projects in Afghanistan were not tracked in a single database that was accessible by all parties conducting development in the country. For example, the military uses a classified database—the Combined Information Data Network Exchange—to track CERP projects and other information. In early 2009, USAID officials were granted access to an unclassified portion of this database, providing them with information on the military's CERP projects, including project title, project location, project description, and the name of the unit executing the project, among other information. USAID officials, on the other hand, use a database called GEOBASE to track their development projects, and myriad other databases are used to track individual development efforts. USAID officials stated that they did not believe military officials had access to GEOBASE. However, in our 2008 review of Afghanistan road projects, we reported that there was a DOD requirement to provide CERP project information to USAID via the GEOBASE system to provide a common operating picture of reconstruction projects for U.S.-funded efforts. We found that this was not being done for the CERP-funded road projects and recommended that DOD do so, to which DOD concurred.
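One way to picture the common database that military and USAID officials lack is as a single shared record type. The sketch below is hypothetical, not USAID's planned design; its fields simply mirror the CERP data elements this report says CIDNE tracks (title, location, description, executing unit, status, dates, and dollars committed, obligated, and disbursed):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for a shared, unclassified development-project
# database; illustrative only, not USAID's actual schema.

@dataclass
class DevelopmentProject:
    title: str
    location: str
    description: str
    executing_unit: str                    # military unit or implementing agency
    funding_source: str                    # e.g., "CERP" or "USAID"
    status: str                            # e.g., "ongoing" or "complete"
    start_date: Optional[date] = None
    completion_date: Optional[date] = None
    dollars_committed: float = 0.0
    dollars_obligated: float = 0.0
    dollars_disbursed: float = 0.0

# Example: a CERP-funded road project record that a USAID planner could
# query before nominating a nearby project.
road = DevelopmentProject(
    title="District road construction",
    location="Afghanistan (province-level entry)",
    description="CERP-funded road segment",
    executing_unit="brigade task force",
    funding_source="CERP",
    status="ongoing",
)
print(road.title, road.funding_source)
```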
At the time of our review, the requirement to input CERP project information into that database was not included in the most recent version of the CJTF-101 standard operating procedure. In a memorandum to CENTCOM, the commanding general of CJTF-101 noted that data on various development projects in Afghanistan are maintained in a wide range of formats, making CERP data the only reliable data for the PRTs. In January 2009, USAID initiated a project to develop a unified database to capture reliable and verified data for all development projects in Afghanistan and make it accessible to all agencies engaging in development activities in the country. The goal for the database is to create visibility of development projects for all entities executing projects in Afghanistan in a single place. However, plans are preliminary, and a number of questions remain, including how the database will be populated and how its development will be funded. USAID officials told us that they have been coordinating with CJTF-101 civil affairs officials about the development of the database and plan to hold a meeting in April 2009 to discuss recommendations for its development and to obtain input about the database from other U.S. government agencies. While USAID officials have conducted some assessments for the development of the centralized database, no specific milestones have yet been established for when that database will be complete. Without clear goals and a method to judge the progress of this initiative, it is unclear how long this project might take or whether it will ever be completed. The expected surge in troops and expected increase in funding for Afghanistan heighten the need for an adequate number of trained personnel to execute and oversee CERP. With about $1 billion worth of CERP funds already spent to develop Afghanistan, it is crucial that individuals administering and executing the program be properly trained to manage all aspects of the program, including management and oversight of the contractors used. If effective oversight is not conducted, DOD is at risk of being unable to verify the quality of contractor performance, track project status, or ensure that the program is being conducted in a manner consistent with guidance. Without such assurances, DOD runs the risk of wasting taxpayer dollars, squandering opportunities to positively influence the Afghan population, and diminishing the effectiveness of a key program in the battle against extremist groups, including the Taliban. Although coordination mechanisms are in place to help increase visibility, eliminate project redundancy, and maximize the return on U.S. investments, the U.S. government lacks an easily accessible mechanism to identify previous and ongoing development projects. Without a mechanism to improve the visibility of individual development projects, the U.S. government may not be in a position to fully leverage the resources available to develop Afghanistan and risks duplicating efforts and wasting taxpayer dollars. We recommend that the Secretary of Defense direct the commander of U.S. Central Command to evaluate workforce requirements and ensure adequate staff to administer CERP, and to establish training requirements for CERP personnel administering the program, to include specific information on how to complete their duties and responsibilities. We further recommend that the Secretary of Defense and the Administrator of USAID collaborate to create a centralized project-development database for use by U.S.
government agencies in Afghanistan, including establishing specific milestones for its development and implementation. In written comments on a draft of this report, DOD partially concurred with two of our recommendations and concurred with one. These comments are reprinted in appendix II. DOD partially concurred with our recommendation to require U.S. Central Command to evaluate workforce requirements and ensure adequate staff to administer the Commander's Emergency Response Program (CERP). DOD acknowledged the need to ensure adequate staff to administer CERP and noted that since our visit, U.S. Forces-Afghanistan had added personnel to manage the program on a full-time basis. Because of the actions already being taken, DOD believed that no further action was warranted at this time but stated that it would monitor the situation and respond as required. Although steps have been taken to improve management and oversight of CERP in Afghanistan, we still believe that CENTCOM should conduct a workforce assessment to identify the number of personnel needed to effectively manage and oversee the program. As we described in the report, in the absence of such an assessment, commanders determine how many personnel will manage and execute CERP. As commanders rotate in and out of Afghanistan, the number of people they assign to administer and oversee CERP could vary. Therefore, to ensure consistency, we continue to believe that CENTCOM, rather than individual commanders, should assess and determine the workforce needs for the program. DOD partially concurred with our recommendation to establish training requirements for CERP personnel administering the program, to include specific information on how to complete their duties and responsibilities. DOD acknowledged the need for training for CERP personnel administering the program and stated that since our visit, U.S. Forces-Afghanistan has begun work on implementing instructions to enhance selection processes and training programs for personnel administering the program and handling funding. Based on these efforts, DOD believed that no further action was warranted at this time but said it would monitor the situation and respond as required. However, the efforts outlined by DOD appear to be focused on training after personnel arrive in Afghanistan. Because our work also identified limitations in training prior to deployment, we believe that additional action is required on the part of CENTCOM to fully implement our recommendation. DOD concurred with our recommendation to collaborate with USAID to create a centralized project-development database for use by U.S. government agencies in Afghanistan, including establishing specific milestones for its development and implementation. USAID officials were given an opportunity to comment on the draft report; however, they stated that they had no comments. We are sending copies of this report to other interested congressional committees, the Secretary of Defense, and the Administrator of USAID. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9619 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
To determine the extent to which the Department of Defense (DOD) has the capacity to provide adequate management and oversight of the CERP in Afghanistan, we reviewed guidance from DOD, Combined Joint Task Force-101 (CJTF-101), and Combined Joint Task Force-82 (CJTF-82) to identify the roles and responsibilities of CERP personnel, how personnel are assigned to the CERP, the nature and extent of the workload related to managing and executing the CERP, and the training curriculum provided to familiarize personnel with the CERP. We traveled to Afghanistan and interviewed officials at higher command, including those responsible for the overall management of CERP at CJTF-101, as well as commanders, staff judge advocates, project purchasing officers, engineers, and CERP managers about how they administered, monitored, and provided oversight to the program, what training they received, and how personnel assigned to administer and manage the program were chosen. We also interviewed personnel at all levels to obtain their perspectives on their ability to execute their assigned workload and the sufficiency of the training they received prior to deployment and upon arrival in Afghanistan, and we attended a training session provided to Project Purchasing Officers (PPO). Additionally, we interviewed officials at the Office of the Secretary of Defense (Comptroller) and the Office of the Assistant Secretary of the Army (Financial Management and Comptroller), as well as Marine Corps and Army units that had returned from Afghanistan, about the type of management and oversight that exists for CERP and the quality of that oversight. We selected these units (1) based on Afghanistan deployment and redeployment dates; (2) to ensure that we obtained information from officials at the division, brigade, and Provincial Reconstruction Team (PRT) levels who had direct experience with CERP; and (3) because unit officials had not yet been transferred to other locations within the United States or abroad. To determine the extent to which commanders coordinate CERP projects with USAID, we reviewed and analyzed DOD, CJTF-101, and CJTF-82 guidance to determine what coordination, if any, was required. We also interviewed military officials at the headquarters, brigade, and PRT levels who had redeployed from Afghanistan between July 2008 and April 2009 to determine the extent of their coordination with USAID officials. We also met with USAID officials in Washington, D.C., and traveled to Afghanistan, where we interviewed officials at CJTF-101 headquarters, brigades, PRTs, and USAID about their coordination efforts. We spoke with military officials about the database they use to track CERP projects, the Combined Information Data Network Exchange (CIDNE), and learned that some historical data on past projects were lost during the transfer of information from a previous database to CIDNE. However, the information was in the project files and had already been included in the quarterly reports to Congress. Therefore, we analyzed the reported obligations in the quarterly CERP reports to Congress for fiscal year 2004 to fiscal year 2008 and interviewed officials about information contained in the reports. Based on interviews with officials, we determined that these data are sufficiently reliable for the purposes of this report.
United States Agency for International Development, Washington, D.C.
United States Agency for International Development, Kabul, Afghanistan
Department of State, Washington, D.C.
We conducted this performance audit from July 2008 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Carole Coffey, Assistant Director; Susan Ditto; Rodney Fair; Karen Nicole Harms; Ron La Due Lake; Marcus Oliver; and Sonja Ware made key contributions to this report.
Defense Management: Actions Needed to Overcome Long-standing Challenges with Weapon Systems Acquisition and Service Contract Management. GAO-09-362T. Washington, D.C.: February 12, 2009.
Iraq and Afghanistan: Availability of Forces, Equipment, and Infrastructure Should Be Considered in Developing U.S. Strategy and Plans. GAO-09-380T. Washington, D.C.: February 11, 2009.
Provincial Reconstruction Teams in Afghanistan and Iraq. GAO-09-86R. Washington, D.C.: October 1, 2008.
Military Operations: DOD Needs to Address Contract Oversight and Quality Assurance Issues for Contracts Used to Support Contingency Operations. GAO-08-1087. Washington, D.C.: September 26, 2008.
Securing, Stabilizing, and Rebuilding Iraq: Progress Report: Some Gains Made, Updated Strategy Needed. GAO-08-1021T. Washington, D.C.: July 23, 2008.
Afghanistan Reconstruction: Progress Made in Constructing Roads, but Assessments for Determining Impact and a Sustainable Maintenance Program Are Needed. GAO-08-689. Washington, D.C.: July 8, 2008.
Military Operations: Actions Needed to Better Guide Project Selection for Commander's Emergency Response Program and Improve Oversight in Iraq. GAO-08-736R. Washington, D.C.: June 23, 2008.
Stabilizing and Rebuilding Iraq: Actions Needed to Address Inadequate Accountability over U.S. Efforts and Investments. GAO-08-568T. Washington, D.C.: March 11, 2008.
Defense Logistics: The Army Needs to Implement an Effective Management and Oversight Plan for the Equipment Maintenance Contract in Kuwait. GAO-08-316R. Washington, D.C.: January 22, 2008.
Stabilization and Reconstruction: Actions Needed to Improve Governmentwide Planning and Capabilities for Future Operations. GAO-08-228T. Washington, D.C.: October 30, 2007.
Securing, Stabilizing, and Reconstructing Afghanistan. GAO-07-801SP. Washington, D.C.: May 24, 2007.
Military Operations: The Department of Defense's Use of Solatia and Condolence Payments in Iraq and Afghanistan. GAO-07-699. Washington, D.C.: May 23, 2007.
Military Operations: High-Level DOD Action Needed to Address Long-standing Problems with Management and Oversight of Contractors Supporting Deployed Forces. GAO-07-145. Washington, D.C.: December 18, 2006.
Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals. GAO-06-788. Washington, D.C.: July 2006.
Afghanistan Reconstruction: Despite Some Progress, Deteriorating Security and Other Obstacles Continue to Threaten Achievement of U.S. Goals. GAO-05-742. Washington, D.C.: July 28, 2005.
Afghanistan Reconstruction: Deteriorating Security and Limited Resources Have Impeded Progress; Improvements in U.S. Strategy Needed. GAO-04-403. Washington, D.C.: June 2, 2004.
U.S. government agencies, including the Department of Defense (DOD) and the United States Agency for International Development (USAID), have spent billions of dollars to develop Afghanistan. From fiscal years 2004 to 2008, DOD reported obligations of about $1 billion for its Commander's Emergency Response Program (CERP), which enables commanders to respond to urgent humanitarian and reconstruction needs. As troop levels increase, DOD officials expect the program to expand. Under the authority of the Comptroller General, GAO assessed DOD's (1) capacity to manage and oversee the CERP in Afghanistan and (2) coordination of projects with USAID. Accordingly, GAO interviewed DOD and USAID officials and examined program documents to identify workload, staffing, training, and coordination requirements. In Afghanistan, GAO interviewed key military personnel on the sufficiency of training and their ability to execute assigned duties. Although DOD has used CERP to fund projects that it believes significantly benefit the Afghan people, it faces significant challenges in providing adequate management and oversight because of an insufficient number of trained personnel. GAO has frequently reported that inadequate numbers of management and oversight personnel hinder DOD's use of contractors in contingency operations. GAO's work also shows that high-performing organizations use data to make informed decisions about current and future workforce needs. DOD has not conducted an overall workforce assessment to identify how many personnel are needed to effectively execute CERP. Rather, individual commanders determine how many personnel will manage and execute CERP. Personnel at all levels, including headquarters and unit personnel that GAO interviewed after they returned from Afghanistan or who were in Afghanistan in November 2008, expressed a need for more personnel to perform CERP program management and oversight functions. Due to a lack of personnel, key duties, such as performing headquarters staff assistance visits to help units improve contracting procedures and visiting sites to monitor project status and contractor performance, were either not performed or were performed inconsistently. Per DOD policy, DOD personnel should receive timely and effective training to enable performance to standard during operations. However, key CERP personnel at headquarters, units, and provincial reconstruction teams received little or no training prior to deployment, which commanders believed made it more difficult to properly execute and oversee the program. Also, most personnel responsible for awarding and overseeing CERP contracts valued at $500,000 or less received little or no training prior to deployment and, once deployed, received a 1-hour briefing, which did not provide detailed information on the individual's duties. As a result, frequent mistakes occurred, such as the omission of key clauses from contracts, which slowed the project approval process. As GAO has reported in the past, poorly written contracts and statements of work can increase DOD's cost risk and could result in payment for projects that do not meet project goals or objectives. While mechanisms exist to facilitate coordination, DOD and USAID lack information that would provide greater visibility over all U.S. government development projects. DOD and USAID generally coordinate projects at the headquarters and unit level as well as through military-led provincial reconstruction teams, which include USAID representatives.
In addition, in November 2008, USAID, DOD, and the Department of State began participating in an interagency group composed of senior U.S. government civilians and DOD personnel in Afghanistan to enhance planning and coordination of development plans and related projects. However, complete project information is lacking because DOD and USAID use different databases. USAID has been tasked to develop a common database and is coordinating with DOD to do so, but development is in the early stages, and goals and milestones have not been established. Without clear goals and milestones, it is unclear how progress will be measured or when the database will be completed.
Several Interior agencies are responsible for carrying out the Secretary's Indian trust responsibilities. These agencies include the Bureau of Indian Affairs (BIA) and its Office of Trust Responsibilities (OTR), which is responsible for resource management and land and lease ownership information; BIA's 12 Area Offices and 85 Agency Offices; the Bureau of Land Management (BLM) and its lease inspection and enforcement functions; and the Minerals Management Service's (MMS) Royalty Management Program, which collects and accounts for oil and gas royalties on Indian leases. In addition, an Office of the Special Trustee for American Indians was established by the American Indian Trust Fund Management Reform Act of 1994. This office, implemented by Secretarial Order in February 1996, has oversight responsibility over Indian trust fund and asset management programs in BIA, BLM, and MMS. The Order transferred BIA's Office of Trust Funds Management (OTFM) to the Office of the Special Trustee for American Indians and gave the Special Trustee responsibility for the financial trust services performed at BIA's Area and Agency Offices. At the end of fiscal year 1995, OTFM reported that Indian trust fund accounts totaled about $2.6 billion, including approximately $2.1 billion for about 1,500 tribal accounts and about $453 million for nearly 390,000 Individual Indian Money (IIM) accounts. The balances in the trust fund accounts have accumulated primarily from payments of claims; oil, gas, and coal royalties; land use agreements; and investment income. Fiscal year 1995 reported receipts to the trust accounts from these sources totaled about $1.9 billion, and disbursements from the trust accounts to tribes and individual Indians totaled about $1.7 billion. OTFM uses two primary systems to account for the Indian trust funds: an interim, core general ledger and investment system and BIA's Integrated Resources Management System (IRMS). OTR's realty office uses the Land Records Information System (LRIS) to record official Indian land and beneficial ownership information. BLM maintains a separate system for recording mineral lease and production information, and MMS maintains separate royalty accounting and production information systems. Our assessment of BIA's trust fund reconciliation and reporting to tribes is detailed in our May 1996 report, which covered our efforts to monitor BIA's reconciliation project over the past 5 and one-half years. As you requested, we also assessed Interior's trust fund management improvement initiatives. To do this, we contacted the Special Trustee for American Indians, OTFM officials, and OTR's Land Records Officer for information on the status of their management improvement plans and initiatives. We also contacted tribal representatives for their views. We focused on Interior agency actions to address recommendations in our previous reports and testimonies and obtained information on new initiatives. BIA recently completed its tribal trust fund reconciliation project, which involved a massive effort to locate supporting documentation and reconstruct historical trust fund transactions so that account balances could be validated. BIA provided a report package to each tribe on its reconciliation results in January 1996. Interior's prototype summary reconciliation report to tribes shows that BIA's reconciliation contractor verified 218,531 of the tribes' noninvestment receipt and disbursement transactions that were recorded in the trust fund general ledger.
However, despite over 5 years of effort and about $21 million in contracting fees, missing records meant that a total of $2.4 billion for 32,901 receipt and disbursement transactions recorded in the general ledger could not be traced to supporting documentation and that only 10 percent of the leases selected for reconciliation could be verified. In addition, BIA's reconciliation report package did not disclose known limitations in the scope and methodology used for the reconciliation process. For example, BIA did not disclose or discuss the procedures included in the reconciliation contract that were not performed or could not be completed. Also, BIA did not explain substantial changes in scope or procedures contained in contract modifications and issue papers, such as accounts and time periods that were not covered and alternative source documents used. Further, BIA did not disclose that the universe of leases was unknown or the extent to which substitutions were made to the lease sample originally selected for reconciliation. In order for the tribes to conclude on whether the reconciliation represents as full and complete an accounting as possible, it was important that BIA explain the limitations in reconciliation scope and methodology and the procedures specified under the original contract that were not performed or were not completed. At a February 1996 meeting in Albuquerque, New Mexico, where BIA and its reconciliation contractor summarized the reconciliation results, tribes raised questions about the adequacy and reliability of the reconciliation results. The American Indian Trust Fund Management Reform Act of 1994 required that the Secretary of the Interior report to the House Committee on Resources and the Senate Committee on Indian Affairs by May 31, 1996, including a description of the methodology used in reconciling trust fund accounts and the tribes' conclusions as to whether the reconciliation represents as full and complete an accounting of their funds as possible. During BIA's February 1996 meeting with tribes to discuss the reconciliation reports and results, several tribes stated that they would need significant time to review their reconciliation reports and the supporting documents. OTFM planned five regional meetings between March 1996 and July 1996 to serve as workshops to assist individual tribes in reviewing their reconciliation results. Because BIA has not yet held all of the scheduled meetings to discuss account holder issues and comments and many account holders have not communicated their acceptance or dispute of their reconciled account balances, the Secretary has provided an interim report on account holders' communications through April 30, 1996. The Secretary plans to submit a final report on account holder attestations of their acceptance or dispute of their reconciled account balances by November 15, 1996. According to the Secretary's May 31, 1996, report: 3 tribes, including 2 tribes for which additional pilot reconciliation procedures were performed, have disputed their reconciled account balances; 2 tribes with nominal balances have accepted their reconciled account balances; and 275 tribes, including 3 tribes that had additional pilot reconciliation procedures performed, have not yet decided whether to accept or dispute their account balances. Tribal representatives have told us that they are still reviewing their reconciliation report packages and that they have a number of questions and concerns about the results.
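The tracing exercise at the heart of the reconciliation is, in essence, a matching problem: each transaction recorded in the general ledger must be paired with supporting documentation, and the unmatched transactions and amounts must be totaled and reported. The sketch below illustrates that logic with hypothetical transaction records and figures; it does not reproduce BIA's or its contractor's actual procedures.

```python
# Minimal sketch, with hypothetical records, of tracing general ledger
# transactions to supporting documentation and totaling what cannot be traced.
ledger = [
    {"txn_id": "T-001", "amount": 12_500.00},
    {"txn_id": "T-002", "amount": 4_300.00},
    {"txn_id": "T-003", "amount": 900.00},
]
# IDs of transactions for which supporting documents were located.
documented = {"T-001", "T-003"}

untraced = [txn for txn in ledger if txn["txn_id"] not in documented]
untraced_total = sum(txn["amount"] for txn in untraced)

print(f"{len(untraced)} transaction(s) totaling ${untraced_total:,.2f} "
      "could not be traced to supporting documentation")
```

As the reconciliation results show, the difficulty was rarely the matching itself but locating the documents at all; missing records drove the $2.4 billion in untraceable transactions.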
If Interior is not able to reach agreement with tribes on the reconciliation results, a legislated settlement process could prove useful in resolving disputes about account balances. Our March 1995 testimony suggested that the Congress consider establishing a legislated settlement process. Our September 1995 report provided draft settlement legislation for discussion purposes. The draft legislation would provide for a mediation process and, if mediation does not resolve disputes, a binding arbitration process. The proposed process draws on advice provided to us by the Federal Mediation and Conciliation Service and the rules of the American Arbitration Association. Both of these organizations have extensive experience in the use of third-party facilitators to provide alternative dispute resolution. The proposed process offers a number of benefits, including flexibility in the presentation of evidence and, because the decision of the arbitrators would be binding and could not be appealed, a final resolution of the dispute. In addition, arbitration has generally been found to be less costly than litigation. BIA's reconciliation project attempted to discover any discrepancies between its accounting information and historical transactions that occurred prior to fiscal year 1993. While it is important for the Congress to consider legislating a settlement process to resolve discrepancies in account balances, such discrepancies could continue to occur unless the deficiencies in Interior's trust fund management that allowed them to occur are corrected, possibly leading to a need for future reconciliation and settlement efforts. Since 1991, our testimonies and reports on BIA's efforts to reconcile trust fund accounts have recommended a number of corrective actions to help ensure that trust fund accounts are accurately maintained in the future. While OTFM and OTR have undertaken a number of corrective actions, progress has been slow, results have been limited, and further actions are needed. OTFM, Interior, and OTR have initiated several trust fund management improvements during the past 3 years. These include acquiring a cadre of experienced trust fund financial management staff; issuing trust fund IIM accounting procedures to BIA field offices, developing records management procedures manuals, and issuing a trust fund loss policy; implementing an interim, core general ledger and investment accounting system and performing daily cash reconciliations; studying IIM and subsidiary system issues; reinstating annual trust fund financial statement audits; and initiating improvements to the Land Records Information System. Our 1991 testimonies and June 1992 report identified a lack of trained and experienced trust fund financial management staff. Previous studies and audits by Interior's Inspector General and public accounting firms also identified this problem. Our June 1992 report recommended that BIA prepare an organization and staffing analysis to determine appropriate roles, responsibilities, authorities, and training and supervisory needs as a basis for sound trust fund management. In response to our recommendation, in 1992, OTFM contracted for a staffing and workload analysis and developed an organization plan to address critical trust fund management functions. The appropriations committees approved OTFM's 1994 reorganization plan. As of October 1995, OTFM had made significant progress in hiring qualified financial management and systems staff.
However, during fiscal year 1996, 27 BIA personnel displaced by BIA's reduction-in-force were reassigned to OTFM. This represents about one-third of OTFM's on-board staff. Some of these reassigned staff displaced OTFM staff, while others filled vacant positions that would otherwise have been filled through specialized hiring. As a result, OTFM will face the challenge of providing additional supervision and training for these reassigned staff while continuing to work with BIA's Area and Agency Office trust accountants to monitor corrective actions and plan for additional improvements. Our April 1991 testimony identified a lack of consistent, written policies and procedures for trust fund management. We recommended that BIA develop policies and procedures to ensure that trust fund balances remain accurate once the accounts are reconciled. Our April 1994 testimony reiterated this recommendation and further recommended that BIA initiate efforts to develop complete and consistent written trust fund management policies and procedures and place a priority on their issuance. BIA has not yet developed a comprehensive set of policies and procedures for trust fund management. However, OTFM developed two volumes of trust fund IIM accounting procedures for use by BIA's Area and Agency Office trust fund accountants and provided them to BIA's Area and Agency Offices during 1995. Also, during 1995, OTFM developed two records management manuals, which address file improvements and records disposition. Missing records were the primary reason that many trust fund accounts could not be reconciled during BIA's recent reconciliation effort. In addition, OTFM is developing a records management implementation plan, including an automated records inventory system. In January 1992 and again in January 1994, we reported that BIA's trust fund loss policy did not address the need for systems and procedures to prevent and detect losses, nor did it instruct BIA staff on how to resolve losses if they occurred. The policy did not address what constitutes sufficient documentation to establish the existence of a loss, and its definition of loss did not include interest that was earned but not credited to the appropriate account. Our January 1994 report suggested a number of improvements, such as articulating steps to detect, prevent, and resolve losses. OTFM addressed our suggestions and issued a revised trust fund loss policy in 1995. However, while OTFM has made progress in developing policies and procedures, OTFM officials told us that BIA's Area and Agency Office trust accountants have not consistently implemented these policies and procedures. In addition to developing selected policies and procedures, OTFM officials told us that they began performing monthly reconciliations of the trust fund general ledger to Treasury records in fiscal year 1993 and that they work with BIA Area and Agency Offices to ensure that unreconciled amounts are properly resolved. OTFM officials also told us that they have had limited resources to monitor Agency Office reconciliation performance and assist BIA Agency Office personnel in resolving reconciliation discrepancies. While we have not reviewed this reconciliation process, we expect that it will be reviewed in connection with the recently reinstated trust fund financial statement audits.
In addition, an OTFM official told us that a lack of resources has impeded OTFM’s performance of its quality assurance function, which was established to perform internal reviews to help ensure the quality of trust fund management across BIA offices. For example, according to the OTFM official, until recently, funds were not available to travel to Area and Agency Offices to determine whether the accounting desk procedures and trust fund loss policy have been properly implemented. Our June 1992 report recommended that BIA review its current systems as a basis for determining whether systems modifications will most efficiently bring about needed improvements or whether alternatives should be considered, including cross-servicing arrangements, contracting for automated data processing services, or new systems design and development. In response to our recommendation, OTFM explored commercially available off-the-shelf trust accounting systems and contracted for an interim, core general ledger and investment accounting system. OTFM made a number of other improvements related to implementing the interim, core trust accounting system. For example, OTFM obtained Office of the Comptroller of the Currency assistance to develop core general ledger and investment accounting system operating procedures; initiated direct deposit of collections to BIA Treasury accounts through the Automated Clearing House; initiated automated payment processing, including electronic certification, to facilitate direct deposit of receipts to tribal accounts; conducted a user survey and developed a systems user guide; established a help desk to assist system users by providing information on the new system, including a remote communication package for tribal dial-in capability; and provided system access to Area and Agency Offices and tribal personnel. While the new system has eliminated the need for manual reconciliations between the general ledger and investment system and facilitates reporting and account statement preparation, tribes and Indian groups have told us that the new account statements do not provide sufficient detail for them to understand their account activity. For example, they said that because principal and interest are combined in the account statements, it is difficult to determine interest earnings. They told us that the account statements also lack information on investment yields, duration to maturity, and adequate benchmarking. For tribes that have authority to spend interest earnings, but not principal amounts, this lack of detail presents accountability problems. Representatives of some tribes told us that they either have or plan to acquire systems to fill this information gap. OTFM is planning system enhancements to separately identify principal and interest earnings. However, additional enhancements would be needed to address investment management information needs. In January 1996, the Special Trustee formed a working group consisting of tribal representatives and members of allottee associations, which represent individual Indians; BIA and Office of the Special Trustee field office staff; and OTFM staff to address IIM and subsidiary accounting issues. In addition, OTFM has scheduled four consultation meetings with tribes and individual Indians between June and August 1996 to determine how best to provide customer services to IIM account holders. These groups will also consider ways to reduce the number of small, inactive IIM accounts. 
According to the Special Trustee, about 225,000 IIM accounts have balances of less than $10. In 1995, OTFM initiated a contract to resume audits of the trust fund financial statements. OTFM had not had a trust fund financial statement audit since 1990, pending completion of the trust fund account reconciliation project. The fiscal year 1995 audit covers the trust fund Statement of Assets and Trust Fund Balances, and the fiscal year 1996 audit will cover the same statement and a Statement of Changes in Trust Fund Balances. In 1993, BIA's Office of Trust Responsibilities (OTR) initiated improvements to its Land Records Information System (LRIS). These improvements were to automate the chain-of-title function and result in more timely land ownership determinations. In September 1994, we reported that OTR had 2-year backlogs in ownership determinations and recordkeeping, which could have a significant impact on the accuracy of trust fund accounting data. We recommended that BIA provide additional resources to reduce these backlogs, through temporary hiring or contracting, until the LRIS improvements could be completed. However, according to OTR's Land Records Officer, the additional resources were not made available as a result of fiscal year 1995 and 1996 budget cuts. Instead, BIA eliminated 6 Land Title and Records Office positions in fiscal year 1995 and an additional 30 positions in BIA's fiscal year 1996 reduction-in-force. As a result, OTR's five Land Title and Records Offices and its four Title Service Offices now have a combined staff of 90 full-time equivalent (FTE) positions, compared with 126 staff on September 30, 1994, to work on the backlog in title ownership determinations and recordkeeping while also handling current ownership determination requests. While current OTR backlogs are somewhat less than in 1994, BIA's Land Records Officer estimates that over 104 staff-years of effort would be needed to eliminate the current backlog. However, because LRIS improvements are on hold, these backlogs are likely to grow. While BIA and OTFM have begun actions to address many of our past recommendations for management improvements, progress has been limited, and additional improvements are needed to ensure that trust funds are accurately maintained in the future and that the needs of the beneficiaries are well served. For example, BIA's IRMS subsidiary and IIM system may contain unverified and potentially incorrect information on land and lease ownership that some BIA offices may be using to distribute trust fund receipts to account holders. According to a BIA official, some of BIA's Agency Office staff update IRMS ownership files based on unverified information they have developed because LRIS information is significantly out-of-date. Our September 1994 report stated that without administrative review and final determination and certification of ownerships, there is no assurance that the ownership information in BIA's accounting system is accurate. Our report also stated that eliminating redundant systems would help to ensure that only official, certified data are used to distribute trust fund revenue to account holders. Although Interior formed a study team to develop an IIM subsidiary system plan, the team's August 1995 report did not include a detailed systems plan.
Further, BIA and OTFM have not yet performed an adequate user needs assessment; explored the costs and benefits of systems options and alternatives; or developed a systems architecture as a framework for integrating trust fund accounting, land and lease ownership, and other trust fund and asset management systems. However, even if OTR resolves its ownership determination and recordkeeping backlogs and OTFM acquires reliable IIM and subsidiary accounting systems, IIM accounting will continue to be problematic due to fractionated ownerships. Under current practices, fractionated ownerships, which result from inheritances, will continue to complicate ownership determinations, accounting, and reconciliation efforts because of the increasing number of ownership determinations and trust fund accounts that will be needed. Our April 1994 testimony stated that BIA lacked an accounts receivable system. Interior officials told us that developing an accounts receivable system would be problematic because BIA does not have a master lease file as a basis for determining its accounts receivable. As a result, BIA does not know the total number of leases that it is responsible for managing or whether it is collecting revenues from all active leases. BIA has not yet begun to plan for or develop a master lease file. In addition, BIA and OTFM have not developed a comprehensive set of trust fund management policies and procedures. Comprehensive written policies and procedures, if consistently implemented, would help to ensure proper trust fund accounting practices. Also, to encourage consistent implementation of policies and procedures, quality assurance reviews and audits are an important tool. In 1994, OTFM developed a plan to contract for investment custodian and advisor services. These initiatives were planned for implementation in fiscal year 1995. However, OTFM has delayed its contract solicitation for investment custodian services until the end of June 1996 and has only recently begun to develop a contract solicitation for investment advisors. OTFM officials told us that a lack of resources has caused them to delay contracting for these services. Since 1991, our testimonies and reports have called for Interior to develop a comprehensive strategic plan to guide trust fund management improvements across Interior agencies. We have criticized Interior's past planning efforts as piecemeal corrective action plans that fell short of identifying the departmentwide improvements needed to ensure sound trust fund management. Our June 1992 and September 1994 reports and our April 1994 testimony recommended that Interior's strategic plan address needed improvements across Interior agencies, including BIA, BLM, and MMS. We endorsed the American Indian Trust Fund Management Reform Act of 1994, which established a Special Trustee for American Indians reporting directly to the Secretary of the Interior. The act made the Special Trustee responsible for overseeing Indian trust fund management across these Interior agencies and required the Special Trustee to develop a comprehensive strategic plan for trust fund management. The Senate confirmed the appointment of the Special Trustee for American Indians in September 1995. In February 1996, the Special Trustee reported that the $447,000 provided for his office for fiscal year 1996 was insufficient to finance the development of a comprehensive strategic plan for trust fund financial management.
Despite the funding limitations, the Special Trustee, using contractor assistance, has prepared an initial assessment and strategic planning concept paper. However, the concept paper focuses on one potential system solution for addressing critical OTFM and BIA financial management information requirements and does not address other alternatives. It also does not address programs across Interior agencies or all needed improvements. In addition, the concept paper does not explain the rationale for many of the assumptions that underlie the $147 million estimate to implement the specified improvements. In contrast to the concept paper, a comprehensive strategic plan would reflect the requirements of the Department, BIA, BLM, MMS, OTFM, and other Interior agency Indian trust programs. It would also address the relationships of the strategic plans for each of these entities, including information resource management, policies and procedures, and automated systems. In addition, a comprehensive strategic plan would address various trust fund-related systems options and alternatives and their associated costs and benefits. For example, the concept paper proposes acquiring new trust fund general ledger and subsidiary accounting systems but, unlike a strategic plan, it does not analyze the costs, benefits, advantages, and disadvantages of enhancing OTFM's current core general ledger and investment system or contracting for services instead of acquiring new systems. Further, since 1993, OTR has been planning for LRIS upgrades, including automated chain-of-title, which would facilitate ownership determinations and recordkeeping. Because LRIS is planned to provide a BIA link to Interior's core Automated Land Records Management System (ALMRS), a comprehensive strategic plan would need to consider the merits of LRIS in determining how trust ownership and accounting information needs can best be addressed. ALMRS is being developed by BLM at an estimated cost of $450 million. Because ALMRS and LRIS are costly to develop and contain interrelated data, a comprehensive strategic plan would also need to consider the advantages and disadvantages of linking LRIS to the trust fund accounting system, as compared with acquiring a new land records and ownership system, in determining the best way to manage Indian trust funds and assets. The Special Trustee and the OTFM Director told us that they currently lack the resources to adequately plan for and acquire needed trust fund system improvements. However, without accurate, up-to-date ownership and subsidiary accounting information, trust fund account statements will continue to be unreliable. The Special Trustee told us that due to limited resources and the need for timely solutions, he is considering ways to use changes in policies and procedures to deal with some trust fund management problems. Many of the problems identified in his concept paper are not strictly systems problems, and they do not necessarily require systems solutions. We agree that certain changes should be considered that would not require systems solutions. For example, centralizing management functions could help resolve the problems of inconsistent ownership determinations and inconsistent accounting practices. The centralization of some functions, such as handling trust fund collections through lock box payments to banks, could also result in management efficiencies.
Similarly, ownership determination and recordkeeping backlogs might be better addressed by centralizing the five Land Title and Records Offices and using contractor assistance or temporary employees until system improvements are in place. Even with centralization of some functions, customer information and services could continue to be provided locally for customer convenience. Although OTFM made a massive attempt to reconcile tribal accounts, missing records and systems limitations made a full reconciliation impossible. Also, cost considerations and the potential for missing records made individual Indian account reconciliations impractical. A legislated settlement process could be used to resolve questions about tribal account balances. Three major factors—lack of comprehensive planning, lack of management commitment across the organization, and limited resources—have impeded Interior's progress in correcting long-standing trust fund management problems. When the trust fund reconciliation project was initiated, it was envisioned that by the time it was completed, adequate organizational structures, staffing, systems, and policies and procedures would be in place to ensure that trust fund accounts were accurately maintained in the future. However, piecemeal planning and corrective actions continue, and Interior still lacks a departmentwide strategic plan to correct trust fund management problems. In addition, while it is critical that all parts of the organization are committed to supporting and implementing trust fund management improvement initiatives, some BIA field offices are continuing to follow improper and inconsistent accounting practices. Given the continuing difficulty in managing a trust program across approximately 60 BIA offices, it is important to consider streamlining options such as centralization of collections, accounting, and land title and recordkeeping functions. Finally, Interior and BIA officials told us that they lack the resources to implement many needed corrective actions. However, the development of a comprehensive strategic plan that addresses interrelated functions and systems, identifies costs and benefits of options and alternatives, and establishes realistic milestones is a necessary first step. A departmentwide plan would provide the basis for management and congressional decisions on requests for resources. Mr. Chairman, this concludes my statement. I would be glad to answer any questions that you or the Members of the Task Force might have.
GAO discussed the Department of the Interior's management of Indian trust funds. GAO noted that: (1) the Bureau of Indian Affairs (BIA) has completed its reconciliation of trust fund accounts, but the accounts could not be fully reconciled due to missing records and the lack of an audit trail; (2) the January 1996 BIA report to the tribes did not explain limitations in the scope and methodologies used for the reconciliation process; (3) two tribes have accepted their reconciliation results, three tribes are disputing their results, and the remaining 275 tribes have not yet decided whether to accept or dispute their reconciliation results; (4) if Interior cannot resolve the tribes' concerns, the disputes can be resolved through a legislated settlement process; (5) Interior's trust fund management improvements will take several years to complete; (6) although BIA is replacing its inadequate management and accounting systems, it has not developed systems requirements to ensure that the new systems provide accurate information; (7) Interior has appointed a Special Trustee for American Indians, who has developed an outline of needed trust fund management improvements, but this outline needs to include various departmentwide options and alternatives and their associated costs and benefits to become a comprehensive strategic plan; and (8) resource constraints have limited Interior's ability to make trust fund management improvements.
Solar energy can be used to heat, cool, and power homes and businesses with a variety of technologies that convert sunlight into usable energy. Examples of solar energy technologies include photovoltaics, concentrated solar power, and solar hot water. Solar cells, also known as photovoltaic cells, convert sunlight directly into electricity. Photovoltaic technologies are used in a variety of applications: they can be found on residential and commercial rooftops to power homes and businesses; utility companies use them for large power stations; and they power space satellites, calculators, and watches. Concentrated solar power uses mirrors or lenses to concentrate sunlight and produce intense heat, which is used to generate electricity via a thermal energy conversion process; for example, by using concentrated sunlight to heat a fluid, boiling water with the heated fluid, and channeling the resulting steam through a turbine to produce electricity. Most concentrated solar power technologies are designed for utility-scale operations and are connected to the electricity-transmission system. Solar hot water technologies use a collector to absorb and transfer heat from the sun to water, which is stored in a tank until needed. Solar hot water systems can be found in residential and industrial buildings. Innovation in solar energy technology takes place across a spectrum of activities, which we refer to as technology advancement activities and which include basic research, applied research, demonstration, and commercialization. For purposes of this report, we defined basic research to include efforts to explore and define scientific or engineering concepts or to investigate the nature of a subject without targeting any specific technology; applied research to include efforts to develop new scientific or engineering knowledge to create new and improved technologies; demonstration activities to include efforts to operate new or improved technologies to collect information on their performance and assess readiness for widespread use; and commercialization efforts to transition technologies to commercial applications by bridging the gap between research and demonstration activities and venture capital funding and marketing activities. As the Congressional Budget Office has noted, the benefits of such activities can accrue to society as a whole but not necessarily to the firms that invested in the activities (Congressional Budget Office, Federal Financial Support for the Development and Production of Fuels and Energy Technologies (Washington, D.C.: March 2012)). For example, basic research can create general scientific knowledge that is not itself subject to commercialization but that can lead to multiple applications that private companies can produce and sell. As activities get closer to the commercialization stage, the private sector may increase its support because its return on investment increases. We identified 65 solar-related initiatives with a variety of key characteristics at six federal agencies. Over half of the 65 initiatives supported solar projects exclusively; the remaining initiatives supported solar energy technologies in addition to other renewable energy technologies. The initiatives demonstrated a variety of key characteristics, including focusing on different types of solar technologies and supporting a range of technology advancement activities from basic research to commercialization, with an emphasis on applied research and demonstration activities.
Additionally, the initiatives supported several types of funding recipients, including universities, industry, nonprofit organizations, and federal labs and researchers, primarily through grants and contracts. Agency officials reported that they obligated around $2.6 billion for the solar projects in these initiatives in fiscal years 2010 and 2011. In fiscal years 2010 and 2011, six federal agencies—DOD, DOE, EPA, NASA, NSF, and USDA—undertook 65 initiatives that supported solar energy technology, at least in part. (See app. II for a full list of the initiatives.) Of these initiatives, 35 of 65 (54 percent) supported solar projects exclusively, and 30 (46 percent) also supported projects that were not solar. For example, in fiscal years 2010 and 2011, DOE's Solar Energy Technologies Program—Photovoltaic Research and Development initiative had 263 projects, all of which focused on solar energy. In contrast, in fiscal years 2010 and 2011, DOE's Hydrogen and Fuel Research and Development initiative—which supports wind and other renewable sources that could be used to produce hydrogen—had 209 projects, 26 of which were solar projects. Although these initiatives support solar energy technologies, in a given year they might not support any solar projects. For example, NSF officials noted that the agency funds research across all fields and disciplines of science and engineering and that individual initiatives invite proposals for projects across a broad field of research, which includes solar-related research in addition to other renewable energy research. However, in any given year, NSF may not fund proposals that address solar energy because either no solar proposals were submitted or the submitted solar-related proposals were not deemed meritorious for funding based upon competitive, merit-based reviews. Although more than half of the agencies' initiatives supported solar energy projects exclusively, the majority of projects supported by all 65 initiatives were not focused on solar. As shown in table 1, of the 4,996 total projects active in fiscal years 2010 and 2011 under the 65 initiatives, 1,506 (30 percent) were solar projects, and 3,490 (70 percent) were not. Agencies' solar-related initiatives supported different types of solar energy technologies. According to agency officials responding to our questionnaire, 47 of the 65 initiatives supported photovoltaic technologies, and 18 supported concentrated solar power; some initiatives supported both of these technologies or other solar technologies. For example, NSF's CHE-DMR-DMS Solar Energy Initiative (SOLAR) supports both photovoltaic and concentrated solar power technologies, including a project that is developing hybrid organic/inorganic materials to create ultra-low-cost photovoltaic devices and to advance solar concentrating technologies. These initiatives supported solar energy technologies through multiple technology advancement activities, ranging from basic research to commercialization. As shown in figure 1, five of the six agencies supported at least three of the four technology advancement activities we examined, and four of the six supported all four. Our analysis showed that of the 65 initiatives, 20 (31 percent) supported a single type of technology advancement activity and 45 (69 percent) supported more than one type; 4 of those 45 (6 percent) supported all four.
For example, NASA’s Solar Probe Plus Technology Development initiative—which tests the performance of solar cells in elevated temperature and radiation environments such as near the sun— supported applied research exclusively. In contrast, NASA’s Small Business Innovations Research/Small Business Technology Transfer Research initiative—which seeks high-technology companies to participate in government-sponsored research and development efforts critical to NASA’s mission—supported all four technology advancement activities. The technology advancement activities supported by the initiatives were applied research (47 initiatives), demonstration (41 initiatives), basic research (27 initiatives), and commercialization (17 initiatives). The initiatives supported these technology advancement activities by providing funding to four types of recipients: universities, industry, nonprofit organizations, and federal laboratories and researchers. The initiatives most often supported universities and industry. In many cases, initiatives provided funding to more than one type of recipient. Specifically, our analysis showed that of the 65 initiatives, 23 of the initiatives (35 percent) supported one type of recipient; 21 of the initiatives (32 percent) provided funding to at least two types of recipients; 17 initiatives (26 percent) supported three types; and 4 initiatives (6 percent) supported all four. In two cases, agency officials reported that their initiatives supported “other” types of recipients, which included college students and military installations. Initiatives often supported a variety of recipient types, but individual agencies more often supported one or two types. As shown in figure 2, DOE’s initiatives most often supported federal laboratories and researchers; DOD’s most often supported industry recipients; NASA’s supported federal laboratories and industry equally; NSF’s supported universities exclusively. For example, NASA’s Small Business Innovations Research/Small Business Technology Transfer Research initiative provided contracts to industry to participate in government- sponsored research and development for advanced photovoltaic technologies to improve efficiency and reliability of solar power for space exploration missions. NSF’s Emerging Frontiers in Research and Innovation initiative provided grants to universities for, among other purposes, promoting breakthroughs in computational tools and intelligent systems for large-scale energy storage suitable for renewable energy sources such as solar energy. Federal solar-related initiatives provided funding to these recipients through multiple mechanisms, often using more than one mechanism per initiative. As shown in figure 3, the initiatives primarily used grants and contracts. Of the 65 initiatives, 27 awarded grants, and 36 awarded contracts; many awarded both. Agency officials also reported funding solar projects via cooperative agreements, loans, and other mechanisms. Agency officials reported that the 65 initiatives as a group used multiple funding mechanisms, but we found that individual agencies tended to use primarily one or two funding mechanisms. For example, USDA exclusively used grants, while DOD tended to use contracts. DOE reported using grants and cooperative agreements almost equally. 
For example, DOE’s Solar ADEPT initiative, an acronym for “Solar Agile Delivery of Electrical Power Technology,” awards cooperative agreements to universities, industry, nonprofit organizations, and federal laboratories and researchers. Through a cooperative agreement, the initiative supported a project at the University of Colorado at Boulder that is developing advanced power conversion components that can be integrated into individual solar panels to improve energy yields. According to the project description, the power conversion devices will be designed for use on any type of solar panel. The University of Colorado at Boulder is partnering with industry and DOE’s National Renewable Energy Laboratory on this project. In responding to our questionnaire, officials from the six agencies reported that they obligated around $2.6 billion for the 1,506 solar projects in fiscal years 2010 and 2011. These obligations data represented a mix of actual obligations and estimates. Actual obligations were provided for both years for 51 of 65 initiatives. Officials provided estimated obligations for 12 initiatives for at least 1 of the 2 years, and officials from another 2 initiatives were unable to provide any obligations data. Those officials who provided estimates or were unable to provide obligations data noted that the accuracy or the availability of the obligations data was limited because isolating the solar activities from the overall initiative obligations can be difficult. (See app. II for a full list of the initiatives and their related obligations.) As shown in table 2, over 90 percent of the funds (about $2.3 billion of $2.6 billion) were obligated by DOE. The majority of DOE’s obligations (approximately $1.7 billion) were obligated as credit subsidy costs—the government’s estimated net long-term cost, in present value terms, of the loans—as part of Title XVII Section 1705 Loan Guarantee Program from funds appropriated by Congress under the American Recovery and Reinvestment Act (Recovery Act). Even excluding the Loan Guarantee Program funds, DOE obligated $661 million, which is more than was obligated by the other five agencies combined. The 65 solar-related initiatives are fragmented across six agencies and many overlap to some degree, but agency officials reported a number of coordination activities to avoid duplication. We found that many initiatives overlapped in the key characteristics of technology advancement activities, types of technologies, types of funding recipients, or broad goals; however, these areas of overlap do not necessarily lead to duplication of efforts because the initiatives sometimes differ in meaningful ways or leverage the efforts of other initiatives, and we did not find clear evidence of duplication among initiatives. Officials from most initiatives reported that they engage in a variety of coordination activities with other solar-related initiatives, at times specifically to avoid duplication. The 65 solar-related initiatives are fragmented in that they are implemented by various offices across six agencies and address the same broad area of national need. In March 2011, we reported that fragmentation has the potential to result in duplication of resources. However, such fragmentation is, by itself, not an indication that unnecessary duplication of efforts or activities exists. 
For example, in our March 2011 report, we stated that there can be advantages to having multiple federal agencies involved in a broad area of national need—agencies can tailor initiatives to suit their specific missions and needs, among other things. In particular, DOD is able to focus its efforts on solar energy technologies that serve its energy security mission, among other things, and NASA is able to focus its efforts on solar energy technologies that aid in aeronautics and space exploration, among other things. As table 3 illustrates, we found that many initiatives overlap because they support similar technology advancement activities and types of funding recipients. For example, initiatives that support basic and applied research most often fund universities, and initiatives that support demonstration and commercialization activities most often fund industry. Almost all of the initiatives overlapped to some degree with at least one other initiative in that they support broadly similar technology advancement activities, types of technologies, and eligible funding recipients. Twenty-seven initiatives supported applied research for photovoltaic technologies by universities. For example, NSF's Engineering Research Center for Quantum Energy and Sustainable Solar Technologies at Arizona State University pursues cost-competitive photovoltaic technologies with sustained market growth. The Air Force's Space Propulsion and Power Generation Research initiative partners with various universities to develop improved methods for powering spacecraft, including solar cell technologies. Sixteen initiatives supported demonstration activities focused on photovoltaic technologies by federal laboratories and researchers. For example, NASA's High-Efficiency Space Power Systems initiative conducts activities at NASA's Glenn Research Center to develop technologies to provide low-cost and abundant power for deep space missions, such as highly reliable solar arrays, to enable a crewed mission to explore a near Earth asteroid. DOE's Solar Energy Technologies Program (SETP), which includes the Photovoltaic Research and Development initiative, works with national laboratories such as the National Renewable Energy Laboratory, Sandia National Laboratories, Brookhaven National Laboratory, and Oak Ridge National Laboratory to advance a variety of photovoltaic technologies to enable solar energy to be as cost-competitive as traditional energy sources by 2015. Seven initiatives supported applied research on concentrated solar power technologies by industry. For example, DOE's SETP Concentrated Solar Power subprogram, which focuses on reducing the cost of and increasing the use of solar power in the United States, funded a company to develop the hard coat on reflective mirrors that is now being used in concentrated solar power applications. In addition, DOD's Fast Access Spacecraft Testbed Program, which concluded in March 2011, funded industry to demonstrate a suite of critical technologies, including high-efficiency solar cells, sunlight concentrating arrays, large deployable structures, and ultra-lightweight solar arrays. Additionally, 40 of the 65 initiatives overlapped with at least one other initiative in that they supported similar broad goals, types of technologies, and technology advancement activities. Providing lightweight, portable energy sources.
Officials from several initiatives within DOD reported that their initiatives supported demonstration activities with the broad goal of providing lightweight, portable energy sources for military applications. For example, the goal of the Department of the Army's Basic Solar Power Generation Research initiative is to determine the feasibility and applicability of lightweight, flexible, foldable solar panels for remote site power generation in tactical battlefield applications. Similarly, the goal of the Office of the Secretary of Defense's Engineered Bio-Molecular Nano-Devices and Systems initiative is to provide a low-cost, lightweight, portable photovoltaic device to reduce the footprint and logistical burden on the warfighter. Artificial photosynthesis. Several initiatives at DOE and NSF reported having the broad goal of supporting artificial photosynthesis, which converts sunlight, carbon dioxide, and water into a fuel, such as hydrogen. For example, one of DOE's Energy Innovation Hubs, the Fuels from Sunlight Hub, supports basic research to develop an artificial photosynthesis system with the specific goals of (1) understanding and designing catalytic complexes or solids that generate chemical fuel from carbon dioxide and/or water; (2) integrating all essential elements, from light capture to fuel formation components, into an effective system; and (3) providing a pragmatic evaluation of the system under development. NSF's Catalysis and Biocatalysis initiative has a specific goal of developing new materials that will be catalysts for converting sunlight into usable energy for direct use, for conversion into electricity, or into fuel for use in fuel cell applications. Integrating solar energy into the grid. Officials from several initiatives reported focusing on demonstration activities for technologies with the broad goal of integrating solar or renewable energies into the grid or onto military bases. For example, DOE's Smart Grid Research and Development initiative has a goal of developing smart grid technologies, particularly those that help match supply and demand in real time, to enable the integration of renewable energies, including solar energy, into the grid by helping stabilize variability and facilitating safe and cost-effective operation by utilities and consumers. The goal of this initiative is to achieve, by 2020, a 20 percent improvement in the ratio of the average power supplied to the maximum demand for power during a specified period. DOD's Installation Energy Research initiative has a goal of developing better ways to integrate solar energy into a grid system, thereby optimizing the benefit of renewable energy sources. Some initiatives may overlap on key characteristics such as technology advancement activities, types of technologies, types of recipients, or broad goals, but they also differ in meaningful ways that could result in specific and complementary research efforts, which may not be apparent when analyzing the characteristics alone. For example, an Army official told us that both the Army and Marine Corps were interested in developing a flexible solar substrate, which is a photovoltaic panel laminated onto fabric that can be rolled up and carried in a backpack. The Army developed technology that included a battery through its initiative, while the Marine Corps, through a separate initiative, altered the Army's technology to create a flexible solar substrate without a battery.
Other initiatives may also overlap on key characteristics, but the efforts undertaken by their respective projects may complement each other rather than result in duplication. For example, DOE officials told us that one solar company may receive funding from multiple federal initiatives for different components of a larger project, thus simultaneously supporting a common goal without providing duplicative support. While we did not find clear instances of duplicative initiatives, it is possible that there are duplicative activities among the initiatives that could be consolidated or resolved through enhanced coordination across agencies and at the initiative level. Also, it is possible that there are instances in which recipients receive funding from more than one federal source or that initiatives may fund some activities that would have otherwise sought and received private funding. Because it was beyond the scope of this work to look at the vast number of activities and individual awards that are encompassed in the initiatives we evaluated, we were unable to rule out the existence of any such duplication of activities or funding. Officials from 57 of the 65 initiatives (88 percent) reported coordinating with other solar-related initiatives. Coordination is important because, as we have previously reported, a lack of coordination can waste scarce funds and limit the overall effectiveness of the federal effort. We have also previously reported that coordination across programs may help address fragmentation, overlap, and duplication. Officials from nearly all initiatives that we identified as overlapping in their broad goals, types of technologies, and technology advancement activities reported coordinating with other solar-related initiatives. In October 2005, we identified key practices that can help enhance and sustain federal agency coordination, such as (1) establishing joint strategies, which help align activities, core processes, and resources to accomplish a common outcome; (2) developing mechanisms to evaluate and report on the progress of achieving results, which allow agencies to identify areas for improvement; (3) leveraging resources, which helps obtain additional benefits that would not be available if agencies or offices were working separately; and (4) defining a common outcome, which helps overcome differences in missions, cultures, and established ways of doing business. Agency officials at solar-related initiatives reported coordination activities that are consistent with these key practices, as described below. Some agency officials reported undertaking formal activities within their own agency to coordinate the efforts of multiple initiatives. For example: Establishing a joint strategy. NSF initiatives reported participating in an Energy Working Group, which includes initiatives in the agency's Directorates for Mathematical and Physical Sciences and for Engineering. Officials from initiatives we identified as overlapping reported participating in the Energy Working Group. NSF formed this group to initiate coordination of energy-related efforts between the two directorates, including solar efforts, and tasked it with establishing a uniform clean, sustainable energy strategy and implementation plan for the agency. Developing mechanisms to monitor, evaluate, and report results. DOD officials from initiatives in the Army, Marine Corps, and Navy that we identified as overlapping reported they participated in the agency's Energy and Power Community of Interest.
The goal of this group is to coordinate R&D activities within DOD. The group is scheduled to meet every quarter, but an Army official told us the group has recently been meeting every 3 to 4 weeks to produce R&D road maps and to identify any gaps in energy and power R&D efforts that need to be addressed. Because of the information sharing that occurs during these meetings, the official said, the risk of duplication of efforts across initiatives within DOD is minimized. In responding to our questionnaire, agency officials also reported engaging in formal activities across agencies to coordinate the efforts of multiple initiatives. For example: Leveraging resources. The Interagency Advanced Power Group (IAPG), which includes the Central Intelligence Agency, DOD, DOE, NASA, and the National Institute of Standards and Technology, is a federal membership organization that was established in the 1950s to streamline energy efforts across the government and to avoid duplicating research efforts. A number of smaller working groups were formed as part of this effort, including the Renewable Energy Conversion Working Group, which includes the coordination of solar efforts. The working groups are to meet at least once each year, but according to a DOD official, working group members often meet more frequently, in conjunction with outside conferences and workshops. The purpose of the meetings is to present each agency's portfolio of research efforts and to inform and ultimately leverage resources across the participating agencies. According to IAPG documents, group activities allow agencies to identify and avoid duplication of efforts. Several of the initiatives that we identified as overlapping also reported participating in the IAPG. Leveraging resources and defining a common outcome. DOE's SETP in the Office of Energy Efficiency and Renewable Energy (EERE) coordinates with DOE's Office of Science and the Advanced Research Projects Agency-Energy (ARPA-E) through the SunShot Initiative, which, according to SunShot officials, was established expressly to prevent duplication of efforts while maximizing agencywide impact on solar energy technologies. The goal of the SunShot Initiative is to reduce the total installed cost of solar energy systems by 75 percent. SunShot officials said program managers from all three offices participate on the SunShot management team, which holds brainstorming meetings to discuss ideas for upcoming funding announcements and subsequently vote on proposed funding announcements. Officials from other DOE offices and other federal agencies are invited to participate, with coordination occurring as funding opportunities arise in order to leverage resources. Officials said meetings may include as few as 25 or as many as 85 attendees, depending on the type of project and the expertise required of the attending officials. Additionally, DOE and NSF coordinate through the SunShot Initiative on the Foundational Program to Advance Cell Efficiency (F-PACE), which identifies and funds solar device physics and photovoltaic technology research and development that will improve photovoltaic cell performance and reduce module cost for grid-scale commercial applications. The initiatives that reported participating in SunShot activities also included many that we found to be overlapping. Developing joint strategies; developing mechanisms to monitor, evaluate, and report results; and defining a common outcome.
The National Nanotechnology Initiative (NNI), an interagency program that includes DOD, DOE, NASA, NSF, and USDA, among others, was established to coordinate nanotechnology-related activities across federal agencies that fund nanoscale research or have a stake in the outcome of this research. The NNI is directed to (1) establish goals, priorities, and metrics for evaluation for federal nanotechnology research, development, and other activities; (2) invest in federal R&D programs in nanotechnology and related sciences to achieve these goals; and (3) provide for interagency coordination of federal nanotechnology research, development, and other activities. The NNI implementation plan states that the NNI will maximize the federal investment in nanotechnology and avoid unnecessary duplication of efforts. NNI includes a subgroup that focuses on nanotechnology for solar energy collection and conversion. Specifically, this subgroup is to (1) improve photovoltaic solar electricity generation with nanotechnology, (2) improve solar thermal energy generation and conversion with nanotechnology, and (3) improve solar-to-fuel conversions with nanotechnology. In addition to the coordination efforts above, officials reported through our questionnaire that their agencies coordinate through discussions with other agency officials or as part of the program and project management and review processes. Some officials said such discussions and reviews among officials occur explicitly to determine whether there is duplication of funding occurring. For example, SETP projects include technical merit reviews, which include peer reviewers from outside of the federal government, as well as a federal review panel composed of officials from several agencies. Officials from SETP also participate in the technical merit reviews of other DOE offices' projects. ARPA-E initiatives also go through a review process that includes federal officials and independent experts. DOE officials told us that at an ARPA-E High Energy Advanced Thermal Storage review meeting, an instance of potential duplicative funding was found with an SETP project. Funding of the project through SETP was subsequently removed because of the ARPA-E review process, and no duplicative funds were expended. In addition to coordinating to avoid duplication, officials from 59 of the 65 initiatives (91 percent) reported that they determine whether applicants have received other sources of federal funding for the project for which they are applying. Twenty-one of the 65 initiatives (32 percent) further reported that they have policies that either prohibit recipients from receiving, or permit them to receive, other sources of federal funding for projects. Some respondents to our questionnaire said it is part of their project management process to follow up with funding recipients on a regular basis to determine whether they have subsequently received other sources of funding. For example, DOE's ARPA-E prohibits recipients from receiving duplicative funding from either public or private sources and requires disclosure of other sources of funding both at the time of application and on a quarterly basis throughout the performance of the award. Even if an agency requires that such funding information be disclosed on applications, applicants may choose not to disclose it.
In fact, it was recently discovered that a university researcher did not identify other sources of funding on his federal applications, as required, and accepted funding for the same research on solar conversion of carbon dioxide into hydrocarbons from both NSF and DOE. Ultimately, the researcher was charged with and pleaded guilty to wire fraud, false statements, and money laundering in connection with the federal research grant. We provided DOD, DOE, EPA, NASA, NSF, and USDA with a draft of this report for review and comment. USDA generally agreed with the overall findings of the report. NASA and NSF provided technical or clarifying comments, which we incorporated as appropriate. DOD, DOE, and EPA indicated that they had no comments on the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Agriculture, Defense, and Energy; the Administrators of EPA and NASA; the Director of NSF; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our report were to identify (1) solar-related initiatives supported by federal agencies in fiscal years 2010 and 2011 and key characteristics of those initiatives and (2) the extent of fragmentation, overlap, and duplication, if any, among federal solar-related initiatives, as well as the extent of coordination among these initiatives. To inform our objectives, we reviewed a February 2012 GAO report that identified federal agencies' renewable energy initiatives, including solar-related initiatives, and examined the federal roles the agencies' initiatives support. The GAO report on renewable energy-related initiatives identified nearly 700 initiatives that were implemented in fiscal year 2010 across the federal government, of which 345 supported solar energy. For purposes of this report, we only included those solar-related initiatives that we determined were focused on research and development (R&D) and commercialization, which we defined as follows: Research and development. Efforts ranging from defining scientific concepts to those applying and demonstrating new and improved technologies. Commercialization. Efforts to bridge the gap between research and development activities and the marketplace by transitioning technologies to commercial applications. We did not include those initiatives that focused solely on deployment activities, which include efforts to facilitate or achieve widespread use of existing technologies, either in the commercial market or for nonmarket uses such as defense, through their construction, operation, or use. Initiatives that focus on deployment activities include a variety of tax incentives. We also narrowed our list to only those initiatives that focused on advancing or developing new and innovative solar technologies. Next, we shared our list with agency officials and provided our definitions of R&D and commercialization.
We asked officials to determine whether the list was complete and accurate for fiscal year 2010 initiatives that met our criteria, whether those initiatives were still active in fiscal year 2011, and whether there were any new initiatives in fiscal year 2011. If officials wanted to remove an initiative from our list, we asked for additional information to support the removal. In total, we determined that there were 65 initiatives that met our criteria. To identify and describe the key characteristics of solar-related initiatives implemented by federal agencies, we developed a questionnaire to collect information from officials of those 65 federal solar energy-related initiatives. The questionnaire was prepopulated with information that was obtained from the agencies for GAO's renewable energy report, including program descriptions, type of solar technology supported, funding mechanisms, and type of funding recipients. Questions covered the types of technology advancement activities, obligations for solar activities in fiscal years 2010 and 2011, initiative-wide and solar-specific goals, and coordination efforts with other solar-related initiatives. We conducted pretests with officials of three different initiatives at three different agencies to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. An independent GAO reviewer also reviewed a draft of the questionnaire prior to its administration. On the basis of feedback from these pretests and the independent review, we revised the questionnaire to improve its clarity. After completing the pretests, we administered the questionnaire. We sent the questionnaire, as an attached Microsoft Word form, to the appropriate agency liaisons, who in turn sent it to the appropriate officials. We received questionnaire responses for each initiative and, thus, had a response rate of 100 percent. After reviewing the responses, we conducted follow-up e-mail exchanges or telephone discussions with agency officials when responses were unclear or conflicting. When necessary, we used the clarifying information provided by agency officials to update answers to questions to improve the accuracy and completeness of the data. Because this effort was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. However, we took steps to minimize such nonsampling errors in developing the questionnaire, including using a social science survey specialist for design and pretesting the questionnaire. We also minimized nonsampling errors when collecting and analyzing the data, including using a computer program for analysis and having an independent analyst review the computer program. Finally, we verified the accuracy of a small sample of keypunched records by comparing them with their corresponding questionnaires, and we corrected the errors we found. Less than 0.5 percent of the data items we checked had random keypunch errors that would not have been corrected during data processing.
To conduct our analysis, a technologist compared all of the initiatives and identified overlapping initiatives as those sharing at least one common technology advancement activity and one common technology and having similar goals. A second technologist then completed the same analysis, and the two then compared their findings and, where they differed, came to a joint decision as to which initiatives broadly overlapped in their technology advancement activities, technologies, and broad goals. If the two technologists could not come to an agreement, a third technologist determined whether there was overlap. To assess the reliability of obligations data, we asked officials of the initiatives that accounted for over 90 percent of total obligations follow-up questions about the data systems used to generate those data. While we did not verify all responses, on the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data used in this report were of sufficient quality for our purposes. We conducted this performance audit from September 2011 to August 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 4, 5, 6, 7, 8, and 9 provide descriptions, by agency, of the 65 initiatives that support solar energy technologies and the obligations for those initiatives' solar activities in fiscal years 2010 and 2011. In addition to the individual named above, key contributors to this report included Karla Springer (Assistant Director), Tanya Doriss, Cindy Gilbert, Jessica Lemke, Cynthia Norris, Jerome Sandau, Holly Sasso, Maria Stattel, and Barbara Timmerman.
The United States has abundant solar energy resources, and solar, along with wind, offers the greatest energy and power potential among all currently available domestic renewable resources. In February 2012, GAO reported that 23 federal agencies had implemented nearly 700 renewable energy initiatives in fiscal year 2010, including initiatives that supported solar energy technologies (GAO-12-260). The existence of such initiatives at multiple agencies raised questions about the potential for duplication, which can occur when multiple initiatives support the same technology advancement activities and technologies, direct funding to the same recipients, and have the same goals. GAO was asked to identify (1) solar-related initiatives supported by federal agencies in fiscal years 2010 and 2011 and key characteristics of those initiatives and (2) the extent of fragmentation, overlap, and duplication, if any, of federal solar-related initiatives, as well as the extent of any coordination among these initiatives. GAO reviewed its previous work and interviewed officials at each of the agencies identified as having federal solar initiatives active in fiscal years 2010 and 2011. GAO developed a questionnaire and administered it to officials involved in each initiative to collect information on initiative goals, technology advancement activities, funding obligations, numbers of projects, and coordination activities. This report contains no recommendations. In response to the draft report, USDA generally agreed with the findings, while the other agencies had no comments. Sixty-five solar-related initiatives with a variety of key characteristics were supported by six federal agencies. Over half of these 65 initiatives supported solar projects exclusively; the remaining initiatives supported solar and other renewable energy technologies. The 65 initiatives exhibited a variety of key characteristics, including support for multiple technology advancement activities ranging from basic research to commercialization and funding for various types of recipients, including universities, industry, and federal laboratories and researchers, primarily through grants and contracts. Agency officials reported that they obligated about $2.6 billion for the solar projects in these initiatives in fiscal years 2010 and 2011, an amount higher than in previous years, in part because of additional funding from the 2009 American Recovery and Reinvestment Act. The 65 solar-related initiatives are fragmented across six agencies and overlap to some degree in their key characteristics, but most agency officials reported coordination efforts to avoid duplication. The initiatives are fragmented in that they are implemented by various offices across the six agencies and address the same broad areas of national need. However, the agencies tailor their initiatives to meet their specific missions, such as DOD's energy security mission and NASA's space exploration mission. Many of the initiatives overlapped with at least one other initiative in the technology advancement activity, technology type, funding recipient, or goal. However, GAO found no clear instances of duplicative initiatives. Furthermore, officials at 57 of the 65 initiatives (88 percent) indicated that they coordinated in some way with other solar-related initiatives, including both within their own agencies and with other agencies. Such coordination may reduce the risk of duplication.
Moreover, 59 of the 65 initiatives (91 percent) require applicants to disclose other federal sources of funding on their applications to help ensure that they do not receive duplicative funding.
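The disclosure data described above lend themselves to a simple cross-agency check. The sketch below is a hypothetical illustration of that kind of screen, not any agency's actual system; the record format, the recipient names, and the project descriptions are all invented for the example.

```python
from collections import defaultdict

# Hypothetical award records: (agency, recipient, project description).
# In practice these would come from each agency's award database.
awards = [
    ("DOE", "Acme Solar LLC", "Solar conversion of CO2 into hydrocarbons"),
    ("NSF", "Acme Solar LLC", "Solar conversion of CO2 into hydrocarbons"),
    ("DOD", "Beta Photonics", "Lightweight foldable solar panels"),
]

# Group awards by recipient and normalized project description; the same
# work funded by more than one agency is flagged for manual review.
funders = defaultdict(set)
for agency, recipient, project in awards:
    funders[(recipient, project.lower())].add(agency)

for (recipient, project), agencies in funders.items():
    if len(agencies) > 1:
        print(f"Review: {recipient} funded by {', '.join(sorted(agencies))} "
              f"for '{project}'")
```

A real screen would, of course, need fuzzy matching on project descriptions and recipient identifiers, since duplicative proposals are rarely worded identically.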
The Pension Benefit Guaranty Corporation (PBGC) was created as a government corporation by the Employee Retirement Income Security Act of 1974 (ERISA) to help protect the retirement income of U.S. workers with private-sector defined benefit plans by guaranteeing their benefits up to certain legal limits. PBGC administers two separate insurance programs for these plans: a single-employer program and a multiemployer program. The single-employer program covers about 34 million participants in about 28,000 plans. The multiemployer program covers about 10 million participants in about 1,500 collectively bargained plans that are maintained by two or more unrelated employers. If a multiemployer pension plan is unable to pay guaranteed benefits when due, PBGC will provide financial assistance to the plan, usually a loan, so that retirees continue receiving their benefits. However, if the sponsor of a single-employer plan is in financial distress and does not have sufficient assets to pay promised benefits, the plan will be terminated and PBGC will likely become the plan's trustee, assuming responsibility for paying benefits to participants as they become due, up to the guaranteed benefit limits. As of the end of fiscal year 2008, PBGC had terminated and trusteed a total of 3,860 single-employer plans (see fig. 1). The single-employer program is financed through premiums paid by the plan sponsors, recoveries from the companies formerly responsible for the plans, and investment income from the assets that PBGC acquires when it assumes control of a plan. A three-member Board of Directors, consisting of the Secretaries of Commerce, Labor, and the Treasury, is charged with providing policy direction and oversight of PBGC's finances and operations. We designated PBGC's single-employer pension insurance program as "high risk" in 2003, including it on our list of major programs that need urgent attention and transformation. The program remains a high-risk concern due to an ongoing threat of losses from the terminations of underfunded plans. Financially, PBGC's accumulated deficit totaled $33.5 billion at the end of the second quarter of fiscal year 2009, a $22.5 billion increase since the end of fiscal year 2008. Additionally, as we concluded in a recent report, PBGC's governance structure and strategic management need improvement. We found that PBGC's Board of Directors is limited in its ability to provide policy direction and oversight, and we recommended that the board be expanded. Further, in two additional reports, we concluded that PBGC lacks a strategic approach to its acquisition and human capital management needs. Under the single-employer program, if a company's pension plan has inadequate assets to pay all promised benefits, plan sponsors meeting certain criteria can voluntarily terminate a plan through a "distress" termination. PBGC may also decide to terminate an underfunded plan involuntarily to protect plan assets, and PBGC must terminate a plan if assets are insufficient to pay benefits currently due. In all these situations, PBGC generally becomes the trustee of the plan and assumes responsibility for paying benefits to the participants as they become due. Determining participants' benefit amounts following termination, however, is a complex process (see fig. 2). It begins with gathering extensive data on plans and individuals' work and personnel histories, and determining who is eligible for benefits under a plan, which can be complicated if the company or plan has a history of mergers, an elaborate structure, or missing data.
It requires understanding plan provisions that vary from plan to plan and can be numerous, applying the guarantee limitations to each individual's benefit, and valuing plan assets and liabilities. If the participant is already retired, or retires before the process is complete, PBGC makes payments to the retiree based on an estimate of the final benefit amount. Once the process is complete, PBGC notifies each participant of the final benefit amount through a "benefit determination letter." In cases with a final benefit that is greater than the estimated amount, retirees are likely due a backpayment for having been underpaid, which PBGC will pay in a lump sum, with interest. In cases with a final benefit that is less, the retirees are likely to have received an overpayment, which they then must repay to PBGC, with no added interest. When single-employer plans are terminated without sufficient assets to pay all promised benefits, PBGC guarantees participants' benefits only up to certain limits, specified under statute in ERISA and related regulations. Participants whose benefits exceed these limits may have their benefits reduced to the guaranteed amounts, unless the plan has sufficient assets to pay the nonguaranteed portion of their benefits, either in full or in part. These guarantee limits are commonly referred to as the maximum limit, the phase-in limit, and the accrued-at-normal limit (see table 1). One group often affected by these limits is early retirees. The maximum limit is lowered for each year a person retires before age 65. Also, supplemental benefits—which are typically provided to early retirees as a bridge until they become eligible for Social Security benefits—are eliminated or greatly reduced by the accrued-at-normal limit. Because many steelworkers and airline pilots retire before reaching age 65, retirees in these industries are particularly affected by such limits. PBGC's benefits are set based on the amounts accrued as of the date of plan termination. When a plan terminates, accruals cease. As a result, participants who are not yet retired are likely to receive lower benefits than they would have received under their plans if they had been able to accrue further benefits. For example, if participants work for the plan sponsor beyond the termination date, the additional service would not be credited under that plan. The dollar amount or salary level used to calculate benefits is also frozen at the level in effect as of the date of plan termination, which can cause a participant's benefit to be substantially less than it would have been if the plan had continued. Participants can also be affected when a plan's termination date occurs before they become eligible for certain benefits, such as early retirement or disability benefits. For retirees and participants who retire prior to completion of the benefit determination process, estimated benefits are provided that can sometimes be greater than the final benefit amount, causing an overpayment. In addition to having benefits reduced due to the guarantee limits, some retirees have their monthly benefits reduced once their benefit amount is finalized because they are required to repay an overpayment that was incurred while receiving estimated benefits. Most participants of terminated plans receive the full amount of the benefits they have earned under their plans, according to studies conducted by PBGC.
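A small worked example helps show how these limits interact. The sketch below is illustrative only: the dollar amounts, the early-retirement reduction factor, and the maximum-limit figure are hypothetical, and the phase-in arithmetic simply follows the "$20 or 20 percent per full year" pattern cited in the plan examples later in this report, not PBGC's full regulatory methodology.

```python
# Illustrative sketch of how the guarantee limits described above can
# reduce a participant's monthly benefit. All dollar amounts, the
# early-retirement reduction, and the maximum limit are hypothetical.

def phased_in_increase(increase, full_years_in_effect):
    """Guaranteed portion of a benefit increase adopted within 5 years
    of termination: the greater of $20 or 20 percent of the increase
    per full year in effect (a simplified reading of the phase-in
    pattern described in this report)."""
    if full_years_in_effect >= 5:
        return increase
    guaranteed = max(20.0 * full_years_in_effect,
                     0.20 * full_years_in_effect * increase)
    return min(guaranteed, increase)

# Hypothetical participant: $1,500/month accrued benefit, of which a
# $200/month increase took effect 3 full years before termination.
base_benefit = 1300.0
recent_increase = 200.0
benefit = base_benefit + phased_in_increase(recent_increase, 3)  # 1,420.00

# Accrued-at-normal limit: a $300/month temporary early-retirement
# supplement exceeds the benefit payable at normal retirement age,
# so it is simply excluded from the guaranteed amount here.
supplement = 300.0  # not added to the guaranteed benefit

# Maximum limit: assume a hypothetical $4,500/month limit at age 65,
# reduced (here by an assumed 7 percent per year) for early retirement.
def max_limit(age, limit_at_65=4500.0, reduction_per_year=0.07):
    return limit_at_65 * (1 - reduction_per_year * max(0, 65 - age))

guaranteed = min(benefit, max_limit(age=60))
print(f"Guaranteed monthly benefit: ${guaranteed:,.2f}")
```

In this hypothetical case, the guaranteed benefit is $1,420 a month: the $300 supplement is lost to the accrued-at-normal limit, $80 of the recent increase is lost to the phase-in limit, and the maximum limit, even reduced for retirement at age 60, does not bind.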
PBGC does not systematically track the number of participants affected by guaranteed benefit limits or how much these limits affect benefit amounts; however, PBGC has conducted two studies on the impact of these limitations in a sample of large plans. The first study, issued in 1999, found that 5.5 percent of participants were affected by the limits, and the second study, issued in 2008, found that 15.9 percent were affected. PBGC attributed the increase in the numbers affected in the second study to the inclusion of several large plans from the steel and airline industries. Officials noted that these plans were more likely to be subject to the limits. Steel plans often provide supplements and allow retirement with unreduced benefits after 30 years of service, regardless of age, and airline plans often allow pilots to retire early and receive generous benefits. Across the different plans in both studies, participants' reductions in benefits varied widely, from less than 5 percent for some, to over 50 percent for others. PBGC makes most benefit determinations within 3 years after assuming trusteeship of a plan. However, complex plans and plans with missing data have required more time to process—up to 9 years, in some instances (the full time span we examined). Most of the benefit determinations that took 4 or more years to process were for participants in just 10 plans. PBGC officials have taken steps to shorten the benefit determination process, although their initiatives have focused on ways to expedite processing of straightforward cases rather than the processing of difficult cases prone to delays. PBGC becomes the trustee of most plans within 10 months of termination and, once it has assumed trusteeship of a plan, the agency takes slightly less than 3 years to process most benefit determinations and notify participants of their final benefit amount. Following a PBGC Inspector General study, issued in 2000, that found that the majority of benefit determination letters were sent more than 5 years after PBGC assumed trusteeship of the plan, PBGC set a corporate goal of issuing benefit determinations, on average, no more than 3 years after trusteeship. Our review of the benefit determinations for participants in plans trusteed during fiscal years 2000 through 2008 indicates that PBGC has moved processing times closer to this mark. Nearly three-quarters of the benefit determinations completed for these plans were made in 3 years or less (see fig. 3). The vast majority of all completed benefit determinations—95 percent—were processed in less than 4 years. On the other hand, in February 2009, more than 200,000 participants were awaiting benefit determinations that had been pending for an average of 3 or more years. PBGC's practice is to prioritize benefit determinations based on an individual's retirement status at the time of plan termination. For example, participants who were retired when their plans terminated received their benefit determinations about 2.0 years after PBGC assumed trusteeship, on average. Participants who had separated from employment under the plan but had some vested benefits at the time of the termination received benefit determinations in about 2.8 years, on average. All other participants also received benefit determinations in about 2.8 years, on average. Processing times have varied considerably in any given year, due in part to the number and size of plans being terminated and trusteed that year (see fig. 4).
The number of plans trusteed by PBGC peaked during 2002, 2003, and 2004, although the largest influx of participants occurred in 2005. The average number of participants per plan is slightly fewer than 1,000, but some plans have many more. For example, the Bethlehem Steel plan has nearly 93,000 participants, the LTV Steel (hourly) plan has about 68,000 participants, and the Kaiser Aluminum and Chemical Corp. (hourly) plan has just over 10,000 participants. We found that processing times were longer, on average, for those plans trusteed in peak years (see fig. 5). For example, processing times generally increased during fiscal years 2002 through 2005. Processing times have also increased with the complexity of plans and the unavailability of needed data. Obtaining plan documents, gaining complete participant data, and interpreting plan requirements often present difficulties. Nevertheless, nearly three-quarters of the benefit determinations that took 4 or more years to process were for participants in just 10 of the 1,089 plans terminated and trusteed during fiscal years 2000 through 2008, as shown in figure 6. These plans were sponsored by four steel companies, two mining companies, one other manufacturer, an insurance company, and a construction company. We found that a variety of factors had contributed to the complexity of the 10 plans with these lengthier determinations. One key factor was the level of difficulty of calculating benefits. For some, a history of company or plan mergers, or other unusual or complicated benefit formulas, made determining a participant's benefit more difficult and added to processing time. For example, the pension plan of Bethlehem Steel Corporation—which still had some benefit determinations pending as of February 2009, nearly 6 years after the plan's trusteeship—is a product of more than 100 company mergers, consolidations, and/or spinoffs. There are eight major parts to this plan, and three of the parts have separate hourly and salaried plans. In general, if a plan has undergone a merger, participants may be covered by different plan provisions, or participants may transfer between component plans, such as moving from an hourly to a salaried plan. According to PBGC, the Bethlehem Steel plan required an analysis of more than 30 sets of plan documents to make benefit determinations for the nearly 93,000 participants. Unusual or numerous plan provisions have also made benefit determinations more challenging and, therefore, time consuming. The Cone Mills Corporation plan consists of three merged plans. In 2001, the company's plans for long-distance drivers and salaried workers were merged into its plan for hourly workers. Yet distinct provisions in each of the original plans remained in place for their respective members. It took time for PBGC to determine which participants belonged to each group and the provisions associated with each participant. In other cases, an elaborate plan structure has also made it challenging for PBGC to determine the availability of plan assets and to distribute them across different categories of participants' benefits in the asset allocation process. The Kaiser Aluminum and Chemical Corp. had 26 direct and indirect subsidiaries in its controlled group and in bankruptcy; 36 subsidiaries not in bankruptcy; and 13 operating subsidiaries and joint ventures not in the controlled group or in bankruptcy.
Kaiser had eight defined benefit plans, seven of which were trusteed by PBGC, and the assets for these eight plans were commingled, which added complexity to PBGC's audit of the plans' net worth. Benefit guarantee limits contributed to the complexity of several plans. PBGC must determine, on a participant-by-participant basis, the level of benefits each is entitled to under ERISA and related regulations. According to PBGC officials, these calculations can be time consuming when there are a large number of participants receiving benefit adjustments as a result of these limits. For example, there were several benefit rate increases in the LTV Steel (hourly) plan that went into effect within 5 years of the plan's termination and, therefore, were subject to the phase-in limit. These included a plant shutdown supplement for certain participants, a surviving spouse's special payment, and additional continuous service for participants affected by certain layoffs. In total, there were 35,279 participants whose benefits were affected by the phase-in limitation under this plan, as well as 4,850 affected by the accrued-at-normal limit, and 3,644 affected by the maximum limit. Qualified domestic relations orders have also contributed to the complexity of making a benefit determination. When participants have domestic relations orders related to child support, alimony payments, and marital property rights, some portion of, or all of, a participant's pension benefits may be assigned to a spouse, former spouse, child, or other dependent. In these cases, PBGC must determine whether the order is a qualified domestic relations order, a process that can entail a detailed review of legal documents. Although nearly two-thirds of the plans we examined did not have any participants with qualified domestic relations orders, several of the 10 plans associated with the lengthiest processing times had numerous participants with such orders. For example, the Bethlehem Steel plan included 904 participants with qualified domestic relations orders, and the LTV Steel (hourly) plan included 609. The condition of plan and participant data is also a key factor affecting processing times. When a plan terminates, PBGC tries to obtain all plan documents, such as the original plan, plan amendments, and, if applicable, negotiated agreements with unions, as well as personnel and payroll data. To do so with the termination of a large, complex plan, PBGC auditors have usually visited sponsor locations to collect data and contacted the plan's actuarial staff, administrators, or others responsible for managing the plan's assets. When the plan's administration is decentralized, this process involves collecting records from different locations in the course of many site visits. For example, over a 2-month period, a PBGC audit team visited Bethlehem Steel facilities in Sparrows Point, MD; Bethlehem, PA; Coatesville, PA; Steelton, PA; Lackawanna, NY; and Burns Harbor, IN to collect records. Data were not always available in electronic form. The Bethlehem Steel Lackawanna facility, for example, did not use an electronic recordkeeping system, so PBGC collected more than 20,000 hard-copy employee record cards from the site. According to PBGC officials, plan sponsors have frequently diverted resources away from actuarial and information technology services during periods of financial difficulty, causing records maintenance to deteriorate before PBGC is able to take over the plan.
In such situations, data become difficult to locate, key personnel with knowledge of the data leave the organization, and data systems may become inaccessible. Additionally, the data PBGC is able to collect have often been incomplete. As a result, PBGC actuaries sometimes have to make assumptions about which plan provisions apply to whom when estimating the plan's assets and liabilities and calculating individual participants' benefits. When processing the Weirton Steel plan, for example, PBGC had to calculate benefits for some participants whose average monthly earnings were missing. A PBGC official told us that they sometimes use collective bargaining agreements and board resolutions, even if their legality cannot be verified, if those documents provide the best information available. To avoid situations where data are missing or in poor condition, PBGC officials told us they generally try to obtain data prior to taking over a plan. In most situations, they will quickly try to assess the location and condition of plan records and take steps to preserve the records in the event that PBGC takes over the plan. However, officials acknowledged that negotiations between PBGC and plan sponsors prior to trusteeship have sometimes deterred them from using their access authority to secure records until after actually becoming the trustee. For example, the RTI case involved a lengthy legal deliberation over the plan's termination date, and while this litigation was ongoing, PBGC's case processing division did not pursue documents from RTI prior to trusteeship, on the advice of the agency's and the company's counsel at the time. PBGC officials noted that when aspects of a termination are being contested, it is not uncommon for company officials to be unwilling to share information until after PBGC's trusteeship is official. In the RTI case, by the time the court case was resolved and PBGC became the trustee, a new owner had assumed control of the personnel files, documentation needed to determine benefit entitlement had been purged, and only one person remained with working knowledge of the RTI pension plan. PBGC officials have taken steps to shorten the benefit determination process, although these initiatives do not specifically address complex cases. Rather, PBGC officials said that their initiatives are intended to process straightforward cases more quickly so that staff can concentrate on those that are difficult. Specifically, PBGC adopted a simplified data validation process to speed the processing of plans with fewer than 200 participants. The agency decided that the validation process used for large plans, which involves a full electronic data audit and a review of all data elements by an auditor, was unnecessary for smaller plans, which have fewer participants and less data, making any errors highly visible. PBGC has also prioritized benefit determinations for retirees who have been receiving benefits for some time. Such determinations are more straightforward because these retirees are less likely to have their benefits reduced by the guarantee limits. These efforts help PBGC avoid unnecessary delays in straightforward cases. PBGC does not, however, target its changes to the complex plans with benefit determinations most prone to lengthy delays. Nor does PBGC set benchmarks for complex cases or goals for decreasing the processing time for these cases. Officials acknowledged that the current tracking of timeliness focuses on average processing times only.
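To illustrate why tracking averages alone can understate delays in complex cases, consider a small hypothetical computation. The processing times below are invented, and the percentile-based measures are simply one way such a benchmark could be constructed; this is not a description of PBGC's systems.

```python
import numpy as np

# Hypothetical processing times (in years) for one trusteed plan's
# benefit determinations: most finish quickly, a few take far longer.
times = np.array([1.5, 1.8, 2.0, 2.2, 2.5, 2.7, 2.9, 3.0, 4.0, 7.0])

mean_time = times.mean()                  # average alone hides the tail
median_time = np.median(times)
p95 = np.percentile(times, 95)            # exposes the slowest cases
share_over_goal = (times > 3.0).mean()    # share exceeding 3-year goal

print(f"mean: {mean_time:.1f} yrs, median: {median_time:.1f} yrs")
print(f"95th percentile: {p95:.1f} yrs; over 3-year goal: {share_over_goal:.0%}")
```

In this example the average sits just under the 3-year goal even though a fifth of the cases, the complex ones, take far longer, which is exactly the pattern a distributional benchmark would surface.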
Overpayments have been infrequent, and the impact on benefit amounts has been generally minor. As with the cases that required lengthy processing times, most of the cases in which overpayment occurred have been concentrated in a small number of plans. These tended to be large plans with large numbers of retirees, as well as plans whose total asset values were difficult to determine or anticipate. Meanwhile, PBGC amortizes the repayment of overpayments over a long period, thereby limiting the amount of money that PBGC will recoup. By comparison, some other federal agencies have more aggressive repayment policies but more liberal waiver policies for cases of hardship. Overpayments generally occur when a plan retiree receives estimated benefits while PBGC is in the process of making benefit determinations and the final benefit amount is less than the estimated benefit amount. Our review of plans terminated and trusteed during fiscal years 2000 through 2008 found that this happened in only a small percentage of cases (see fig. 7). Of the 1.1 million participants in plans terminated and trusteed during fiscal years 2000 through 2008, more than half were not yet retirees and, therefore, did not receive estimated benefits before the benefit determination process was complete. For most who were retirees, the estimated benefit amount received did not change when finalized. Of those whose benefit amount did change when finalized, about half received a benefit that was greater, and half received a benefit that was less (about 3 percent of total participants in these plans, overall). According to PBGC data on recoupments, 22,623 participants in plans terminated and trusteed during fiscal years 2000 through 2008 owed PBGC for overpayments. These amounts varied widely—from less than $1 to more than $150,000—but our analysis of PBGC data suggests that most owed less than $3,000. Since in most cases PBGC recoups no more than 10 percent of a participant's final benefit each month, the impact on the participant's benefit was limited. Per individual, the median benefit reduction due to recoupment was about $16 a month, or about 3 percent of the monthly payment amount, on average. Per case, the median amount that had been repaid, as of February 2009, was $365. Of the 1,089 plans terminated and trusteed during fiscal years 2000 through 2008, just 10 accounted for more than 65 percent of all cases of overpayment (see fig. 8). Nine of these 10 plans were sponsored by steel companies and were trusteed by PBGC from 2001 to 2003. When PBGC assumes responsibility for a plan, retirees generally continue to receive an estimated benefit that is the same as what they had been receiving, unless PBGC determines that they are subject to any of the guarantee limits and that their estimated payments need to be reduced to reflect these limits. In such cases, overpayments can occur for two basic reasons: (1) there is a period of time when the retiree's estimated benefit has not yet been reduced to reflect applicable limits; and (2) the retiree's estimated benefit is adjusted to reflect applicable limits, but the estimate is still greater than the benefit amount that is ultimately determined to be correct once the benefit determination process is complete. As summarized in table 2, of the 10 plans with the greatest number of overpayments, 9 also had large numbers of participants, including many who were subject to the guarantee limits and who were retired and receiving estimated benefits.
In addition, all these plans had assets or recoveries allocated to pay some, but not all, of retirees' nonguaranteed benefits, which are generally some of the first nonguaranteed benefits to be paid from the allocation process—before, for example, future retirees' nonguaranteed benefits. According to PBGC officials, uncertainty about how much a plan's assets or recoveries will be able to contribute toward the portion of a retiree's benefit that the agency does not guarantee under law can make it difficult to calculate an accurate benefit amount until the benefit determination process is complete. Finally, a lengthy benefit determination process can exacerbate the impact of inaccurate estimates. The total overpayment can become substantial over a long period of time, even if the difference between the estimated and final monthly benefit amount is small. Also, when plans are terminated involuntarily, there can sometimes be lengthy delays before PBGC reduces estimated benefits to reflect guarantee limits. Among the 10 plans with the most overpayments, all were involuntary terminations, and we found that the length of time between plan termination and when estimated benefits were adjusted to reflect guarantee limits varied widely. In some cases, estimated benefits were adjusted within 9 months of termination, while in other cases, more than 6 years elapsed before estimated benefits were adjusted—and in general, the longer the delays, the larger the overpayments. In contrast, when plans are terminated at the sponsor's request as distress terminations, the sponsors are required to impose these limits themselves so that participants' benefits are reduced as of the date of termination. The following examples illustrate how the above circumstances can combine to create large numbers of cases with overpayments among some plans. We chose these two case examples from among the cases sampled in the 10 plans with the most overpayments to illustrate the two types of situations, outlined previously, that can result in overpayments: (1) delayed adjustment of the retiree's estimated benefit to reflect applicable limits; and (2) timely but inaccurate adjustment of the retiree's estimate to reflect applicable limits. We also chose these two case examples specifically because they had similar benefit amounts prior to termination. In the RTI (USWA) plan, four large groups of participants were affected by the guarantee limits: (1) those with six different types of temporary supplements who were subject to the accrued-at-normal limit; (2) former Bar Technologies employees whose benefits were subject to a $20 or 20 percent phase-in limit; (3) those who retired or will retire with 30 years of service and were subject to a $60 or 60 percent phase-in; and (4) those who retired under the early retirement program whose benefits were subject to a $60 or 60 percent phase-in. To explore the impact of guarantee limits on the retirees who incurred overpayments, we randomly selected 5 participants from among the 1,693 subject to the phase-in limits, and found that all were retirees who had their benefits reduced between 19 percent and 63 percent from what they had been receiving prior to termination. In three cases, estimated benefits were adjusted to reflect these limits 2.3 years after termination, but in two cases, estimated benefits were not adjusted prior to issuance of the benefit determination letter, which took place more than 6 years after termination.
Due to inaccurate estimated benefits that were paid over several years, all 5 had incurred overpayments, ranging from $2,000 to about $57,000, and as a result, their benefits were reduced further to recoup the amounts owed. The effect on the monthly payment for one RTI retiree, whom PBGC overpaid by a total of $23,986, is illustrated in fig. 9. Ultimately, this retiree's payment was reduced by almost two-thirds, mostly due to guarantee limits. In the Weirton plan, we found that large numbers of participants were subject to the accrued-at-normal limits due to various plan supplements, and were subject to the phase-in limits due to seven different types of benefit changes made within 5 years before plan termination. In addition, many participants were subject to the maximum limits, in part due to the aggregate limit imposed when participants are involved in more than one terminated plan (many participants had worked previously for National Steel or other companies whose plans PBGC had trusteed). We reviewed five randomly selected cases from among the 1,342 participants who were subject to the accrued-at-normal limit and found that all were retirees whose estimated benefit amounts were inaccurate for at least part of the period involving the benefit determination process. One case resulted in an underpayment, with a back payment of $11,384, plus interest, owed to the retiree. The other four cases resulted in overpayments, ranging from $3,200 to just over $6,000, with reductions in benefit payments to recoup the amounts overpaid. In contrast with the five sampled RTI participants, these retirees had their benefits adjusted more quickly to reflect the guarantee limits so that, in general, the overpayments incurred were not as large. All four had their estimated benefits adjusted in less than 9 months. The effect on one Weirton retiree's monthly payment is illustrated in fig. 10. As was the case in the previous example, this retiree's payment was ultimately reduced by nearly one-half, mostly due to guarantee limits. Our analysis of PBGC data indicates that the overpayments owed by participants in plans terminated and trusteed during fiscal years 2000 through 2008 totaled almost $100 million. Of this total, about $14 million had been recouped, as of February 2009. However, PBGC's policy of restricting recoupments to no more than 10 percent of the recipient's monthly benefit results in a long amortization period for collection that can well exceed normal life expectancies. Since PBGC does not pursue further collection from a participant's estate once a retiree (and any beneficiary) dies, a substantial portion of these overpayments will not be repaid. Specifically, for many of these individuals, it was projected that these debts would not be fully paid until the year 2099, the arbitrary default end date used in PBGC's data system. Nearly 60 percent of those with future recoupments would not finish repaying these debts until 2020 or later. We analyzed the ages of retirees and/or beneficiaries at their projected end date of recoupment for all cases involving overpayments greater than $10,000. Although these cases accounted for fewer than 10 percent of those with overpayments, the amounts they owed accounted for more than 40 percent of total recoupments. We found that about 60 percent of these individuals would be age 80 or older, and over 30 percent would be age 100 or older, when their debts to PBGC would be fully repaid (see fig. 11). The life expectancy for those age 65 in 2009 is estimated to be 82 to 87 years.
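The long repayment horizons described above follow directly from the 10 percent cap. A minimal sketch of the arithmetic, assuming a single flat reduction of 10 percent of the final monthly benefit (actual PBGC schedules can involve several successive reduction amounts); the retiree's benefit figure is hypothetical:

```python
import math

def recoupment_horizon(overpayment, final_monthly_benefit, current_age):
    """Months needed to recoup an overpayment at 10 percent of the
    final monthly benefit, and the retiree's age when repayment ends."""
    monthly_reduction = 0.10 * final_monthly_benefit
    months = math.ceil(overpayment / monthly_reduction)
    return months, current_age + months / 12

# Hypothetical retiree: the $23,986 overpayment from the RTI example
# above, recouped against an assumed $700 final monthly benefit.
months, end_age = recoupment_horizon(23986, 700.0, 65)
print(months, round(end_age, 1))  # 343 months, ending around age 93.6
```

Even a mid-sized overpayment against a modest benefit can take decades to collect at this rate, which is why so many projected end dates fall beyond normal life expectancy.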
Once overpayments have been made, finding the right balance between agency fiscal responsibility and fairness to participants can be difficult to achieve. Compared with PBGC’s policy on overpayments, federal agencies such as the Social Security Administration (SSA) and the Office of Personnel Management (OPM) generally allow larger reductions to benefits when recouping overpayments, but their policies also give much greater prominence to waivers. PBGC policy stipulates that in cases with an ongoing payment, recoupment of an overpayment may not be waived unless the monthly reduction would be less than $5. Waivers for hardship are to be considered only in cases for which there is no ongoing payment to the participant. According to the agency’s general counsel and subsequent comments from agency officials, since the outset of 2009, PBGC has been receiving hardship waiver requests in recovery cases at more than twice the rate received the prior year. In contrast, both SSA and OPM policies on overpayments allow hardship consideration for cases with ongoing payments. For overpayment of Social Security benefits, SSA will withhold the full amount of the benefit each month until the overpayment is fully recouped. However, in its fact sheet on overpayments with respect to Social Security benefits and Supplemental Security Income (SSI) benefits, available on its Web site, SSA devotes over half the document to detailing the steps participants should take if they wish to either appeal or request a waiver. For SSI benefits, SSA will withhold 10 percent of the maximum federal benefit rate each month, but the beneficiary can request a lesser withholding amount, subject to SSA approval. Further, if the beneficiary disagrees with the overpayment, he or she can appeal or request that collection be waived. Similarly, OPM’s policy guidance on overpayments of retirement benefits devotes over half the document to the subject of waivers. Under law, OPM is directed not to recover overpayments when the beneficiary bears no responsibility for the overpayment and requiring repayment would be “against equity and good conscience.” In deciding whether to grant a waiver, errors or delays by OPM may be considered, along with financial hardship or any other basis for equity that OPM deems appropriate. Just the last 7 pages of this 34-page policy guide are devoted to policies on collections. These policies call for overpayments of federal employee retirement benefits to be collected in one lump sum, whenever feasible. If one lump-sum payment is not feasible and recoupment is by installment, the payments are to be sufficient in size and frequency to recoup the debt in no more than 3 years. The standard rate of collection is 10 percent of the net monthly annuity or $50 per month, whichever is higher; but if a 10 percent reduction will not result in full recoupment within 3 years, the reduction rate can be increased up to 50 percent. PBGC’s initial communications with participants shortly following termination—especially its on-site information sessions—generally drew praise from the pension advocacy groups and union representatives we interviewed. These groups’ concerns with PBGC’s communication efforts most often focused on the long gaps between contacts when the benefit determination process was lengthy and the complicated calculations that accompanied letters notifying participants of significant benefit reductions. 
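Returning to OPM's installment policy described above, the stated rules reduce to a short computation. The sketch below illustrates those rules as described; it is not OPM's actual implementation, and it leaves aside the separate, discretionary waiver decision:

```python
def opm_installment(debt, net_monthly_annuity):
    """Monthly installment under OPM's stated collection policy:
    10 percent of the net annuity or $50, whichever is higher, raised
    (up to 50 percent of the annuity) if 10 percent would not recoup
    the debt within 3 years (36 payments)."""
    standard = max(0.10 * net_monthly_annuity, 50.0)
    if standard * 36 >= debt:
        return standard
    needed = debt / 36
    return min(needed, 0.50 * net_monthly_annuity)

# Hypothetical case: a $9,000 overpayment against a $1,200 net monthly
# annuity. Ten percent ($120) recoups only $4,320 in 3 years, so the
# rate rises to $250 per month.
print(opm_installment(9000, 1200.0))  # 250.0
```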
PBGC’s first communication with participants is generally a letter informing them that their pension plan has been terminated and that PBGC has become the plan trustee. Shortly thereafter, this letter is generally followed by a more detailed letter with a packet of materials, including a DVD with an introduction to PBGC and frequently-asked questions about how the benefit determination process works. PBGC officials refer to this as a “welcome” package. Additionally, for large plans likely to have many participants affected by the guarantee limits, PBGC will hold on-site information sessions shortly after plan termination. PBGC also operates a customer service center with a toll-free number that participants can call if they have questions, provides a Web site for workers and retirees with detailed information about plans and benefits, and sends participants a newsletter with information about PBGC once or twice per year. Nearly all pension advocacy groups and union representatives we spoke with praised PBGC’s efforts to hold information sessions with the larger plans. One union representative commended PBGC staff for going out into the field to talk with participants and answer questions even though participants are going to be angry. Other union representatives commented that they have been impressed by PBGC’s staff for staying at these sessions until they have answered every participant’s questions. While these sessions are generally viewed as helpful, some pension rights advocates noted that the information presented is difficult for participants to understand, and may not have the same meaning when talked about in generalities as when they later receive notices concerning their specific benefits. Also, since not everyone may attend these events, these advocates believe it is important for all the information presented at the sessions to be provided through written communication as well. PBGC’s customer service center and Web site received mixed reactions from the pension rights advocates and union representatives we interviewed. A few noted that some of their members reported receiving good service from the toll-free number while others found the service frustrating or useless. One union representative said that the center’s staff use PBGC terminology, which may be different from the plan and benefit language that is familiar to their members. However, other groups we spoke with were generally more positive regarding their own direct communications with PBGC staff, describing PBGC staff as forthcoming and responsive to their inquiries. Similarly, the groups we interviewed generally found the information on PBGC’s Web site useful, but they expressed doubt that this would be the case for most of their members. They noted that many people whose plans are taken over by PBGC are not accustomed to using a computer or do not have access to the internet, and that some do not feel comfortable relying on information they find on a Web site. Following the initial contacts, PBGC generally does not communicate with participants again until the benefit determination process is complete, which in some cases can stretch into years. Among the participants’ files we examined when the benefit determination process took 4 or more years, we found that there often was no contact from PBGC for most of this time. 
For example, we examined the files of five randomly selected Bethlehem Steel participants whose benefit determinations were still pending as of February 2009, and found that—aside from one instance of an acknowledgment of a form submitted by one participant—PBGC had not communicated with these participants for more than 5 years. The last PBGC-initiated communications were dated late 2003 or early 2004. Some of the pension advocacy groups and union representatives we spoke with said that these long periods without communication are problematic for participants for several reasons. For example, retirees whose benefits are subject to the guarantee limits but who continue to receive their higher plan-level benefits for long periods of time may come to expect that these higher amounts are permanent, and then they are surprised when—years later—their benefits are suddenly reduced. Even for participants who are not yet receiving benefits, the lack of communication about the likely amount of their final benefits makes it difficult to plan for retirement. Some groups noted that PBGC does not always provide realistic time frames for completing the benefit determination process, and does not periodically update participants on the status of benefit processing. Two groups suggested it would be helpful if PBGC provided updates at least every 6 months. When participants are notified of a payment amount—whether estimated or final—PBGC's letters generally provide only limited explanations for why the amount may be different from the amount provided under their plan. In complex plans, when benefit calculations are complicated, the letters do not adequately explain why benefits are being reduced, and although benefit statements are generally attached, the logic and math involved can be difficult even for pension experts. The standard language used in these letters to explain a different estimated amount states: "We have adjusted the amount of your benefit because there are legal limits on how much we can pay." The standard language used to explain a different final benefit amount states: "Your final monthly benefit of [amount] is the amount that the PBGC is legally allowed to pay you. It was calculated by determining the benefit you are entitled to in your plan and then applying the limits spelled out in federal pension law." These letters generally provide no specific information about which limits apply or why. However, enclosed with each benefit letter is a detailed attachment that shows the line-by-line calculations leading to the benefit amount, referred to as a "benefit statement." In the participant files we reviewed, these benefit statements ranged in length from 2 to 8 pages and were very difficult to understand. In some cases, there were as many as 20 to 30 different line items that required making comparisons between the items to understand the logic of the calculations. (See sample letter provided in appendix VII.) Some pension advocates and union representatives we spoke with said that they found the explanations in these letters to be too vague and generic, and that the letters did not provide enough information specific to the individual's circumstances to be helpful. This was especially true in cases where participants were shocked or confused by a large benefit reduction. Moreover, some said they did not think most participants would be able to understand the accompanying benefit statements without additional information and assistance—especially for complex cases, according to one advocate.
At the same time, they were generally sympathetic to the difficulty of communicating such complicated information. As one advocate acknowledged, for the letters to be accurate, they have to be complicated; this may just be "the nature of the beast." Nevertheless, they said that PBGC could take some steps to improve the letters. For example, for those likely to incur overpayments, they suggested providing an example of how the recoupment process works. For those with complex benefit statements, they suggested that PBGC provide more text to help explain each step of the calculations, and include referrals to pension rights groups for obtaining additional information and assistance. In addition, we found a number of errors in the correspondence with participants, although we reviewed only a small sample of letters for participants in certain complex plans. For example, we found a number of cases with corrected benefit determination letters and other correspondence that had been sent to rectify various errors, such as the failure to account for overpayments or inaccurate end dates for recoupment. We also identified some errors in the payment amounts or other information in the letters that we brought to PBGC's attention to be corrected. PBGC has developed more than 500 letter formats—in both English and Spanish—to address the myriad situations that may arise in the benefit determination process. Nevertheless, PBGC officials acknowledged that their standard letter formats may not always meet the needs of participants, especially those in complex plans, and they recently undertook a project to review and update their letters to better meet participant needs. According to PBGC officials, in September 2008, they began rolling out about 50 different versions of key letters to fit different circumstances. They also noted that the amount of detail and length of the benefit statements has varied over time—sometimes longer, sometimes shorter. Most recently, they have tended toward longer. They commented, however, that they are not sure it makes a difference either way, because for the most part, participants react to the benefit amount, not to the steps PBGC has used to arrive at the amount. Finally, they also noted that while the benefit amounts in the letters are verified by actuaries, the letters are prepared manually by Field Benefit Administration staff, using the standard formats, and until recently, these letters were not reviewed. Beginning in early 2009, however, plan analysts started to review the letters before mailing. Since streamlining its appeals process in 2003, PBGC has responded more quickly to correspondence sent to its Appeals Division (see fig. 12). It has reduced the average amount of time to decide an appeal by almost a year and has cut the average amount of time needed to resolve all appeals-related inquiries in half. At the same time, most appeals docketed since 2003 have not resulted in appellants receiving higher benefit amounts. A lack of understanding on the part of participants about how their benefits are calculated may contribute to unnecessary appeals. PBGC's appeals process was restructured in 2003 to create a triage system that makes more efficient use of agency resources and resolves cases more quickly. Previously, PBGC treated nearly every correspondence sent to its Appeals Division as an appeal.
The agency now evaluates correspondence to determine if it raises a question about how the plan was interpreted, how the law was interpreted, or the practices of the plan's sponsor, and it dockets correspondence as an appeal if it meets these criteria under the regulations. In analyzing appeals correspondence associated with plans trusteed by PBGC from fiscal year 2000 to fiscal year 2008, we found that since 2003, PBGC docketed as an appeal less than one-third of the correspondence received by the Appeals Division (see fig. 13). Correspondence concerning corrections to personal data, such as a participant's date of hire or length of service, is now directed to PBGC's Benefits Administration and Payment Department (Benefits Department) so that a corrected benefit determination can be issued more expeditiously. Additionally, in instances where a potential appellant requests a more detailed explanation of his or her benefit determination, the Benefits Department can quickly provide a detailed explanation based on its familiarity with the benefit calculation and relevant participant data. Further, under this triage approach, the Appeals Board staff, rather than the Appeals Board, responds to appeals received before a benefit determination has been issued or to claims that PBGC's recovery of overpayments creates a financial hardship and should be waived. Since streamlining the appeals process, PBGC has reduced its response time for appeals and other appeals-related inquiries without increasing the size of its appeals staff. According to agency data, PBGC reduced its average time for closing docketed appeals from 2.3 years to 1.4 years since implementing this triage approach. In fact, since fiscal year 2005, PBGC has averaged a response time of less than 10 months (see fig. 14). PBGC has also reduced the average age of pending appeals from about 2 years to less than 9 months since implementing its triage approach. We also found, on examining the 14,545 appeals-related correspondences associated with plans trusteed from fiscal year 2000 to fiscal year 2008, that PBGC responded to all correspondence in an average of less than 4 months after 2002 (fiscal years 2003 through 2009), as compared to an average of about 8 months prior to 2003 (fiscal years 2000 through 2002). However, there were also 852 cases of correspondence that had been pending for an average of nearly 7 months, as of April 2009. The procedural requirements of the appeals process do not appear to present barriers to appellants. Appellants are to provide a specific reason for their appeals and submit them within 45 days of their benefit determinations. Of the 3,637 closed appeals we examined, only 37 were closed because the appellant did not conform to a procedural requirement. Additionally, PBGC readily grants extensions. Within the correspondence we examined, PBGC granted 2,371 extension requests during fiscal years 2000 through 2008. More than 80 percent of appeals resulted in appellants receiving no increase in their benefit amounts. Of the 4,337 correspondences that were docketed as appeals since the beginning of fiscal year 2003, 3,637 had been decided as of April 2009. In most of these cases, the appeal decision resulted in no change to the participant's benefit determination amount (see fig. 15). However, appellants received a higher benefit amount in 18 percent of the cases.
For example, in one of the successful appeals, a Bethlehem Steel participant submitted copies of his medical records with his appeal, convincing the Appeals Board that he was eligible to receive a "permanent incapacity" benefit. In another case, a participant in the US Airways Inc. (pilots) plan had US Airways Inc. furnish documentation to PBGC that his date of hire had been adjusted as the result of a lawsuit, and with this new date of hire, PBGC considered the participant vested. In cases with no change in the participant's benefit determination amount, the amount of overpayment can grow significantly during an appeal. While cases are appealed, PBGC typically places a hold on any change in benefit until the appeal is resolved. Thus, in cases where the benefit determination amount is less than the estimated amount, the participant may continue to receive the higher estimated amount during an appeal. If the lower amount is ultimately upheld, we found that these continued higher payments could add significantly to the amount of the participant's overpayment—more than $10,000 in some cases. Although some appellants have successfully used the appeals process to increase their benefits, PBGC is not readily providing key information that would be helpful to participants in deciding whether or not to pursue an appeal. For example, the information PBGC provides on how it arrives at its benefit calculations can be difficult for potential appellants to understand. Plan provisions and guarantee limitations are often complicated, and it may be difficult for the average individual to interpret PBGC's benefit calculations, especially for complex plans. Based on Appeals Board findings, it appears that participants sometimes file appeals because they do not understand how the guarantee limitations affect their benefits. For example, the Appeals Board denied one Weirton Steel participant's appeal by explaining that the participant's estimated benefit included a temporary supplement that, ultimately, was not payable due to the accrued-at-normal limitation. In another case, the Appeals Board concluded that an Outboard Marine participant simply did not understand PBGC's benefit statement and explained the accrued-at-normal, maximum, and phase-in limitations while denying the participant's appeal. Even pension counselors and union representatives, who are knowledgeable about pensions and have experience filing appeals with PBGC, had difficulty understanding the materials provided to participants about their benefits. Several of the pension counselors and union representatives we interviewed told us that they have established contacts at PBGC who help them understand benefit determinations in appeals cases, and they, in turn, help convey this information to the participants they serve. Some have even held three-way calls with PBGC's customer service center and participants, so that they can help participants understand the information provided by PBGC. Additionally, representatives from the pension counseling centers we spoke with have actuarial support they consult for help interpreting complicated benefit calculations. In some cases, by helping participants understand their benefit calculations better, pension counselors told us they can also help participants avoid unnecessary appeals.
Some of those we interviewed also told us that a complete understanding of a participant's benefit determination—which is important for an effective appeal—cannot be obtained from a benefit determination letter alone. Several of these pension counselors and union representatives commented that they routinely file Freedom of Information Act requests, on a participant's behalf, to obtain more information about a participant's case from PBGC when preparing an appeal because there is not sufficient information in the benefit determination letter. Although PBGC provides a guide on how to use these requests on its Web site, PBGC's communications materials about the appeals process do not describe how individuals can gain access to PBGC's full benefit calculation records through a Freedom of Information Act request. The current economic downturn has already brought a new influx of pension plan terminations to PBGC, and more are expected to follow. While our findings reveal a reasonably good record of processing beneficiary cases and assuming responsibility for the payment of benefits since 2000, the loss of jobs at this time, as well as the impending retirement of the baby boom generation, leaves little room for anything short of high performance. This means acting as quickly and as efficiently as possible to value and allocate plan assets; to expedite the calculation of estimated benefits to reflect guarantee limits, as well as final benefit amounts; and to keep plan participants well informed throughout the benefit determination process. Workers and retirees in terminated plans who stand to lose as much as one-half or more of their long-anticipated retirement income will likely have to make painful financial adjustments, and due consideration in helping to ease that pain is warranted. The calculation of benefits according to complicated provisions that vary by plan is a challenging task. It becomes more so with the delays that can occur in valuing the assets of large and complex plans and determining how those assets are to be allocated among different groups. However, lengthy processing for some plans can be anticipated, and while PBGC has taken steps to expedite the processing of small and simpler plans, its approach to large and complex plans appears less than strategic. The hope of freeing up staff to handle complex plans by processing others more quickly will probably not be sufficient by itself for tackling difficult plans in the near future. Absent a calculated effort to anticipate and plan for such terminations, the heretofore modest number of beneficiaries caught in a protracted process could, indeed, grow in the next few years. While overpayments to those already in retirement have been infrequent, delays clearly exacerbate them. Moreover, the failure to communicate more often and clearly with participants awaiting a final determination can be disconcerting—especially when they receive the news that their final determination is "surprisingly" less than they anticipated, or when retirees learn that the estimated interim benefit they had been receiving was too high and that they owe money. PBGC's long recoupment period—which can be even further elongated by an appeal—may be a consolation to such retirees; however, the agency itself stands to lose considerable sums under this policy. This is another peril for an agency that may well be dealing with an increasing number of plan failures.
Clearer and more frequent communication with plan participants, including prompt and responsible adjustments to estimated benefits, more information about how their benefits are calculated, and where to find help if they wish to appeal, would better manage expectations, help people plan for their future, avoid unnecessary appeals, and earn good will in a trying time for all. To improve PBGC's benefit determination process, a more strategic approach is needed to prepare for and manage the calculation of benefit amounts and communications with participants in cases involving large, complex plans. Specifically, we recommend the following: PBGC should set goals for timeliness and monitor the progress made in finalizing benefit determinations for large, complex plans separately from other plans. To reduce the number and size of overpayments in large, complex plans, PBGC should prioritize the calculation of estimated benefits for retirees subject to the guarantee limits and adjust estimates, as needed, throughout the benefit determination process. To limit the growth of overpayments during appeals, PBGC should prioritize the processing of appeals for those already receiving benefits and should consider implementing the final benefit determination for retirees during the appeals process. PBGC should develop improved procedures for adapting and reviewing letters to participants in large, complex plans, such as by (1) providing more specific information in letters to participants who receive benefit reductions describing which limits were applied and why; (2) ensuring all letters to participants involving benefit reductions are reviewed for accuracy and coherence before being sent; and (3) establishing processes to more frequently communicate with participants who are experiencing delays in receiving final benefit determinations. PBGC should provide information or resources to help participants in large, complex plans better understand their benefit calculations and avoid unnecessary appeals. Specifically, PBGC's benefit determination letters should provide information, such as how participants can obtain additional information by using the Freedom of Information Act or other resources. We obtained written comments on a draft of this report from PBGC's acting director, which are reproduced in appendix VIII. PBGC also provided technical comments, which are incorporated into the report where appropriate. In addition, we provided copies of the draft report to the Departments of Commerce, Labor, and Treasury. In response to our draft report, PBGC generally concurred with our recommendations and outlined actions the agency has under way or plans to take in order to address each topic of concern. With respect to the first recommendation, PBGC agreed and noted that the agency has started to implement steps for tracking and monitoring tasks associated with processing large, complex plans. While we are pleased to learn of these steps being initiated, we would like to emphasize the importance of setting goals for processing large, complex plans and reporting progress toward meeting those goals separately from other plans. With respect to the second recommendation, PBGC agreed and commented that it generally already identifies and prioritizes cases where adjustments to estimated benefits are likely, but will continue to look for ways to improve its processes.
Moreover, despite possible legal concerns with implementing final benefit determinations prior to completion of the appeals process, the agency is willing to explore options for making earlier benefit adjustments, when appropriate. With respect to the third recommendation, PBGC agreed and noted that the agency is revising the guidelines for its benefit statements to better communicate the complexities of PBGC benefits and to better manage expectations of plan participants. The comments state that the agency will evaluate and make necessary modifications to its letter review process, as well as examine ways to more frequently and clearly communicate with participants experiencing delays in receiving final benefit determinations. Finally, with respect to the fourth recommendation, PBGC agreed to amend its appeals brochure to include information about accessing records through Freedom of Information Act requests. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Acting Director of PBGC, the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. To assess the timeliness and results of the Pension Benefit Guaranty Corporation's (PBGC) benefit determination process, we obtained automated data from PBGC on all plans terminated and trusteed during fiscal years 2000 through 2008, as well as data for all individuals associated with those plans. Three different data sets were provided: (1) a plan level data set, (2) an individual level data set with benefit data, and (3) an individual level data set with appeals data. The plan level data set, including 1,089 plans total, comprised three component groups: Group A – plans for which the valuation of assets and liabilities had been completed, as of February 2009 (909 plans). Group B – plans for which the valuation of assets and liabilities had not been completed, as of February 2009 (83 plans); when actual values were not yet available, estimated values for assets and/or liabilities were provided, and the participant count for these plans was based on audited data. Group C – plans for which the valuation of assets and liabilities was not completed, as of February 2009 (97 plans); estimated values for assets and/or liabilities were available for only some plans, and the participant count for these plans was based on preliminary data. We analyzed the plan level data to determine the length of time it takes PBGC to complete the valuation of a plan's assets and liabilities, on average. We also analyzed the plan level data to identify various plan characteristics, such as the fiscal year when trusteed by PBGC and the extent to which participants' benefits are affected by legal guarantee limits.
PBGC does not systematically track the number of participants affected by one or more of the three types of guaranteed benefit limits specified under the Employee Retirement Income Security Act of 1974 (ERISA) and related regulations—which include maximum, phase-in, and accrued-at-normal limits—or how much these limits affect participants’ benefit amounts. However, PBGC does systematically track each plan’s total benefit liabilities and the amount PBGC owes, taking into account the guarantee limits. The difference between these two amounts (referred to as the amount of “unfunded nonguaranteed benefits”) provides an indicator of the magnitude of the impact of guarantee limits on participants within each plan. If the amounts are the same, it means that no participants had benefits reduced due to these limits. If total liabilities are greater, it means that at least one participant had benefits reduced due to these limits. The individual level data set, with benefit data as of February 2009, included 1,487,679 individuals associated with 1,057,272 primary participants (the person who had earned the pension). The most common reasons for multiple individuals per case were situations where a portion of the pension was to be shared between the primary participant and another individual with a qualified domestic relations order (referred to in the data set as an “alternate payee”), or situations where a primary participant had died and the pension was being paid to a beneficiary. In our analyses, we aggregated the data so that the characterization of each case reflected the data for the primary participant, as well as all other individuals associated with that primary participant, as appropriate for the data element being analyzed. We analyzed the individual level data on benefits, by case, to determine the length of time it takes PBGC to make benefit determinations and the extent to which overpayments affect retirees’ benefits. To assess the time required for processing, we began by identifying all those participants whose benefit determinations had been completed. We then examined the length of time between the date the participant’s plan was trusteed and the date PBGC first issued a final benefit determination letter to the participant. (Subsequent benefit determination letters are sometimes issued when corrections are needed or when a participant successfully appeals.) For participants whose benefit determinations were still pending, we calculated the length of time between the plan’s trusteeship and February 18, 2009, when these data were provided, to determine how long the determinations had been awaiting completion. We also analyzed the length of time to process benefit determinations by participants’ retirement status at the time the plan terminated. To determine the proportion of participants possibly affected by overpayments, we first identified all those who had received estimated benefits and then compared the earliest available estimated benefit amount with the final benefit amount, by case, tabulating whether the difference was positive (indicating a likely overpayment) or negative (indicating a likely underpayment). Because estimated benefit amounts may be adjusted over time, and because the records on estimated benefits had sometimes been overwritten or deleted, we were not able to use these data to determine with certainty whether or not an overpayment or underpayment had been incurred, or the amounts involved. 
Instead, to assess the amount of overpayments incurred and the effect of repaying these debts on participants' benefits, we analyzed the data on recoupments. First, we identified all those who were listed as having amounts recouped to date, by case. We then used the available data on projected benefit reductions, which included the amount of the monthly reduction and the start date and end date for that reduction amount (sometimes involving up to four different reduction amounts), to calculate the amounts yet to be recouped. We determined the total amount of the overpayments, by case, by combining the data PBGC provided on amounts recouped to date with our calculation of amounts yet to be recouped. Based on a review of selected records in PBGC's image processing system for cases with the largest overpayments, it appears that these data are reliable for identifying whether a case has an overpayment, but not as reliable for determining the total amount of overpayments. We were able to verify that the participant with the largest overpayment, according to our analysis of these data, was correct: an LTV participant with an overpayment of about $152,000. Also, we found that the amounts calculated using these data were within 2 percent of the overpayment amounts in the records for 15 of the 24 cases reviewed—differences small enough to be explained by rounding. However, in the remaining 9 cases, the amounts calculated varied significantly from those in the records—some greater, some less. We investigated the 3 largest differences and found that all 3 were due to data entry errors in the PBGC data set. In 2 cases, PBGC officials told us that the end date for recoupment had been entered as 12/1/2099 by default, which was not correct. They said that they would implement a system fix to prevent inappropriate use of this default in the future. In the third case, we found that the monthly payment amount had been inadvertently entered as the monthly reduction amount. None of these errors had resulted in inaccurate payments to participants, since all involved future recoupment amounts. However, it appears that the reliability of these data for calculating total overpayment amounts is limited. We also analyzed the individual level data, by case, to identify various case characteristics, aggregating the data together for all individuals associated with the same case. These characteristics included the final benefit amount (with and without any benefit reduction due to recoupment), and the projected age of the youngest individual at the end of recoupment for cases with overpayments greater than $10,000. We then combined the plan level data and individual level data, by case, to determine the number of individuals and cases associated with each plan, and to identify those plans with the most cases that took 4 or more years to provide a final benefit determination, and the most cases with overpayments. We also used these data to generate lists of cases for more detailed reviews of documents in PBGC's image processing system and to examine more closely the cases that took the longest to provide a benefit determination and that had the largest overpayments and benefit reductions. In addition to the automated data, PBGC maintains records that are individually scanned into an image processing system. The types of documents we reviewed in PBGC's image processing system included both plan documents and participant records.
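The recoupment calculation described above (amounts recouped to date plus projected reductions, each defined by a monthly amount and a start and end date) can be expressed compactly. A minimal sketch, assuming whole-month schedules; the record layout and figures are illustrative, not PBGC's:

```python
from datetime import date

def months_between(start, end):
    """Number of monthly payments from start through end, inclusive."""
    return (end.year - start.year) * 12 + (end.month - start.month) + 1

def total_overpayment(recouped_to_date, reduction_schedules):
    """Recouped-to-date plus projected future reductions, where each
    schedule is a (monthly_reduction, start_date, end_date) tuple
    (a case can carry up to four such schedules)."""
    future = sum(amount * months_between(start, end)
                 for amount, start, end in reduction_schedules)
    return recouped_to_date + future

# Hypothetical case: $365 already repaid, then $16 per month projected
# from March 2009 through February 2014 (60 payments).
schedules = [(16.0, date(2009, 3, 1), date(2014, 2, 1))]
print(total_overpayment(365.0, schedules))  # 365 + 16 * 60 = 1325.0
```

A default end date of 12/1/2099 or a payment amount entered in the reduction field, as in the errors noted above, would inflate the computed total, which is why the record-level checks mattered.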
On the plan level, we reviewed documents for the plans most affected by guarantee limits, by delays in processing, and by overpayments (see appendix VI). For the 10 plans ranking highest in each of these categories, we typically reviewed the “actuarial case memo,” which summarizes all the steps taken to obtain records and determine the value of assets and liabilities for each plan terminated and trusteed by PBGC. We then selected five of these plans for more detailed review of participant records in order to illustrate key trends identified in our analysis of the automated data. These five plans were: Bethlehem Steel, LTV Steel, RTI-United Steelworkers of America (USWA), US Airways, and Weirton Steel. For Bethlehem Steel, we randomly selected five participants from among those participants whose benefit determinations were still pending. For each of the other four plans, we randomly selected five participants from the lists of participants provided in the plans’ actuarial memos. Then, for each of these participants, we typically reviewed all letters sent to the participant, all benefit calculation documents, and the internal correspondence among PBGC staff about the case. We reviewed the letters to participants only to determine if they accurately conveyed information documented elsewhere in the files. We did not attempt to verify PBGC calculations of benefit amounts. Finally, to assess the length of time it takes PBGC to provide a decision when a participant appeals, we examined PBGC data on the average time to close docketed appeals and the average age of pending appeals, by fiscal year, 2000 through 2008. We also analyzed the 14,545 appeals-related correspondences associated with plans terminated and trusteed during fiscal years 2000 through 2008 so that we could make comparisons between PBGC’s average response time, both before and after its restructuring of the appeals process. Data reflect multiple correspondences associated with individual cases. For correspondences that were pending, we calculated the amount of time between when PBGC received these correspondences and April 13, 2009, when we received these data. To describe PBGC’s triaging system, which was implemented in fiscal year 2003, we analyzed PBGC’s action taken code for each correspondence and aggregated these results into two groups: those correspondences received during fiscal years 2000 through 2002, and those received during fiscal years 2003 through 2008. Of the 4,337 correspondences that were docketed as appeals, 3,495 had been decided as of April 2009. We then tabulated the data on the outcomes of closed appeals and the reasons why these appeals were closed, which were coded to indicate whether a change to the benefit amount occurred. American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) (http://www.aflcio.org) A voluntary federation of 56 national and international labor unions, representing 11 million members in a variety of industries. Air Line Pilots Association (http://www.alpa.org) The largest airline pilot union in the world, representing nearly 54,000 pilots at 36 U.S. and Canadian airlines. Association of Flight Attendants-Communications Workers of America (http://www.afanet.org) The world’s largest flight attendant labor union, organized by flight attendants for flight attendants, representing over 55,000 flight attendants at 20 airlines. 
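A minimal sketch of the grouping just described, assuming each correspondence record carries the fiscal year received and a response time in months; the layout and figures are illustrative, not PBGC's actual data:

```python
def average_response_by_era(records):
    """Split correspondence records into pre- and post-triage groups
    (fiscal years 2000-2002 versus 2003 and later) and compute the
    average response time, in months, for each group."""
    eras = {"FY2000-2002": [], "FY2003+": []}
    for fiscal_year, months_to_respond in records:
        key = "FY2000-2002" if fiscal_year <= 2002 else "FY2003+"
        eras[key].append(months_to_respond)
    return {era: sum(times) / len(times)
            for era, times in eras.items() if times}

# Hypothetical records: (fiscal year received, months to respond).
records = [(2001, 8), (2002, 9), (2004, 4), (2006, 3), (2008, 4)]
print(average_response_by_era(records))
# {'FY2000-2002': 8.5, 'FY2003+': 3.67 (approximately)}
```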
New England Pension Assistance Project (http://www.pensionaction.org/nepap.htm) One of the six regional projects funded by the Administration on Aging to provide free pension counseling services. Initially, the project served only Massachusetts residents, but in 1998, it expanded to help residents of the six-state New England region: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. Ohio Pension Rights Center (http://www.proseniors.org/oh_pension.html) Part of one of the six regional projects funded by the Administration on Aging to provide free pension counseling services. The Ohio Pension Rights Center shares a grant with the Michigan Pension Rights Project and provides all types of pension assistance to people in Michigan, Ohio, Pennsylvania, Kentucky, and Tennessee. Pension Rights Center (http://www.pensionrights.org) Provides legal consultation and training to the six regional projects funded by the Administration on Aging to provide pension counseling services for individuals who need help in understanding and enforcing their pension and retirement savings plan rights. United Steelworkers (http://www.usw.org/) The largest industrial labor union in North America, representing 1.2 million current and retired workers in industries that include primary and fabricated metals, mining, chemicals, paper, glass, rubber, transportation, utilities, container industries, pharmaceuticals, call centers, and health care. Members of the Reliance Group Holdings Inc. plan and the Reliance Insurance Company plan, which were among the plans most affected by long processing times. Members of the Republic Technologies International USWA and USS/KOBE plans, which were among the plans most affected by the guarantee limits, long processing times, and/or overpayments. Members of United Air Lines ground employees plan and pilots’ plan. Upon the termination of a single-employer plan, plan assets are identified, valued, and then allocated to participant benefits, in accordance with the provisions in ERISA, section 4044. In addition to plan assets, any monies from company assets that PBGC recovers for unfunded benefit liabilities are allocated to participant benefits, in accordance with the provisions in ERISA, section 4022(c). The amount of plan assets available to pay for participant benefits includes all plan assets remaining after the subtraction of all prior or current liabilities paid or payable from the plan. This amount includes the value of the collectible portion of any due and unpaid employer contributions. Liabilities include expenses, fees and other administrative costs, and benefit payments due before the allocation date. For plans terminated and trusteed by PBGC, assets are valued and the allocation determined based on liabilities as of the termination date. Plan assets available to pay for benefits under the plan are allocated to participant benefits according to six priority categories, as described in Table 3. Assets are allocated to each priority category in succession, beginning with priority category 1. If the plan has sufficient assets to pay for all benefits in a priority category, the remaining assets are allocated to the next lower priority category. This process is repeated until all benefits in priority categories 1 through 6 have been provided or until all available plan assets have been allocated. Most private sector defined benefit plans do not require or allow participant contributions. 
Thus, in most trusteed plans, asset allocation begins with the benefits in priority category 3, that is, the benefits of those retired or eligible to retire 3 years before the plan terminated. However, it should be noted that assets are allocated based on type of benefit, not retirement status, and that many participants have benefits in more than one category. Except for priority category 5, which includes benefits subject to the phase-in limit, if the plan assets available for allocation to any priority category are insufficient to pay for all benefits in that priority category, those assets are distributed among the participants according to the ratio that the value of each participant’s benefit or benefits in that priority category bears to the total value of all benefits in that priority category. If the plan assets available for allocation to priority category 5 are insufficient to pay for all benefits in that category, the assets are allocated by date of plan amendment, oldest to newest, until all plan assets available for allocation have been exhausted. Within each priority category, once the amount of assets to be allocated to each participant has been determined, assets are allocated first to the participant’s “basic-type” benefits (which include benefits that are guaranteed by PBGC, or that would be guaranteed but for the maximum and phase-in limits), and then to the participant’s “nonbasic-type” benefits (which include all other benefits). Plan assets are distributed according to the process described above until all have been allocated. Thus, to the extent plan assets are available for allocation under this scheme, some participants may have some or all their nonguaranteed benefits paid. For example, in the scenario illustrated in figure 16, sufficient plan assets are available to cover all priority category 3 guaranteed and nonguaranteed benefits, as well as a portion of priority category 4 guaranteed benefits. PBGC would then pay the remaining guaranteed benefits in priority category 4, but all remaining benefits (that is, priority categories 5 and 6 benefits, which are all nonguaranteed benefits), would not be paid, and participants would have their benefits reduced accordingly, unless there are recoveries of company assets that can be allocated to benefits, as discussed below. Section 4022(c), added to ERISA in 1987, requires PBGC to share with participants a portion of its recoveries resulting from an employer liability claim against the plan sponsor and other liable parties, usually in bankruptcy. As a result, a portion of participants’ losses of unfunded nonguaranteed benefits can be paid. Where a plan’s unfunded nonguaranteed benefits exceed $20 million, the total amount paid under §4022(c) depends on PBGC’s actual recoveries in that case. In all other cases, the amount paid is determined by an average of PBGC’s recoveries over a 5-year period. PBGC allocates the participants’ portion of the §4022(c) amount, as described above, to participants’ unfunded nonguaranteed benefits using the same priority categories and procedures outlined above for the §4044 asset allocation process. The allocation begins with the highest priority category in which there are unfunded nonguaranteed benefits, and then to each lower priority category, in succession. 
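The §4044 process described above is, in effect, a waterfall with pro-rata sharing in the first category that assets cannot fully cover. A minimal sketch under those assumptions (priority category 5's amendment-date ordering and the basic-type/nonbasic-type split within each participant's allocation are not modeled):

```python
def allocate_assets(assets, categories):
    """Waterfall allocation of plan assets across priority categories.

    categories: list of dicts mapping participant -> benefit value in
    that category, ordered from priority category 1 to 6. Returns a
    mapping of participant -> total allocated. The first category that
    cannot be fully funded is shared pro rata by benefit value.
    """
    allocated = {}
    remaining = assets
    for category in categories:
        total = sum(category.values())
        if total == 0:
            continue
        ratio = min(1.0, remaining / total)
        for participant, value in category.items():
            allocated[participant] = allocated.get(participant, 0.0) + value * ratio
        remaining -= min(remaining, total)
        if remaining <= 0:
            break
    return allocated

# Hypothetical example with two categories: the first is fully covered;
# the second is shared pro rata (only half of its $200 in benefits can
# be paid from the $100 that remains).
cats = [{"A": 100.0, "B": 50.0}, {"A": 120.0, "C": 80.0}]
print(allocate_assets(250.0, cats))
# {'A': 160.0, 'B': 50.0, 'C': 40.0}
```

Note that a participant (here, "A") can hold benefits in more than one category, consistent with the point above that assets are allocated by type of benefit rather than by retirement status.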
If the plan §4022(c) amount to be allocated in a particular priority category is not sufficient to pay all the unfunded nonguaranteed benefits in that category, the amount will be allocated within the category as described above for the §4044 allocation process. As noted by one employee group we spoke with, it is more advantageous for participants for assets to be considered recoveries allocated under §4022(c) than plan assets allocated under §4044, because recoveries are shared between PBGC and participants. For example, to continue with the scenario introduced above, if company assets are recovered, some would be allocated to pay a portion of the guaranteed benefits in priority category 4 that PBGC would pay to participants regardless, and some would be allocated to pay a portion of priority category 5 nonguaranteed benefits that would not have been paid otherwise (see fig. 17). To help illustrate this process, we gathered data from plan records about §4044 and §4022(c) asset allocation for 10 large, complex plans. The results are summarized in table 4. The statutory and regulatory limits on guaranteed benefits can be difficult for many participants to understand. The following schematic distills the application of these limits into a series of questions, one for each type of limit: phase-in, accrued-at-normal, and maximum. We selected three terminated pension plans to profile as examples of large, complex plans: Bethlehem Steel, RTI (USWA), and US Airways (pilots) (see appendix VI). All three were among the 10 plans most affected by the guarantee limits. In addition, both Bethlehem Steel and RTI (but not US Airways) were among the 10 plans most affected by processing delays and by overpayments. [Table: percentage of total unfunded nonguaranteed benefits ($8,522,175,078) by plan; rows included National Steel Corporation (retirement plan), Kaiser Aluminum and Chemical Corp. (hourly plan), and Kaiser Aluminum and Chemical Corp. (salaried plan).] In addition to the contact named above, Blake L. Ainsworth, Assistant Director; Margie K. Shields, Analyst-in-Charge; Kristen W. Jones; and Wayne Turowski made significant contributions to this report. Joseph A. Applebaum, Jeffrey L. Bernstein, Susan C. Bernstein, Jena Y. Sinkfield, Walter K. Vance, and Craig H. Winslow also made important contributions. Pension Benefit Guaranty Corporation: Financial Challenges Highlight Need for Improved Governance and Management, GAO-09-702T (Washington, D.C.: May 20, 2009). Auto Industry: Summary of Government Efforts and Automakers' Restructuring to Date, GAO-09-553 (Washington, D.C.: April 23, 2009). Defined Benefit Pensions: Survey Results of the Nation's Largest Private Defined Benefit Plan Sponsors, GAO-09-291 (Washington, D.C.: March 30, 2009). Pension Benefit Guaranty Corporation: Improvements Needed to Address Financial and Management Challenges, GAO-08-1162T (Washington, D.C.: September 24, 2008). Pension Benefit Guaranty Corporation: Some Steps Have Been Taken to Improve Contracting, but a More Strategic Approach Is Needed, GAO-08-871 (Washington, D.C.: August 18, 2008). Defined Benefit Pensions: Plan Freezes Affect Millions of Participants and May Pose Retirement Income Challenges, GAO-08-817 (Washington, D.C.: July 21, 2008). Pension Benefit Guaranty Corporation: A More Strategic Approach Could Improve Human Capital Management, GAO-08-624 (Washington, D.C.: June 12, 2008). Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight, GAO-07-808 (Washington, D.C.: July 6, 2007).
As the insurer of over 29,000 private sector defined benefit plans, the Pension Benefit Guaranty Corporation (PBGC) may be required to assume responsibility for the plans of a growing number of companies filing for bankruptcy due to the recession. Concerns about PBGC's benefit determination process, reductions in benefits due to guarantee limits, and workers' retirement security overall led the chairmen and ranking members of the Senate Health, Education, Labor, and Pensions Committee and the Senate Finance Committee, among others, to ask GAO to study (1) how long it takes PBGC to make benefit determinations; (2) the extent of overpayments on retirees' benefits; (3) how well PBGC communicates with participants; and (4) the timeliness and accessibility of the appeals process. To conduct this study, GAO reviewed PBGC policies and procedures, analyzed automated data and case files, and interviewed PBGC officials and certain associations, participants, and their representatives from among those most affected by the process.

GAO's review of plans terminated with insufficient funds and trusteed by PBGC during fiscal years 2000 through 2008 revealed that a small number of complex plans, especially those with large numbers of participants affected by limits on guaranteed benefit amounts, accounted for most cases with lengthy delays and overpayments.

Processing times. PBGC completed most participants' benefit determinations in less than 3 years but required more time, up to 9 years, to process determinations for complex plans and plans with missing data. Nearly three-quarters of the lengthiest processing times were associated with individuals in just 10 of the 1,089 plans reviewed. Although PBGC has taken steps to shorten this process, its initiatives do not address the longest delays.

Overpayments. Although many participants are affected by sizable benefit reductions, the vast majority are not affected by overpayments. Moreover, nearly two-thirds of overpayments involved participants in just 10 plans. In cases with overpayments, PBGC's policy generally requires participants' benefits to be reduced by no more than 10 percent until the amount owed is repaid, but due to participants' ages, the full amount often is never recouped (a worked sketch of this recoupment arithmetic follows below).

Communication. PBGC has made efforts to improve communication, but key correspondence often did not meet the needs of those in complex plans. For example, when the process was lengthy, PBGC did not communicate with some participants for several years. When the benefit calculation was complicated, PBGC did not provide explanations that could be understood without further information or assistance.

Appeals. Since restructuring the appeals process in 2003, PBGC has reduced the average amount of time needed to decide an appeal by almost a year. However, the agency does not readily provide key information that would be helpful to participants in deciding whether to pursue an appeal.
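To make the recoupment arithmetic concrete, consider a minimal Python sketch. The benefit amount and overpayment below are invented for illustration and are not drawn from PBGC case files; only the 10 percent cap on the monthly reduction reflects the policy described above.

```python
# Minimal sketch of PBGC-style overpayment recoupment, using invented figures.
# Only the 10 percent cap on the benefit reduction reflects the policy above.

def months_to_recoup(overpayment: float, corrected_benefit: float, cap: float = 0.10) -> float:
    """Months needed to repay an overpayment when the monthly reduction
    is limited to `cap` times the corrected monthly benefit."""
    return overpayment / (corrected_benefit * cap)

# Illustrative case: $15,000 overpaid; corrected benefit is $1,200 per month.
months = months_to_recoup(overpayment=15_000, corrected_benefit=1_200)
print(f"{months:.0f} months (~{months / 12:.1f} years) to full recoupment")
# Output: 125 months (~10.4 years). A participant who is already elderly may
# not receive benefits that long, which is why the full amount often is
# never recouped.
```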
FCC regulates many aspects of television and radio station ownership. Laws and regulations limit the ownership of television stations, both nationwide and locally, and limit the ownership of radio stations locally. Since the 1970s, the number of media outlets has increased dramatically, with large increases in the number of television and radio stations; additionally, the number of broadcast networks has increased. More recently, however, some segments of the media industry have undergone consolidation, with a few companies acquiring a significant number of outlets.

Through provisions in the Communications Act of 1934, as amended, FCC regulates various aspects of television, radio, cable, and satellite service. FCC has three policy goals for media ownership: competition, diversity, and localism; within the goal of diversity, FCC identified viewpoint, outlet, program, source, and minority and female diversity. On December 18, 2007, FCC took action on a number of items affecting media ownership. FCC revised its ban on the ownership of a newspaper and broadcast station in the same market. FCC set a cap on the number of subscribers that a cable operator can serve nationwide and sought comments on vertical ownership limits and cable and broadcast attribution rules. FCC also adopted rules to help new entrants and small businesses, including minority- and women-owned businesses, gain access to financing, such as by modifying the commission's construction permit deadlines, and adopted a notice of proposed rulemaking that, among other things, sought comment on how best to improve collection of data regarding the gender, race, and ethnicity of broadcast licensees. Finally, FCC adopted a report on broadcast localism and a notice of proposed rulemaking.

Six restrictions on the ownership of television stations, radio stations, and broadcast networks follow:

National television ownership cap. A single entity can own any number of television stations nationwide as long as the stations collectively reach no more than 39 percent of national television households. For purposes of calculating the 39 percent limit, ultra-high frequency (UHF) television stations are attributed with 50 percent of the television households in their market, which FCC refers to as the UHF discount.

Local television ownership limit. A single entity can own two television stations in the same Designated Market Area (DMA) if (1) the "Grade B" contours of the stations do not overlap or (2) at least one of the stations is not ranked among the top four stations in terms of audience share and at least eight independently owned and operating full-power commercial or noncommercial television stations would remain in the DMA.

Local radio ownership limit. A single entity can own up to 5 commercial radio stations, not more than 3 of which are in the same service (that is, AM or FM), in a market with 14 or fewer radio stations, except that an entity cannot own, operate, or control more than 50 percent of the stations in a market; up to 6 commercial radio stations, not more than 4 of which are in the same service, in a market with 15 to 29 radio stations; up to 7 commercial radio stations, not more than 4 of which are in the same service, in a market with 30 to 44 radio stations; and up to 8 commercial radio stations, not more than 5 of which are in the same service, in a market with 45 or more radio stations.

Newspaper-broadcast cross-ownership ban.
Following the effective date of a new approach released by FCC on February 4, 2008, the commission will presume that a proposed newspaper-broadcast transaction is in the public interest if it meets the following test: (1) the market at issue is one of the 20 largest DMAs; (2) the transaction involves the combination of only 1 major daily newspaper and only 1 television or radio station; (3) if the transaction involves a television station, at least 8 independently owned and operating major media voices would remain in the DMA following the transaction; and (4) if the transaction involves a television station, that station is not among the top 4 ranked stations in the DMA. All other proposed newspaper-broadcast transactions would be presumed not in the public interest. This new approach will replace an absolute ban, which prohibits a single entity from having common ownership of a full-power television or radio station and a daily newspaper if the television station's "Grade A" contour or the radio station's principal community service area completely encompasses the newspaper's city of publication.

Television-radio cross-ownership limit. A single entity can own up to 2 television stations (if permitted under the local television multiple ownership cap) and up to 6 radio stations (if permitted under the local radio multiple ownership cap), or 1 television station and 7 radio stations, in a market with at least 20 independently owned media voices remaining post merger; up to 2 television stations and up to 4 radio stations in a market with at least 10 independently owned media voices remaining post merger; and 1 television station and 1 radio station regardless of the number of independently owned media voices.

Dual network rule. A single entity can own multiple broadcast networks but cannot own two or more of the top four networks (that is, ABC, CBS, FOX, and NBC). (A simple illustrative sketch applying several of these numerical limits appears below, following table 1.)

In its December 18, 2007, action, FCC adopted rules limiting the number of subscribers that a cable operator can serve nationwide. While FCC first set limits on the number of subscribers that a cable operator could serve in 1993 and later modified its rules in 1999, the Court of Appeals for the District of Columbia Circuit reversed and remanded those rules. FCC's new rules limit a single cable operator to serving 30 percent of subscribers nationwide.

Since the 1970s, the number of media outlets has increased dramatically, with large increases in the number of television and radio stations. In the case of television, the number of full-power television stations increased from 875 in 1970 to 1,754 in 2006; this increase occurred in both commercial and noncommercial educational television stations. Moreover, the number of broadcast networks that supply programming to stations across the country increased from three major networks (ABC, CBS, and NBC) to four major networks (ABC, CBS, FOX, and NBC) and several smaller networks, such as The CW Television Network, MY Network TV, and ION Television Network. In the case of radio, the number of full-power radio stations more than doubled, from 6,751 stations in 1970 to 13,793 stations in 2006, with increases in AM, FM, and FM educational stations. Daily newspapers illustrate a different trend, decreasing from 1,763 in 1970 to 1,447 in 2006. While the number of morning newspapers increased from 334 in 1970 to 833 in 2006, the number of evening newspapers decreased by more than half, from 1,429 to 614. Table 1 illustrates the trends in television and radio stations and newspapers.
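The sketch below illustrates how two of the numerical limits above can be applied. All station counts, market sizes, and household figures are invented for illustration; only the thresholds (the 39 percent national cap, the 50 percent UHF discount, and the tiered local radio caps) are taken from the rules described above.

```python
# Minimal sketch of two numerical ownership screens described above.
# All inputs are invented examples; only the thresholds come from the rules.

def national_tv_reach_pct(stations, total_tv_households):
    """Percent of national TV households reached, applying the 50 percent
    UHF discount. `stations` is a list of (market_households, is_uhf) pairs."""
    reach = sum(hh * (0.5 if is_uhf else 1.0) for hh, is_uhf in stations)
    return 100.0 * reach / total_tv_households

def local_radio_caps(stations_in_market):
    """(max stations, max in one service) under the tiered local radio limits."""
    if stations_in_market >= 45:
        return 8, 5
    if stations_in_market >= 30:
        return 7, 4
    if stations_in_market >= 15:
        return 6, 4
    return 5, 3  # also limited to 50 percent of the stations in the market

# Hypothetical group: two VHF stations and one UHF station.
group = [(7_400_000, False), (5_600_000, False), (3_400_000, True)]
reach = national_tv_reach_pct(group, total_tv_households=112_000_000)
print(f"National reach: {reach:.1f}% (39% cap)")  # 13.1%, under the cap
print(local_radio_caps(32))                       # (7, 4) in a 32-station market
```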
Since the 1970s, the number of households subscribing to a multichannel video programming distributor (MVPD) has increased significantly, thereby increasing the programming options available to many households. The two most prominent MVPD platforms are cable and direct broadcast satellite (DBS) services. The number of households subscribing to cable service increased from approximately 10 million in 1975 to nearly 66 million in 2006, and the number subscribing to DBS service increased from 2.2 million in 1995 to over 29 million in 2006. Table 2 illustrates the number of cable and DBS subscribers. According to FCC's most recent report on cable industry prices, the average cable operator provided over 70 channels of programming, thereby expanding the programming options available to subscribers of these services. These nonbroadcast networks include a variety of national outlets, such as CNN, Discovery Channel, ESPN, and FOX News, as well as regional outlets, such as the California Channel, Comcast SportsNet Chicago, and New England Cable News.

While the number of media outlets has increased, the ownership of outlets has evolved. In 1995, FCC eliminated the Financial Interest and Syndication Rules, which had limited the ability of broadcast networks to have an ownership interest in programming broadcast on their networks. Subsequently, the broadcast networks increasingly became affiliated with companies providing program production services. The Walt Disney Company acquired ABC, Viacom acquired CBS, and NBC merged with Universal to form NBC Universal. News Corporation, which launched the Fox Broadcasting Network in 1986, also owns several production studios, including 20th Century Fox. Each of the four major broadcast networks owns television stations that reach more than 20 percent of the nation's television households. Other significant owners of television stations include ION Media Networks, Tribune Company, and Broadcasting Media Partners, Inc. Following passage of the Telecommunications Act of 1996 (1996 Act), several companies acquired a large number of radio stations. Clear Channel owned over 1,000 radio stations throughout the United States, and Cumulus Broadcasting and Citadel Communications each owned over 200 stations.

The cable industry also experienced evolution in the ownership of some properties. Cable operators, who distribute programming to subscribers, are pursuing a strategy of regional clustering; this strategy involves acquiring cable systems throughout a geographic region. In its most recent report on video competition, FCC estimated that there were 118 clusters with approximately 51.5 million subscribers. Comcast and Time Warner Cable have emerged as the largest cable operators, with 26.8 and 16.6 million subscribers, respectively. While cable operators provide many nonbroadcast networks to their subscribers, many nonbroadcast networks are owned by cable operators or broadcast networks. For example, among the nonbroadcast networks with the most subscribers, CNN and TNT are affiliated with Time Warner, ESPN is affiliated with Disney, USA Network is affiliated with NBC-Universal, and Discovery Channel is affiliated with Cox, a large cable operator. On December 18, 2007, FCC adopted a further notice that seeks comment on vertical ownership limits and cable and broadcast attribution rules, including, for example, the extent to which vertical integration can lead to foreclosure of entry by unaffiliated programmers. In recent years, some companies have taken steps to sell assets.
In 2005, Viacom split into two separate companies: Viacom and CBS Corporation. The new Viacom includes many of the cable networks, such as MTV and Nickelodeon, and CBS Corporation includes the broadcast network and CBS television and radio stations. In 2006, The McClatchy Company acquired Knight Ridder, one of the nation's largest newspaper companies, and subsequently sold 12 former Knight Ridder newspapers. For example, The Philadelphia Inquirer and Philadelphia Daily News, former Knight Ridder newspapers, are currently owned by Philadelphia Media Holdings LLC, a private company. Also in 2006, Clear Channel announced plans to sell 448 radio stations, all in markets outside the top 100, and its entire television station group. More recently, The New York Times Company sold its television stations and one of its radio stations. Meanwhile, the two satellite radio companies, Sirius and XM, have proposed a merger that, if approved, would leave one company providing satellite radio service.

Markets with large populations have more television, radio, and newspaper outlets than less populated markets. In more diverse markets, we also observed more radio and television stations and newspapers operating in languages other than English, which contributed to a greater number of outlets. Some companies participate in agreements to share content or agreements that allow one entity to produce programming or sell advertising through two outlets, among other arrangements. In our case study markets, these agreements were prevalent in a variety of markets, but not in the top three markets: New York, Los Angeles, and Chicago. Finally, we found that the Internet expands access to media content; however, we observed few news Web sites in our case study markets that were unaffiliated with traditional media outlets.

Markets with large populations have more television, radio, and newspaper outlets than less populated media markets. Additionally, the presence of a large Hispanic population in a media market increases the number of outlets, as owners seek to provide Spanish-language outlets in addition to the full range of English-language outlets supported by the population level.

The top three media markets—New York, New York (1); Los Angeles, California (2); and Chicago, Illinois (3)—have several attributes that set them apart from other markets. First, these markets have very large populations; each has more than 3 million households. Second, these markets have very diverse populations. For example, New York is the largest African-American media market and the second-largest Asian and Hispanic media market; Los Angeles is the largest Asian and Hispanic media market and the sixth-largest African-American media market; and Chicago is the third-largest African-American media market and the fifth-largest Asian and Hispanic media market. Third, these markets generally have high average household disposable income; the New York market ranks fourth highest in the United States, the Los Angeles market ranks twenty-fourth, and the Chicago market ranks seventh. Finally, these markets are the production and distribution points for much of the media content in the United States, from films, television shows, and radio programs to magazines and periodicals. The top three media markets differ qualitatively from other markets in the large and varied number of media outlets present in these markets.
The combination of large populations and relatively high disposable income helps produce substantial advertising revenues for the media outlets in these markets. These markets have more television and radio stations and more newspapers than other media markets, and competition for cable service from overbuilders also is more likely in these markets. Since these markets have diverse populations, each market has numerous broadcast outlets that provide content in languages other than English. While Spanish is the most common language for non-English media, outlets for content in Chinese, Korean, and other languages are also present. Table 3 indicates how many outlets are located in the top three markets.

FCC's rules allow greater group ownership of media outlets in these three markets because of their size. There are four television duopolies (common ownership of two television stations) in New York, three duopolies in Los Angeles, and three duopolies in Chicago. In addition, several companies own multiple radio stations in these markets; FCC's rules allow for common ownership of eight radio stations, no more than five of which can be in the same service (AM or FM), in these markets. There are some jointly owned newspaper and television stations and newspaper and radio stations in each of these markets. Even with the allowance for group ownership, these three markets still possess a great number of owners who each operate a single broadcast outlet in either radio or television in the respective market. Appendix II provides a more detailed description of the media ownership for all 16 case study markets.

Of the four large markets we studied—Miami/Fort Lauderdale, Florida (16); Charlotte, North Carolina (26); Nashville, Tennessee (30); and Wilkes Barre/Scranton, Pennsylvania (53)—the Miami/Fort Lauderdale market has the most television stations and in this respect more closely resembles the top three media markets than the other large media markets do. This is due to the large number of Spanish-language outlets present; the Miami/Fort Lauderdale area is the third-largest Hispanic media market in the United States. In addition to television stations, the Miami/Fort Lauderdale market has three daily newspapers, two of which are in Spanish. The other three media markets in this size category have fewer television stations. Only the Miami/Fort Lauderdale market had competition for cable service, which also is present in the top three markets. The number of outlets decreased markedly between the three larger markets in this category and Wilkes Barre/Scranton (the 53rd-largest market). We could not determine whether this was due to a change in the number of outlets that can be supported between the 30th-largest market (Nashville) and the 53rd-largest market, the lack of a core urban area in Wilkes Barre/Scranton, or the relatively weak economy in Wilkes Barre/Scranton. See table 4 for the number of outlets in the large case study markets.

The medium-size markets we analyzed are Tucson, Arizona (68); Springfield, Missouri (76); Chattanooga, Tennessee (86); Cedar Rapids/Waterloo/Iowa City/Dubuque, Iowa (89); and Myrtle Beach/Florence, South Carolina (105). Similar to the Miami/Fort Lauderdale market, the Tucson market has more television stations than the other case study markets in its size category, mainly because of the relatively large Hispanic population in this medium-size market.
There are eight English-language television stations in Tucson, which is similar to the number in the other four medium-size markets. However, Tucson has a relatively large Hispanic population and therefore possesses a larger number of media outlets due to the presence of Spanish-language television and radio stations. Television markets that lack a dominant urban area and contain two or more large towns located some distance apart are often split into smaller radio markets. The Cedar Rapids/Waterloo/Iowa City/Dubuque DMA contains three Arbitron radio markets, and the Myrtle Beach/Florence DMA is split into two separate Arbitron radio markets. See table 5 for the number of outlets in the medium-size case study markets.

The small markets we analyzed are Terre Haute, Indiana (151); Sherman, Texas/Ada, Oklahoma (161); Jackson, Tennessee (174); and Harrisonburg, Virginia (181). These small markets are characterized by significantly fewer media outlets—television stations, radio stations, and newspapers—than the larger markets. Table 6 illustrates the number of outlets in the small case study markets. For these markets, the conversion to digital broadcasting offers the possibility of improving the free, over-the-air choices available to residents. Already, commercial television stations in Sherman/Ada and Harrisonburg use a second digital channel to provide the signal from a broadcast network that is not otherwise present in the market. For example, WHSV in Harrisonburg, an ABC affiliate, broadcasts the FOX network on one of the station's digital channels.

Some media companies participate in operating agreements that involve a partnership between two or more outlets. For example, some media companies participate in agreements to share content among several outlets. Other media companies participate in agreements wherein one company produces content or sells advertising through its own outlets and another company's outlets. These operating agreements are referred to, by industry participants or in FCC's rules, by a variety of names, including joint sales agreements, local marketing agreements, and time brokerage agreements. FCC's attribution rules—which seek to identify those interests in or relationships to licensees that have a realistic potential to affect the programming decisions of licensees or other core operating functions—apply to several types of operating agreements. Additionally, the Newspaper Preservation Act of 1970 allows two competing newspapers in one community to merge some operations to help ensure the survival of both newspapers; the resulting arrangements are referred to as joint operating agreements.

In our 16 case study markets, we found several instances of media companies participating in operating agreements. We found these agreements in a variety of markets but not in the top three markets, suggesting that market size may influence the benefits that companies realize through such agreements. We found television stations participating in operating agreements in five markets: Nashville, Wilkes Barre/Scranton, Springfield, Myrtle Beach/Florence, and Terre Haute. In Springfield, there were two operating agreements between television stations, and in Wilkes Barre/Scranton there were three operating agreements between television stations. We also found operating agreements between radio stations in Harrisonburg and Nashville. Finally, in Tucson, the two competing daily newspapers participate in a joint operating agreement.
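Counting the independent commercial groupings that result from such agreements amounts to merging stations that share an owner or an agreement and counting the resulting clusters. The following minimal Python sketch shows one way to do this; the station names, owners, and agreement pairings are invented for illustration (they loosely mirror the Wilkes Barre/Scranton pattern discussed below) and are not data from our case studies.

```python
# Minimal sketch: counting independent commercial groupings in a market when
# stations are linked by common ownership or operating agreements.
# Station names, owners, and agreements below are invented for illustration.

def count_groupings(owners: dict[str, str], agreements: list[tuple[str, str]]) -> int:
    parent = {s: s for s in owners}

    def find(s: str) -> str:
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    # Link stations that share an owner.
    by_owner: dict[str, list[str]] = {}
    for station, owner in owners.items():
        by_owner.setdefault(owner, []).append(station)
    for stations in by_owner.values():
        for s in stations[1:]:
            union(stations[0], s)

    # Link stations joined by an operating agreement.
    for a, b in agreements:
        union(a, b)

    return len({find(s) for s in owners})

# Eight stations and seven owners (O1 owns two stations), with three agreements.
owners = {"S1": "O1", "S2": "O1", "S3": "O2", "S4": "O3",
          "S5": "O4", "S6": "O5", "S7": "O6", "S8": "O7"}
agreements = [("S2", "S3"), ("S4", "S5"), ("S6", "S7")]
print(count_groupings(owners, agreements))
# Prints 4: three multi-station groupings plus one standalone station.
```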
In addition to formal operating agreements, media companies in a market often maintain informal content-sharing arrangements with each other. These most often cross different types of media, rather than occurring among competitors within the same industry segment. In our case study markets, we found a newspaper sharing articles with a television station; a newspaper sharing articles with a radio station in return for advertising spots; and a newspaper sharing journalists with a television station. In markets with common ownership of a radio or television station and a newspaper, such sharing of content and journalism resources occurred as a matter of course. We also found some contractual sharing of content between media outlets of the same type. Most often, one television station produced local news programs for other stations in the same market.

To some extent, these operating agreements may reduce the number of independent outlets. For example, in Wilkes Barre/Scranton, we identified eight television stations. However, one owner of two stations participated in an agreement with a third station. Additionally, the remaining four television stations participated in two separate agreements, each agreement covering two stations. Thus, while there are eight television stations and seven owners in Wilkes Barre/Scranton, there are three loose commercial groupings in the market. Similarly, in Springfield, while there are six television stations, four stations participate in two separate agreements. This example suggests that the number of independently owned outlets in a given market might not always be a good indicator of how many independently produced local news or other programs are available in a market.

The Internet delivers content from a virtually limitless supply of sources. For example, while residents of New York can read The New York Times, residents in Harrisonburg with access to the Internet also can read this publication. Most of the traditional media outlets—newspapers, radio stations, and television stations—in our case study markets maintain a Web site. This provides another means for residents to access the content of these outlets. However, we identified few news Web sites in our case study markets that were unaffiliated with the traditional media outlets. While there are many blogs and Web sites, when we spoke with stakeholders about assessing the number of "voices" in a media market, there was no consensus on how to count Internet outlets. Some stakeholders said that audience size was less important than the existence of many potential voices, while other stakeholders said that voices on the Internet mattered only when they reached an audience above a certain minimum size. Further, some stakeholders said that journalistic content was important, such as that arising from news gathering and investigations.

While FCC collects data on the gender, race, and ethnicity of radio and television station owners every 2 years through its Ownership Report for Commercial Broadcast Stations, or Form 323, we found that these data have several weaknesses that undermine their usefulness for tracking and periodically reporting on the status of minority and women ownership. These weaknesses include (1) exemptions from filing for certain types of broadcast stations, such as noncommercial stations; (2) inadequate data quality procedures; and (3) problematic data storage and retrieval. Moreover, there are no other reliable government sources on the status of minority and women ownership.
Nevertheless, the available evidence from industry stakeholders and experts we interviewed, as well as government and nongovernment reports, suggests that ownership of broadcast outlets by these groups is limited. We identified three primary barriers contributing to the limited levels of ownership by minorities and women: (1) the large scale of ownership in the media industry, (2) a lack of easy access to sufficient capital for financing the purchases of stations, and (3) the repeal of the tax certificate program, which provided financial incentives for incumbents to sell stations to minorities.

Diversity, including ownership by minorities and women, has been a long-standing policy goal of FCC. In 1998, FCC issued rules to collect data on the gender, race, and ethnicity of broadcast licensees. FCC decided to collect these data through its Annual Ownership Report, or Form 323. FCC noted that it was appropriate to develop "precise information on minority and female ownership of mass media facilities" and "annual information on the state and progress of minority and female ownership," thereby positioning "both Congress and the Commission to assess the need for, and success of, programs to foster opportunities for minorities and females to own broadcast facilities." FCC began collecting these data in 1999.

The Form 323 is the only mechanism through which FCC collects information on the gender, race, and ethnicity of broadcast owners. FCC requires all commercial AM and FM radio stations and television stations to report the gender, race, and ethnicity of each owner with an attributable interest on the Form 323. Owners and licensees must file the Form 323 every 2 years, whenever there is a transfer of control or assignment, or after the grant of a construction permit for a new commercial broadcast station. As FCC's only information source on owners' gender, race, and ethnicity, the Form 323 data potentially could be used to determine and periodically report on the level of minority and women broadcast ownership. However, we identified several weaknesses that limit the usefulness of the Form 323 data.

Filing exemptions. Sole proprietors, partnerships, and noncommercial stations are not required to file the Form 323. Since the data from Form 323 do not include stations owned by sole proprietors, partnerships, or noncommercial stations, it is not possible to use the Form 323 data to identify either the full universe of broadcast stations owned by minorities and women or the number of minority and women owners. FCC also does not require the filing of the Form 323 for low-power stations.

Data quality procedures. According to FCC officials, FCC does not verify or periodically review the gender, race, and ethnicity data submitted on the Form 323. According to these officials, a staff person from FCC's Video Division reviews submitted Form 323s, and this staff person focuses on ensuring compliance with the commission's multiple ownership and citizen ownership rules. These officials told us that station owners were responsible for determining the accuracy of their Form 323 submissions. If an owner finds an error, FCC requires the owner to submit an additional Form 323.

Data storage and retrieval. Companies must file the Form 323 electronically. However, FCC allows owners to provide attachments with their electronic filing of the Form 323. These attachments may include the gender, race, and ethnicity data.
Since these data are not entered into the database, the data are unavailable for electronic query. Of further concern, the database retains all submitted Form 323s, even forms that contain incorrect information and have since been updated with a corrected Form 323. Thus, according to FCC officials, any aggregation or summary of the Form 323 records through electronic query is unreliable.

FCC has taken some steps to address concerns with the Form 323 data, but some weaknesses remain. According to FCC officials, FCC added an amendment process to the Form 323 interface, thereby allowing owners to modify information on a previously submitted Form 323. FCC also put in place edit checks that preclude owners from skipping questions, including questions on the owners' gender, race, and ethnicity. However, FCC still allows attachments to be submitted with Form 323s and has no regular mechanism for reviewing these attachments to determine whether owners provided correct information biennially, as required. Moreover, there are no consequences for misfiling that would encourage accurate, complete, and timely submission of the Form 323. On December 18, 2007, FCC adopted a Notice of Proposed Rulemaking that seeks comment on how the commission can best improve its collection of data regarding the gender, race, and ethnicity of broadcast licensees.

While reliable government data on ownership by minorities and women are lacking, ownership of broadcast outlets by these groups appears limited. According to the industry stakeholders and experts we interviewed, the level of ownership by minorities and women is limited, and recent studies generally support this conclusion. Three reports commissioned by FCC as part of its broadcast ownership proceeding found relatively limited levels of ownership of television and radio stations by minorities and women. Further, in a 2006 report, Free Press found that for full-power television stations, women and minority ownership was about 5 percent and 3 percent, respectively. Specifically, the report noted that women owned a majority stake in 67 of 1,349 full-power commercial television stations and minorities owned 44 stations, 9 of which were owned by one company. In another report, Free Press estimated that women owned approximately 629 of 10,506 full-power radio stations (or 6 percent) and minorities owned 812 stations (or 8 percent).

According to prior government reports and industry stakeholders and experts we interviewed, three factors help explain the relatively small percentages of minority and women broadcast owners.

Scale of ownership. In 2000, FCC and the National Telecommunications and Information Administration (NTIA) released separate reports suggesting that the current scale of ownership had been detrimental to minority and women ownership of broadcast outlets. A report commissioned by FCC in 2000 found that industry deregulation in 1996 and the resulting consolidation had produced significant barriers to new entry and to the viability of small, minority- and women-owned companies; the report cited inflated station prices and disparate advertising revenues. NTIA's report included similar observations about the impact of consolidation on station prices and advertising revenue. Industry representatives and experts we interviewed also identified the scale of ownership as a barrier for minorities and women.
Thirty-six of 56 interviewees who mentioned barriers to ownership reported that the consolidation of broadcast ownership had been detrimental to minority and women ownership. According to these industry representatives and experts, the scale of current ownership matters in several important ways. First, few stations are made available for purchase, limiting opportunities for the entry of new owners, such as minorities and women. Second, incumbent owners may prefer to trade stations with other incumbent owners rather than sell stations; given the limited ownership by minorities and women today, trading does little to expand their ownership. Third, when stations become available for sale, investors and other financing entities prefer multiple-station purchases rather than single-station purchases in order to capture economies of scale. Like trading, such transactions favor well-established incumbent companies over new entrants such as minorities and women. Lastly, the scale of the industry affects the viability of current and prospective minority and women owners, since these owners must often compete with large conglomerate owners with sizable market share and greater resources.

Access to capital. Both FCC's and NTIA's reports on minority and women ownership also included discussion and findings on the role of capital and the lack thereof for minorities and women. According to FCC's commissioned report, access to capital was the barrier most often cited by study participants. The report found that banks often repeatedly rejected minority broadcast owners as applicants for a variety of reasons, ranging from racial discrimination to a lack of familiarity with the industry on the part of the bank. Similarly, NTIA's report noted the importance of access to capital and described public and private sources of financing for minorities and women. The report concluded that despite these sources, access to capital continued to be a key concern. Industry stakeholders and experts we interviewed also mentioned the importance of access to capital and financing and the challenge it presents to minority and women ownership. Thirty-five of 56 interviewees reported that a lack of access to capital impeded greater entry by minorities and women into the broadcast industry.

In particular, these industry representatives and experts described two ways in which the barrier posed by a lack of capital is compounded by the nature of station sales and FCC rules. First, since stations generally do not advertise their properties for sale, individuals and companies looking to purchase a station must have cash on hand; prospective buyers cannot wait for an announced sale and then acquire financing. This is a challenge for minority and women broadcasters, who often lack information on upcoming station sales and generally have fewer financial resources. Second, sellers are deterred from working with buyers who lack capital, since any equity remaining in the station would be considered attributable interest under FCC's rules. Retaining attributable interest in one property could make it difficult for these owners to buy different properties in the same market, due to FCC's local ownership limits. Consequently, sellers would forgo working with prospective buyers who lack readily available capital rather than assume any risk to potential future acquisitions.

Repeal of the tax certificate program.
From 1978 to 1995, FCC operated a tax certificate program under section 1071 of the Internal Revenue Code that allowed the seller of a broadcast station to defer capital gains taxes on the sale if the station was sold to a minority-owned company. In 1995, the Congress repealed this program. During this period, FCC issued a total of 328 tax certificates for use in broadcast station transactions (285 for radio station sales and 43 for television station sales). Both FCC's and NTIA's reports on minority and women ownership cited the importance of the tax certificate as an incentive for incumbent broadcast owners to advertise station sales and to work with prospective minority buyers. FCC's commissioned report described the tax certificate program as the "single most effective program in lowering market entry barriers and providing opportunities for minorities to acquire broadcast licenses in the secondary market." NTIA's report also found that the program fostered minority ownership. Many experts we interviewed also agreed that this program was important for promoting minority ownership. Twenty-five of 56 stakeholders we interviewed said that the elimination of the tax certificate program was a factor in the current limited level of minority-owned broadcast stations.

Economic factors—including high fixed costs and the size of the market—influence the number of media outlets available in markets, the presence of operating agreements between outlets, and incentives for firms to consolidate their operations. Legal and regulatory factors appear to influence ownership of media outlets as well, by constraining the number and types of media outlets that a single entity can own. Lastly, technological factors, such as the emergence of the Internet, appear to facilitate entry by allowing new outlets to launch with limited investment; however, stakeholders' opinions varied on the significance of these entrants for media markets.

We found that fixed costs are prevalent in the media industry and are an important economic factor influencing the number and ownership of media outlets. Fixed costs are those costs that do not change with the number of units produced or sold. Fifty-two of 102 stakeholders we interviewed mentioned that high fixed costs are a factor influencing media ownership, and the academic literature also highlighted the importance of fixed costs. For example, in broadcast network television, two stakeholders reported that the fixed costs of producing 1 hour of programming range from $3 million to $5 million, regardless of how many viewers the programming attracts. Similarly for newspapers, the costs of purchasing a printing press and producing and editing news stories are not very sensitive to the number of copies a newspaper produces or sells. Stakeholders also reported high fixed costs for radio and television stations, cable television, and DBS.

The size of the local market also is an important economic factor influencing the number and ownership of media outlets, since market size broadly determines an outlet's potential for generating advertising revenues. For example, several stakeholders reported that although the costs of operating television and radio stations are similar regardless of market size, smaller markets have smaller audiences and fewer local advertisers for station operators to pursue. Accordingly, stakeholders reported that owners are less likely to sell stations in large markets than in smaller markets.
According to data from Bear, Stearns and Company, Inc., only 4 of the 137 television station transactions announced in the first two quarters of 2007 involved stations broadcasting in the top three markets. Conversely, stakeholders representing a large radio group owner and a national television broadcaster both reported that their companies are currently selling stations in smaller markets. Both high fixed costs and market size have implications for the number of outlets in a given market, the presence of operating agreements between outlets, and incentives for firms to consolidate their operations in local and national markets.

Number of outlets. Market size and fixed costs influence the number of outlets in a market. The size of the market broadly determines the advertising revenues available to outlets in the market, and costs in the media industry do not vary considerably between large and small markets. Therefore, large markets can generally support more outlets than small markets. Twenty-six interviewed stakeholders mentioned that the size of the market influences the number of outlets available, and 10 stakeholders reported that markets with larger populations and advertising revenues can support more media outlets and owners than smaller markets. For example, in New York, the largest market, we identified 21 television stations and 15 separate owners for those stations. In contrast, in Harrisonburg, Virginia, the smallest market in our review, we identified only 2 broadcast television stations and 2 separate owners, one of which was a public television station. Similarly, in the newspaper industry, several stakeholders reported that most newspaper markets can support only one daily newspaper because of high fixed costs in the industry. Accordingly, 9 of the 16 markets we evaluated had one daily newspaper, and 4 markets supported two newspapers.

Operating agreements. The size of a local market and high fixed costs produce incentives for media outlets to enter into operating agreements with other local outlets. Specifically, we found that outlet owners in markets with smaller advertising revenues have incentives to enter into operating agreements with other outlets to spread fixed costs across multiple outlets and maximize their profitability. Thirty interviewed stakeholders reported that the size of the market can influence the need for such operating agreements. In our case study analyses, we identified 9 operating agreements among 17 television stations in 5 markets. We found these arrangements in a variety of markets, but not in the top three markets. Medium-size markets appear better suited to these arrangements than small markets because they offer a larger pool of potential outlet partners. Furthermore, two stakeholders reported that these agreements may increase the number of outlets available in a market by helping weaker stations remain in operation and by bringing new broadcasting networks into a market.

Local and national consolidation. The combination of high fixed costs and market size also encourages media consolidation both within local markets and nationwide.
Because competition for advertising revenues among radio and television broadcast stations occurs at the local level, nine stakeholders representing television, radio, and newspaper companies reported that their industries have incentives to consolidate operations across multiple outlets to reduce their fixed costs and claim a larger share of available advertising revenues. For example, six stakeholders reported that owning multiple stations in a local market allows a single owner to program its stations with diverse formats to reach a larger share of local listeners and provide multiple channels for advertisers. Media firms likewise have incentives to seek economies of scale and consolidate their operations nationally. For example, stakeholders reported that the cable industry has consolidated in recent years to cluster local systems into wider regional networks to reach larger audiences and serve a wider range of advertisers. Stakeholders also reported that serving a wider network of subscribers gives cable operators greater leverage in negotiating agreements to carry programming produced by broadcast and cable television networks. In the newspaper industry, one national newspaper company reported that it publishes almost 1,000 nondaily newspapers across the country, which enables the company to offer flexible advertising packages to national and local advertisers.

In addition to economic factors, several legal and regulatory policies appear to have influenced media ownership, including local television and radio station ownership limits, the newspaper-broadcast cross-ownership ban, and the 1996 Act. However, stakeholder perspectives varied on the extent to which the individual policies may have influenced current ownership.

Local television and radio station ownership limits. FCC's rules limit the number of radio stations, television stations, and combinations of radio and television stations that a single entity can own; as such, the rules influence the ownership of these media outlets. Twenty-three of 29 industry stakeholders we spoke with cited either the local radio or television limits as a factor influencing the ownership of media outlets. Several stakeholders reported that the local television ownership limit, which permits ownership of two television stations in larger markets, allows over-the-air television stations to better compete with other media outlets, including cable television and DBS providers, which have significantly more channels and air time to sell advertising. Several other stakeholders reported that this rule would be more beneficial if such combinations were permitted in smaller markets to preserve struggling outlets, rather than in large markets where advertising revenues are greater. With regard to radio, several industry stakeholders reported that the local ownership caps limit consolidation in markets where companies were operating at the ownership limits.

Newspaper-broadcast cross-ownership ban. By limiting the markets where a single entity can have common ownership of a daily newspaper and a broadcast outlet, FCC's rules affect the ownership of these media outlets. Stakeholders from three companies with newspaper holdings reported that the potential synergies and economic benefits of cross-ownership are overstated; two stakeholders reported that differences between television and newspaper cultures and products limit collaboration between the two platforms.
On the other hand, three companies owning both newspapers and television stations in the same market reported that cross-ownership offers synergies such as improved sharing of resources and information between outlets. Similarly, stakeholders from two companies indicated that cross-ownership has helped their outlets produce more in-depth local news than they would otherwise be able to provide.

Telecommunications Act of 1996. The 1996 Act loosened restrictions on the ownership of radio stations, allowing greater ownership of local radio stations and eliminating nationwide limits on ownership of radio stations. Twenty of 45 nonbusiness stakeholders, such as academics, industry associations, and think tanks, identified the 1996 Act as a factor influencing media ownership; 9 of 57 business stakeholders similarly identified the 1996 Act. Three stakeholders reported that the changes in the 1996 Act brought capital, business expertise, and content diversity to the radio industry, as new entrants sought to invest in underfunded radio stations. Alternatively, several other stakeholders reported that the 1996 Act resulted in overconsolidation in the radio industry, as many small operators were bought out by conglomerate owners.

New technologies appear to facilitate entry, thereby promoting new content and competition. In particular, the Internet provides new opportunities for individual citizens and companies to produce their own Internet publications with little investment. For example, individuals and companies no longer need to acquire a broadcast license and invest in broadcast facilities to distribute content to a wide audience. Forty-four stakeholders told us that the Internet creates an abundance of outlets, while only 17 disagreed. The Pew Internet & American Life Project, an Internet-focused research center, found that in 2003, "more than 53 million American adults had used the Internet to publish their thoughts, respond to others, post pictures, share files and otherwise contribute to the explosion of content available online." Additionally, 67 of 102 stakeholders mentioned competition from new entrants from the Internet or new telecommunications services as a factor influencing media ownership. For example, six newspaper industry stakeholders reported that industry revenues have suffered from the availability of low-cost or free classified advertising services on the Internet.

While many stakeholders reported that the Internet creates an abundance of outlets, opinions varied as to the significance of these outlets. For example, several stakeholders cited increases in the number of outlets available on the Internet, such as blogs, but said there is little evidence that these outlets are widely read or are journalistic substitutes for newspapers. Similarly, several other stakeholders estimated that a significant portion of the content available on these Web sites originates from large, established media firms such as newspapers.

The stakeholders we interviewed seldom agreed on proposed modifications to media ownership rules. However, business stakeholders expressing opinions on these rules were more likely to report that the rules should be relaxed or repealed, whereas nonbusiness stakeholders who expressed opinions were more likely to report that the rules should be left in place or strengthened.
Both business and nonbusiness stakeholders who expressed an opinion on the previously repealed tax certificate program supported either reinstating or expanding the program to encourage the sale of broadcast outlets to minorities.

Newspaper-broadcast cross-ownership ban. As mentioned earlier, on December 18, 2007, FCC modified its rules to permit common ownership of a daily newspaper and broadcast outlet in some markets. Prior to FCC's action, the stakeholders we spoke with were fairly evenly divided on whether FCC should modify its rule prohibiting cross-ownership of newspapers and broadcast outlets in the same local area. Of the 50 stakeholders expressing an opinion on the matter, 27 reported that the rule should be repealed and 23 said that the rule should either be left as is or strengthened. However, among business and nonbusiness stakeholders interviewed, there were clear differences of opinion on this issue. Fourteen of 20 nonbusiness stakeholders were in favor of strengthening or leaving the rule in place. In contrast, 21 of 30 business stakeholders were in favor of repealing the regulation. For example, 13 of 14 stakeholders from multisector media companies stated the rule should be repealed.

Local television and radio ownership limits. Stakeholders were fairly evenly divided on whether FCC should alter rules limiting the number of broadcast television and radio stations a single entity can own in a local market. Of the 50 stakeholders expressing an opinion on the matter, 27 said that the rules should be repealed and 23 said that the rules should either be left as is or strengthened. However, opinions within stakeholder segments were more consistent: 14 of 19 nonbusiness stakeholders were in favor of strengthening or leaving the rules in place, while 22 of 31 business stakeholders were in favor of repealing the regulations.

National television ownership cap. The majority (65 of 102) of stakeholders expressed no opinion on this issue. Of the 37 who did express an opinion, 22 said that the cap should be left as is or lowered, further restricting ownership, while 15 favored raising or repealing the cap. These results differed for nonbusiness and business stakeholders: whereas 11 of 15 nonbusiness stakeholders stated that the cap should be left as is or lowered, 11 of 22 business stakeholders indicated that the cap should be raised or repealed.

Reinstitution of the minority tax certificate program. Of the 102 stakeholders interviewed, most (72) expressed no opinion as to whether the minority tax certificate program should be reinstated. However, among the 30 stakeholders who mentioned this issue, there was broad consensus in favor of reinstating some version of the program. Twenty-eight of these 30 stakeholders indicated that the program should be either reintroduced without changes or expanded, and 2 said that the program was not needed and should not be reinstated.

The media serve an important function in American life through their role in disseminating news, information, and entertainment. Though media options vary by local market, the overall growth in the communications industry and the emergence of the Internet have provided unprecedented levels of media choice to the American public. At the same time, economic forces appear to encourage local and national consolidation and operating agreements that reduce the number of independent voices.
Moreover, though smaller owners, including minorities and women, have opportunities to enter the media industry by way of Internet-based and niche publications, these groups continue to face long-standing challenges to the ownership of radio and television stations.

Since 1999, FCC has collected data on the gender, race, and ethnicity of radio and television station owners. In undertaking this effort, FCC noted that it was appropriate to develop "precise information on minority and female ownership of mass media facilities" and "annual information on the state and progress of minority and female ownership," thereby positioning "both Congress and the Commission to assess the need for, and success of, programs to foster opportunities for minorities and females to own broadcast facilities." Yet data weaknesses stemming from how the data are collected, verified, and stored limit the benefits of this effort. More accurate and reliable data would allow FCC to better assess the impact of its rules and regulations and would enable the Congress to make more informed legislative decisions about issues such as whether to reinstate the tax certificate program. While FCC recently adopted a Notice of Proposed Rulemaking regarding its data on broadcast licensee gender, race, and ethnicity, this process has only recently begun and its outcome is unclear.

To more effectively monitor and report on the ownership of broadcast outlets by minorities and women, we recommend that the Chairman, FCC, identify processes and procedures to improve the reliability of FCC's data on gender, race, and ethnicity so that these data can be readily used to accurately depict the level, nature, and trends in minority and women ownership, thereby enabling FCC and the Congress to determine how well FCC is meeting its policy goal of diversity in media ownership.

We provided a draft of this report to FCC for its review and comment. FCC provided technical comments that we incorporated where appropriate. FCC did not provide comments on our recommendation. FCC's written comments appear in appendix IV.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and to the Chairman of the Federal Communications Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V.

To assess the current ownership of media outlets, we used a case study methodology. The case studies consisted of the largest city in each of 16 Nielsen Designated Market Areas (DMAs). To select the case study DMAs, we used a stratified random sample. We obtained the 2007 list of DMAs from the Nielsen Media Research Web site. We stratified the DMAs by size and randomly selected four large, four medium-size, and four small markets for the case study analysis (a minimal sketch of this selection procedure appears at the end of this section).
We defined large markets as those with 500,000 to 3 million households; medium-size markets as those with 150,000 to 499,999 households; and small markets as those with fewer than 150,000 households. In addition to the 12 random selections, we selected the top three markets (New York, Los Angeles, Chicago) as a separate take-all stratum and selected Tucson as a test market in which to count outlets because of its large Hispanic population. Within each market, we counted the number of television stations, radio stations, newspapers, and multichannel video programming distributors (MVPD) and the owners of these outlets. We considered outlets available to residents in the largest city to avoid counting multiple outlets not present throughout the market; the outlets not counted primarily consist of weekly suburban newspapers not published in the largest city. Below we discuss our approach to counting television stations, radio stations, newspapers, and multichannel video programming distributors.

Television stations. We used the Warren Television and Cable Factbook: Online to count the number of full-power television stations located in each market. We included both commercial and noncommercial full-power television stations in our count of stations. We used company Web sites and ownership data that stations filed with the Federal Communications Commission (FCC) through its Form 323 to determine the ownership of each station. For commercial stations, we counted the owner as the ultimate legal entity on whose behalf the ownership was registered with FCC. This provided an accurate count of group ownership and identified any multiple-station ownership within a single market.

Radio stations. We first determined the largest city located within each television market. Some selected case study markets had multiple core cities; we used Miami for the Miami/Fort Lauderdale DMA, Scranton for the Wilkes Barre/Scranton DMA, Florence for the Myrtle Beach/Florence DMA, and Cedar Rapids for the Cedar Rapids/Waterloo/Iowa City/Dubuque DMA. We used FCC data to determine the number of full-power commercial and noncommercial radio stations located within close listening distance of the city. Because of the Sherman/Ada DMA's large geographic area and sparse population, we identified its four most populous counties and determined the number of radio stations located within 20 miles of the largest town in each county. We adopted this approach because the market is too small to have an Arbitron radio market. The Sherman/Ada market also is located between the Oklahoma City and Dallas/Fort Worth markets, so we used an atlas to ensure that the radio stations located within 20 miles of Ada, Oklahoma, and Sherman, Texas, both located on the geographical edge of the DMA, were physically located inside the Sherman/Ada DMA. Our methodology produced counts of radio stations that may not match the actual number of full-power radio stations located in a DMA for one or both of the following reasons. First, the DMA may contain more than one Arbitron radio market, as in the Cedar Rapids/Waterloo/Iowa City/Dubuque DMA, and we counted radio stations from only one radio market. Second, the DMA may be geographically large, so that the number of full-power commercial and noncommercial radio stations located within close listening distance of the core city does not capture all of the radio stations.
Newspapers. We used Bowker's News Media Directory to identify the daily, weekly, ethnic, religious, and special interest publications whose area of dominant influence included the core urban area. We counted daily and weekly newspapers separately and combined the ethnic, religious, and special interest publications into the "other" category. We also surveyed the Web sites of the major daily newspapers in the core urban area of each of our case study markets to determine if there were any additional publications not contained in Bowker's News Media Directory. We also used the directory from the New American Media organization to identify additional ethnic publications available in New York, Los Angeles, Chicago, Miami, Charlotte, Nashville, and Chattanooga. This source did not list publications for the other case study markets. Fieldwork in the Nashville and Tucson markets turned up additional publications that were missing from our data sources. Our data sources likely undercount small local weeklies and other types of independent journals.

Multichannel video programming distributors. We obtained the list of cable operators in each state from FCC's database of registered cable operators (http://www.fcc.gov/mb/engineering/liststate.html). We used the Warren Television and Cable Factbook to verify that a cable company listed in the FCC database provided service in the core urban area. If the market contained more than one urban area, we used the largest city (e.g., Miami). In addition to cable companies, we included both direct broadcast satellite companies (DirecTV and EchoStar). The minimum MVPD count a market could have with this methodology is therefore three; any number greater than three reflects the presence of a cable overbuilder or a telecommunications company that is offering subscription television services in the core urban area of the DMA.

In addition to case studies, we reviewed the relevant literature and conducted interviews to assess the current ownership of media outlets. We identified studies through a general literature review. We interviewed agency officials at FCC and the National Telecommunications and Information Administration (NTIA) about media ownership policies. We also identified 102 stakeholders in academia, think tanks, nonprofits, and media companies and interviewed them to obtain their views on FCC's ownership policies and issues affecting media ownership. We used the same structured interview for all interviewees and analyzed the content of the interview responses. To ensure consistent analysis, we had two reviewers independently apply the content analysis tool to each interview write-up and standardized the coding to ensure reliability. We cross-tabulated the interview content to determine patterns in responses and the extent to which interview subjects supported particular positions.

To identify the economic, legal and regulatory, and technological factors affecting media ownership, we reviewed the relevant literature, studies, and regulations and conducted interviews. We obtained and analyzed data on the radio and television industries from Bear, Stearns and Company, Inc. We obtained data from the Census Bureau's 2006 American Community Survey to study the economic and demographic characteristics of households in the metropolitan statistical area around the central city in each case study DMA.
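The market-selection step described above can be summarized in a short sketch. This is a minimal illustration only, assuming the stratum boundaries defined in this appendix; the function names and the list-of-pairs input are hypothetical and do not represent the tools we actually used.

    import random

    # Minimal sketch of the stratified market selection described in this
    # appendix. The household thresholds are those defined above; names and
    # data structures are illustrative only.

    def stratum(tv_households):
        """Assign a DMA to a size stratum by number of television households."""
        if tv_households > 3_000_000:
            return "top"      # take-all stratum: New York, Los Angeles, Chicago
        if tv_households >= 500_000:
            return "large"
        if tv_households >= 150_000:
            return "medium"
        return "small"

    def select_case_study_markets(dmas, seed=0):
        """Take all 'top' DMAs and randomly draw four from each other stratum.

        dmas is a list of (name, tv_households) pairs, e.g., from the 2007
        Nielsen DMA list; each non-top stratum is assumed to hold at least
        four markets. The judgmental addition of Tucson as a test market
        falls outside this random-selection logic.
        """
        rng = random.Random(seed)
        strata = {"top": [], "large": [], "medium": [], "small": []}
        for name, households in dmas:
            strata[stratum(households)].append(name)
        selected = list(strata["top"])  # the three largest markets, taken whole
        for size in ("large", "medium", "small"):
            selected.extend(rng.sample(strata[size], 4))  # simple random draw
        return selected

A 12-market random sample drawn this way, plus the three take-all markets and the judgmentally added Tucson test market, yields the 16 case study markets discussed in this report.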
We obtained data on the average household effective buying income for each DMA from the Television Bureau of Advertising for all 210 DMAs; we also obtained the list of the top 25 Hispanic, African-American, and Asian media markets from the bureau. We reviewed the relevant economic literature and studies on media ownership. We reviewed relevant legislation and FCC notices, orders, and reports to assess legal factors. We also obtained information on economic, legal and regulatory, and technological factors from industry stakeholders as part of the structured interview process.

To describe the levels of minority and women ownership of broadcast outlets, to identify factors that help explain these levels, and to assess FCC's data collection efforts, we reviewed relevant reports, interviewed agency officials and industry stakeholders, and analyzed FCC's forms and processes. To describe the levels of minority and women ownership of broadcast outlets, and the factors affecting their ownership, we interviewed industry stakeholders, FCC and NTIA officials, and members of FCC's Advisory Committee on Diversity for Communications in the Digital Age. We also reviewed relevant reports prepared by or for FCC, NTIA, and nongovernment organizations (such as Free Press). To assess FCC's data collection efforts, we reviewed the relevant regulatory forms (such as FCC's Form 323) and FCC documents and commissioned reports. Additionally, we interviewed FCC officials responsible for collecting the Commission's data on broadcast ownership about the completeness and quality of the information in their databases. Because of inadequacies in the FCC ownership data, evidence suggesting limited ownership of media outlets by minorities and women comes from stakeholder opinions, as well as from studies commissioned by FCC and prepared by NTIA and nongovernment organizations.

We conducted this performance audit from February 2007 through March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To study the nature and level of media ownership, we randomly selected 12 case study markets, including 4 from each of three market strata—large, medium, and small. In addition, we selected the three largest markets as a separate stratum and judgmentally selected an extra market from the medium stratum (Tucson) to test our methodology for data collection and structured interviews. Information about the markets we selected for case study analysis appears in table 7. These media markets account for about 20 percent of all television households in the United States. For information about our methodology for counting outlets in each media market, see appendix I.

The New York City designated market area (DMA) is the largest media market in the United States, with over 7 million households. This media market comprises 13 counties in northern New Jersey, 1 county in southwest Connecticut, 1 county in northeastern Pennsylvania, and 14 counties in New York, including all those on Long Island and the five boroughs. The New York City DMA is the largest African-American media market, the second-largest Asian media market, and the second-largest Hispanic media market in the United States.
In terms of average household disposable income, the New York City DMA ranks fourth highest in the United States, making it a very attractive market for a broadcast media outlet. The four major networks (ABC, CBS, FOX, and NBC) own and operate their local broadcast television affiliates in New York. (In smaller television markets, the major networks are less likely to own and operate their local affiliates.) One company owns three television stations, all of which broadcast in Spanish. There are seven noncommercial Public Broadcasting Service (PBS) affiliates, several independent stations, and three Spanish-language network affiliates among the remaining broadcast television stations. The owners of radio outlets in the New York media market include several large national media companies with five or six outlets each and 28 entities with a single outlet in the market. Therefore, about two-thirds of the owners in the New York City DMA operate a single radio outlet. The majority of the radio stations in this market broadcast in English, several broadcast in Spanish, and one broadcasts in Cantonese. In the aggregate, the 73 radio stations in the New York media market provide listeners with a wide variety of content. There is more cross-ownership of newspapers and broadcast outlets in the New York market than in other media markets, but there is also more diversity in specialty publications. News Corporation owns The Wall Street Journal, The New York Post, and two broadcast television stations (WWOR and WNYW); The New York Times Company owns The New York Times daily newspaper and a radio station (WQXR); and Tribune Company owns one Spanish-language daily newspaper (Hoy) and one television station (WPIX). New York is the center of the publishing industry in the United States, and far more specialty publications are located there than in other media markets.

The Los Angeles DMA is the second-largest media market in the United States, with over 5.6 million households. It includes eight counties in southern California and stretches from the Pacific Coast east to Nevada. The Los Angeles DMA is the largest Hispanic media market, the largest Asian media market, and the sixth-largest African-American media market in the United States. This media market ranks 24th in the nation for average household disposable income, the lowest ranking among the three largest media markets. The four major networks (ABC, CBS, FOX, and NBC) own and operate their local broadcast television affiliates in the Los Angeles DMA, just as they do in the New York City DMA. One of these networks owns three stations in this market, while two other networks each own two stations. Los Angeles has more broadcast television stations than New York; more stations broadcast in Spanish, and there are two Asian-language stations in Los Angeles. The Los Angeles DMA is the largest radio market in the country. According to advertising data for 2005, radio stations in the Los Angeles DMA competed for over $1 billion of advertising revenue, exceeding the advertising revenue for the next-largest (New York) and the third-largest (Chicago) radio advertising markets by over $200 million and about $500 million, respectively. The Los Angeles DMA is somewhat more concentrated: one owner has reached the FCC cap of eight stations and another is close to the cap with seven stations. By contrast, no radio station owner in New York has reached the eight-station cap.
Fewer owners operate a single station in Los Angeles (20) than in New York (28), yet over half of the Los Angeles station owners (about 59 percent) operate a single station. The large number of radio stations in Los Angeles provides a wide variety of content, and about a quarter of the stations broadcast in a language other than English, including 13 in Spanish, 3 in Korean, and 2 in Chinese. Stations located in Mexico and in neighboring U.S. media markets can also be heard in all or portions of the Los Angeles DMA, enhancing the market's diversity. Fewer newspapers are located in Los Angeles than in the New York DMA, yet the number is still large compared with other media markets. There is one instance of cross-ownership: Tribune Company owns the Los Angeles Times and a broadcast television station (KTLA) in this market.

The Chicago DMA is the third-largest media market in the United States, with over 3.4 million households. It contains 11 counties in northern Illinois and 5 counties in northwest Indiana. Chicago is the third-largest African-American media market, the fifth-largest Asian market, and the fifth-largest Hispanic media market in the United States. The Chicago DMA has the seventh-highest average household income in the nation. The Chicago DMA has eight fewer broadcast television stations than the Los Angeles DMA; there are four fewer Spanish-language, no Asian-language, and three fewer independent English-language stations in Chicago. The Chicago DMA also has five fewer television stations than the New York DMA. Radio outlet ownership is similar to that in Los Angeles—there are two owners that operate seven stations each in this market. Radio ownership is characterized by a few companies operating near the FCC ownership limit, while 27 of the 38 owners (or 71 percent) own and operate a single radio station. Chicago has one instance of cross-ownership—under a waiver from FCC, Tribune Company owns two daily newspapers, the Chicago Tribune and Hoy, a Spanish-language daily; a television station, WGN; and a radio station, WGN-AM, in this media market.

After the top three markets, we defined large media markets as those with between 500,000 and 3 million households. There are 59 media markets in this size category, ranging from Philadelphia (4) to Tulsa (62). We randomly selected four of these markets for case study analysis.

The Miami/Fort Lauderdale DMA includes 1.5 million households, making it the 16th-largest media market in the nation. It is the 3rd-largest Hispanic media market in the United States, after New York and Los Angeles; the 10th-largest African-American media market; and the 23rd-largest Asian media market. The average household disposable income for the DMA ranks 33rd in the nation, yet the advertising revenue for radio ranks 11th. Although the Miami/Fort Lauderdale DMA has about 2 million fewer households than the Chicago DMA, both markets support 16 broadcast television outlets. In Miami, these include affiliates of the eight primary English-language commercial networks, five Spanish-language stations, and three public television stations. Three of the English-language affiliates are owned and operated by two of the four major television networks. Commercial television stations that are owned and operated by the major networks are characteristic of our large case study media markets. Sixteen, or two-thirds, of the radio station owners operate a single outlet in the Miami/Fort Lauderdale DMA—the same proportion as in the New York City DMA.
One company owns seven radio stations. Miami supports two Spanish-language daily newspapers in addition to the English-language newspaper. Among the other newspapers available to residents are several Spanish-language weeklies and an African-American-focused weekly.

Charlotte is the 26th-largest media market in the United States, with over 1 million households. It is the 16th-largest African-American media market, but is not among the top 25 Hispanic or Asian media markets in the United States. The broadcast television market here includes a higher percentage of noncommercial stations and more duopolies—one company owning two stations—than do the case study media markets described thus far. Specifically, 4 of the DMA's 12 broadcast television stations, or 33 percent, are noncommercial and are affiliated with PBS. Among the eight commercial television stations, there are two duopolies, and two of the noncommercial PBS stations have one owner, a local university. All of the commercial stations broadcast in English, meaning that residents in this DMA who desire media content in languages other than English must subscribe to a cable or satellite television service. In terms of radio advertising revenue, Charlotte is the 29th-largest market. Fewer radio stations are located within listening distance of the core city limits than in Miami or Nashville. Ownership of radio outlets is more concentrated in Charlotte than in the larger markets. There are 17 owners, including 1 large national media company that owns 7 radio stations in this market and 10 commercial and noncommercial owners that each operate a single station in the market.

The Nashville DMA is the 30th-largest media market in the United States, with over 940,000 households. Spanning 49 counties in Tennessee and Kentucky, this DMA covers a wide geographic area. Nashville is not among the top 25 media markets for African Americans, Asians, or Hispanics. In terms of average household disposable income, the DMA ranks 43rd among all media markets in the United States, and it ranks lower for television advertising revenue than for population size. Two of the Nashville DMA's 12 broadcast television stations are noncommercial public television stations. One company owns two of the commercial broadcast television stations and has a local service agreement with another outlet owner to run a third station. All 12 television stations broadcast in English. With 52 radio outlets, Nashville has more radio stations than Miami, Charlotte, and Wilkes Barre/Scranton, the other three case study media markets in our large-size category, and radio ownership is much less concentrated. Twenty-four, or 69 percent, of the station owners operate a single station in this market; two companies own five stations each, and the remaining nine companies own the rest. One of the group owners with five radio stations also has a joint sales agreement with a radio station owned by another company in Nashville.

The Wilkes Barre/Scranton DMA covers 17 counties in northeastern Pennsylvania and is the 53rd-largest media market in the country, with over 590,000 households. This DMA is unusual in that it has no large core city, but rather a series of large- and medium-size towns located in the valleys of this mountainous region. The DMA ranks 148th in the nation for average household disposable income. With about 350,000 fewer households than the Nashville DMA, the Wilkes Barre/Scranton DMA also has fewer broadcast television stations and radio stations.
The DMA contains seven commercial stations, all of which are broadcast network affiliates, and one PBS affiliate. There are no full-power independent television stations in this media market. Difficulties in the local economy—and a television advertising market that, according to industry sources we spoke with, is smaller than warranted for a DMA of its population size—have encouraged cost-sharing arrangements between the commercial broadcast television stations. Two of the stations have a single owner, and this owner has a local service agreement with a third station. Two other stations have a local service agreement under which they share everything except programming and finances. The remaining two commercial television stations have a joint sales agreement. Thus, the seven commercial television stations in this DMA operate in three loose commercial groupings. Ten of the 14 owners of radio outlets in the Wilkes Barre/Scranton DMA, or 71 percent, own and operate a single radio station in the market. One company owns five stations, another owns four stations, and two companies own the remaining five stations. Because of the mountainous terrain in this DMA, rebroadcasting of other stations' signals occurs frequently.

After the top three markets and large markets, we defined medium-size media markets as those containing from 150,000 to 499,999 households. There are 86 media markets in this size category, ranging from Lexington (63) to Salisbury (148). We randomly selected four of these markets for case study analysis. In addition, before making our random selection, we judgmentally selected the Tucson, Arizona, DMA as a test market for our data collection and structured interview methodology because of its large Hispanic population.

The Tucson DMA is the 70th-largest media market in the United States, containing over 433,000 households. It is also the 25th-largest Hispanic media market in the United States, with over 115,000 Hispanic households. This DMA ranks 74th for average household disposable income. The Tucson DMA includes six commercial television stations affiliated with English-language networks, three commercial television stations affiliated with Spanish-language networks, and two public television stations. There are two duopoly owners of commercial stations—one of English-language stations and one of Spanish-language stations. Radio outlet ownership is relatively concentrated in Tucson, with one media company operating six radio stations in this market and two media companies operating five stations each. In total, 7 owners operate more than one station and 10 owners, or 59 percent, operate a single station in the market. The Tucson DMA has two daily newspapers, the Arizona Daily Star and the Tucson Citizen. They operate together under a joint operating agreement allowed by the Newspaper Preservation Act of 1970.

The Springfield, Missouri, DMA includes 31 counties in Missouri and Arkansas and is the 76th-largest media market in the country, with just over 402,000 households. In terms of average household disposable income, this DMA ranks 183rd out of 210 media markets in the United States. Five commercial television broadcasting stations and one public broadcasting television station serve this DMA. Two of the commercial television stations have a local service agreement under which they share everything in their business operations except programming and finances, and another two commercial stations operate together under a shared service agreement.
Five companies own 20 radio outlets, including two companies with five radio stations each. Six owners each operate a single radio station in this market. One company controls the primary daily newspaper in the DMA and one of the weeklies in Springfield itself.

The Chattanooga DMA covers 17 counties in Tennessee, Georgia, and North Carolina; includes over 347,000 households; and is the 86th-largest media market in the United States. This market supports six commercial television broadcasting stations, all affiliated with a commercial network, and two public television broadcasting stations. Five radio outlet owners control 17 radio stations, including two owners that control four stations each. Fifteen owners, or 75 percent, each operate a single station. There is no cable overbuilder in this DMA.

The Cedar Rapids/Waterloo/Iowa City/Dubuque DMA includes 21 counties in Iowa, contains over 333,000 households, and is the 89th-largest media market in the country. Like the Wilkes Barre/Scranton DMA, the Cedar Rapids/Waterloo/Iowa City/Dubuque DMA does not contain a single core urban area. However, unlike the Wilkes Barre/Scranton DMA, this DMA is subdivided among three radio markets. To ensure comparability with the other case study media markets, we counted stations located in the Cedar Rapids radio market because Cedar Rapids has the largest population of the four towns. The Cedar Rapids/Waterloo/Iowa City/Dubuque DMA supports more broadcast television outlets than comparably populated media markets. There are six affiliates of national broadcast networks, two public television stations, and one independent television station, all of which broadcast in English. Two large national radio companies own six radio stations each, and 6 of the 11 radio outlet owners in the DMA, or 55 percent, each operate a single radio station. There is one daily newspaper in Cedar Rapids.

The Myrtle Beach/Florence DMA consists of eight counties in South Carolina and southeastern North Carolina. With over 272,000 households, this DMA is the 105th-largest media market in the United States, and in terms of average household disposable income, it ranks 176th out of 210 media markets. This DMA contains two medium-size towns that are geographically separated. Florence is the more populous of the two, so we counted the radio stations and newspapers located in this town. The Myrtle Beach/Florence media market has six broadcast television outlets and five owners. The duopoly owner is an educational association that operates two public television stations. Two commercial television stations operate under a local marketing agreement that enables them to share fixed operating costs. In the Florence radio market (the larger of the two radio markets in the Myrtle Beach/Florence DMA), there are four owners of a single station and two group owners. One group station owner controls five stations in this market, while the other controls four stations.

The smallest media markets are those with fewer than 150,000 households. There are 61 media markets in this size category, ranging from Palm Springs, California (149), to Glendive, Montana (210). We randomly selected four of these markets for case study analysis.

The Terre Haute DMA includes five counties in eastern Illinois and nine counties in western Indiana. There are fewer than 145,000 households in this market, making it the 151st-largest media market, and in terms of average household disposable income, it ranks 174th out of 210 media markets.
Three commercial television stations and two public television stations operate in this market. Two of the commercial stations operate under a joint operating agreement that allows them to share operating costs. As noted, cost-sharing arrangements also existed in the other four case study markets where we found a large difference between the population rank and the average household disposable income rank. Three owners operate 10 radio stations in this market, including two owners that operate four stations each, and eight owners each operate a single radio station.

The Sherman/Ada DMA contains 10 counties in southern Oklahoma and 1 county in northern Texas. Sherman is the largest community within this media market, with about 37,000 residents. This media market contains just over 124,000 households and is the 161st-largest media market. This market contains a higher proportion of Native American residents than any of our other case study markets. Although there are two broadcast television stations in this market, there is no public television station. The two commercial stations are local affiliates of two different major broadcast networks, and one of these stations carries a third major broadcast network on its second digital signal. Residents of this DMA who own a digital television thus have free access to three of the four major broadcast networks. While a distinct television market, the Sherman/Ada DMA does not constitute a separate radio market. Six owners operate more than one radio station in this market, including one owner that operates four stations and two owners (one of whom is the Chickasaw Nation) that operate three stations each. Seven owners each operate a single radio station in this market.

The Jackson DMA includes the town of Jackson and six counties in Tennessee to the east and northeast of Memphis. With just over 95,000 households, this media market is the 174th-largest in the nation. The DMA has two commercial broadcast television stations, both of which are local affiliates of major networks, and one public television station. Neither of the two commercial television stations broadcasts a second major network on its second digital signal. Five radio station owners operate more than one station, including two companies that operate four stations each and another that operates three stations in this market. Six owners each operate a single radio station in this market.

With just over 87,000 households, the Harrisonburg DMA is the smallest media market we selected for case study analysis. Located northwest of Richmond, this DMA is the 181st-largest media market in the country and comprises two counties in Virginia and one county in West Virginia. This market contains one commercial television station and one public television station. The commercial television station is an affiliate of a major broadcast network for its analog signal, but its digital signals carry the programming of two other broadcast networks in addition to that of its analog network. Residents of this DMA who have a digital television thus have free access to the programming of three broadcast networks. Four radio station owners operate more than one station, including one company that operates five stations and another that operates four stations in this media market. Three owners each operate a single station in this market.

We conducted interviews with the following individuals and representatives from the following organizations.
Individuals making key contributions to this report include Michael Clements (Assistant Director), Carl Barden, Matt Barranca, Steve Brown, Ted Burik, Elizabeth Eisenstadt, Brandon Haller, Madhav Panwar, Friendly Vang-Johnson, and Mindi Weisenbloom.
The media industry plays an important role in educating and entertaining the public. While the media industry provides the public with many national choices, media outlets located in a local market are more likely than national outlets to provide local programs that meet the needs of the market's residents. This report reviews (1) the number and ownership of various media outlets; (2) the level of minority- and women-owned broadcast outlets; (3) the influence of economic, legal and regulatory, and technological factors on the number and ownership of media outlets; and (4) stakeholders' opinions on modifying certain media ownership laws and regulations. GAO conducted case studies of 16 randomly sampled markets, stratified by population. GAO also interviewed officials from the Federal Communications Commission (FCC), the Department of Commerce, trade associations, and the industry. Finally, GAO reviewed FCC's forms, processes, and reports.

The numbers of media outlets and owners of media outlets generally increase with the size of the market; markets with large populations have more television and radio stations and newspapers than less populated markets. Additionally, diverse markets have more outlets operating in languages other than English, contributing to a greater number of outlets. Some companies participate in operating agreements wherein two or more media outlets might, for example, share content. Such agreements suggest that the number of independently owned media outlets might not always be a good indicator of how many independently produced local news and other programs are available in a market. Finally, the Internet is expanding access to media content and competition.

On a biennial basis, FCC collects data on the gender, race, and ethnicity of broadcast owners to, according to FCC, position itself and the Congress to assess the need for, and success of, programs to foster minority and women ownership. However, these data suffer from three weaknesses: (1) exemptions from filing for certain types of broadcast stations, (2) inadequate data quality procedures, and (3) problems with data storage and retrieval. These weaknesses limit the benefits of this data collection effort. While reliable government data are lacking, available evidence suggests that ownership of broadcast outlets by minorities and women is limited. Several barriers contribute to the limited levels of ownership by these groups, including a lack of easy access to sufficient capital.

A variety of economic, legal and regulatory, and technological factors influence media ownership. Two economic factors, high fixed costs and the size of the market, appear to influence the number of media outlets in a market, the incentive to consolidate, and the prevalence of operating agreements. By limiting the number and types of media outlets that a company can own, various laws and regulations affect the ownership of media outlets. Technological factors, such as the emergence of the Internet, have facilitated entry for new companies, thereby increasing the amount of content and competition.

Stakeholders expressed varied opinions on modifications to media ownership rules. Business stakeholders expressing an opinion on various media ownership rules were more likely to report that the rules should be relaxed or repealed. In contrast, nonbusiness stakeholders who expressed an opinion on the rules were more likely to report that the rules should be left in place or strengthened.
Both business and nonbusiness stakeholders who expressed an opinion on a previously repealed tax certificate program supported either reinstating or expanding the program to encourage the sale of broadcast outlets to minorities.
Prior to September 11, 2001, emergency preparedness and response had primarily been the responsibility of state and local governments and had focused principally on emergencies resulting from nature, such as fires, floods, hurricanes, and earthquakes, or from accidental acts of man, not acts of terrorism. The federal government's role in supporting emergency preparedness and management prior to September 11 was limited primarily to providing resources before large-scale disasters like floods, hurricanes, and earthquakes, and response and recovery assistance after such disasters. However, after September 11 and the concern it engendered about the need to be prepared to prevent, mitigate, and respond to acts of terrorism, the extent of the federal government's financial support for state and local government emergency preparedness and response grew enormously, with about $11 billion in grants distributed from fiscal years 2002 through 2005. At the same time, the federal government has been developing guidance and standards for state and local first responders in the areas of incident management and capabilities and tying certain requirements to the award of grants.

The nation's emergency managers and first responders have lead responsibilities for carrying out emergency management efforts. First responders have traditionally been thought of as police, fire fighters, emergency medical personnel, and others who are among the first on the scene of an emergency. However, since September 11, 2001, the definition of first responder has been broadened to include those, such as public health and hospital personnel, who may not be on the scene, but who are essential in supporting effective response and recovery operations. The role of first responders is to prevent where possible, protect against, respond to, and assist in the recovery from emergency events. First responders are trained and equipped to arrive at the scene of an emergency and take immediate action. Examples include entering the scene of the event and assessing the situation, setting up a command center, establishing safe and secure perimeters around the event site, evacuating those within or near the site, tending to the injured and dead, transporting them to medical care centers or morgues, rerouting traffic, helping to restore public utilities, and clearing debris.

Last year, GAO issued a special report on 21st Century Challenges, examining the federal government's long-term fiscal outlook, the nation's ability to respond to emerging forces reshaping American society, and the future role of the federal government. Among the issues discussed was homeland security. In our report we identified the following illustrative challenges and questions for examining emergency preparedness and response in the nation.

Defining an acceptable, achievable (within budget constraints) level of risk. The nation can never be completely safe; total security is an unachievable goal. Therefore, the issue becomes what level of risk is acceptable to guide homeland security strategies and investments, particularly federal funding. What criteria should be used to target federal and state funding for homeland security in order to maximize results and mitigate risk within available resource levels? What should be the role of federal, state, and local governments in identifying risks—from nature or man—in individual states and localities and establishing standards for the equipment, skills, and capacities that first responders need?
Are existing incentives sufficient to encourage the private sector to protect the critical infrastructure it owns, and what changes might be necessary? What is the most viable way to approach homeland security results management and accountability? What are the appropriate goals, and who is accountable for the many components of homeland security when many partners, functions, and disciplines are involved? How can these actors be held accountable, and by whom? What costs should be borne by federal, state, and local governments or the private sector in preparing for, responding to, and recovering from disasters large and small—whether acts of nature or the deliberate or accidental acts of man? To what extent and how should the federal government encourage and foster a role for regional or multistate entities in emergency planning and response?

These issues are enormously complex and represent a major challenge for all levels of government. But the experience of Hurricane Katrina illustrated why it is important to tackle these difficult issues. Katrina was a catastrophe of historic proportions in both its geographic scope—about 90,000 square miles—and its destruction. Its impact on individuals and communities was horrific. Katrina highlighted the limitations of our current capacity to respond effectively to catastrophic events—those of unusual severity that almost immediately overwhelm state and local response capacities. Katrina gives us an opportunity to learn from what went well and what did not go so well and to improve our ability to respond to future catastrophic disasters.

It is generally accepted that emergency preparedness and response should be characterized by measurable goals and effective efforts to identify key gaps between those goals and current capabilities, with a clear plan for closing those gaps and, once that is achieved, sustaining desired levels of preparedness and response capabilities and performance. The basic goal of emergency preparedness for a major emergency is that first responders should be able to respond swiftly with well-planned, well-coordinated, and effective actions that save lives and property, mitigate the effects of the disaster, and set the stage for a quick, effective recovery. In a major event, coordinated, effective actions are required among responders from different local jurisdictions, levels of government, and nongovernmental entities, such as the Red Cross.

Essentially, all levels of government are still struggling to define and act on the answers to four basic, but hardly simple, questions with regard to emergency preparedness and response: (1) What is important (that is, what are our priorities)? (2) How do we know what is important (e.g., risk assessments, performance standards)? (3) How do we measure, attain, and sustain success? (4) On what basis do we make necessary trade-offs, given finite resources? There are no simple, easy answers to these questions, and the data available for answering them are incomplete and imperfect.

We have better information and a sense of what needs to be done for some types of major emergency events than for others. For some natural disasters, such as regional wildfires and flooding, there is more experience and therefore a better basis on which to assess preparation and response efforts and identify gaps that need to be addressed. California has experience with earthquakes, and Florida has experience with hurricanes.
However, no one in the nation has experience with such potential catastrophes as a dirty bomb detonated in a major city. Nor is there any recent experience with a pandemic that spreads rapidly to thousands of people across the nation, although both the AIDS epidemic and SARS provide some related experience. Planning and assistance have largely been focused on single jurisdictions and their immediately adjacent neighbors. However, well-documented problems with the ability of first responders from multiple jurisdictions to communicate at the site of an incident, and the potential for large-scale natural and terrorist disasters, have generated a debate on the extent to which first responders should be focusing their planning and preparation on a regional and multigovernmental basis.

The area of interoperable communications illustrates the general challenge of identifying requirements, identifying current gaps in the ability to meet those requirements, assessing success in closing those gaps, and doing all of this on a multijurisdictional basis. We identified three principal challenges to improving interoperable communications for first responders: clearly identifying and defining the problem; establishing national interoperability performance goals and standards that balance nationwide standards with the flexibility to address differences in state, regional, and local needs and conditions; and defining the roles of federal, state, and local governments and other entities in addressing interoperability needs.

The first, and most formidable, challenge in establishing effective interoperable communications is defining the problem and establishing interoperability requirements. This requires addressing the following questions: Who needs to communicate what (voice and/or data) with whom, when, for what purpose, and under what conditions? Public safety officials generally recognize that effective interoperable communications is the ability to talk with whom you want, when you want, when authorized, but not the ability to talk with everyone all of the time. Various reports, including ours, have identified a number of barriers to achieving interoperable public safety wireless communications, including incompatible and aging equipment, limited equipment standards, and fragmented planning and collaboration. However, perhaps the fundamental barrier has been, and is, the lack of effective, collaborative, interdisciplinary, and intergovernmental planning. The needed technology flows from a clear statement of communications needs and plans that cross jurisdictional lines. No one first responder group or governmental agency can successfully "fix" the interoperable communications problems that face our nation.

The capabilities needed vary with the severity and scope of the event. In a "normal" daily event, such as a freeway accident, the first responders who need to communicate may be limited to those in a single jurisdiction or immediately adjacent jurisdictions. However, in a catastrophic event, effective interoperable communications among responders is vastly more complicated because the response involves responders from the federal government—civilian and military—and, as happened after Katrina, responders from various state and local governments who arrived to provide help under the Emergency Management Assistance Compact (EMAC) among states. These responders generally bring their own communications technology, which may or may not be compatible with that of the responders in the affected area.
Even if the technology were compatible, responders might not know it, because responders from different jurisdictions may use different names for the same communications frequencies. To address this issue, we recommended that a nationwide database of all interoperable communications frequencies, and a common nomenclature for those frequencies, be established. Katrina reminded us that in a catastrophic event, most forms of communication may be severely limited or simply destroyed—land lines, cell phone towers, satellite phone lines (which quickly became saturated). So even if all responders had had the technology to communicate with one another, they would have found it difficult to do so because transmission towers and other key supporting infrastructure were not functioning. The more comprehensive the interoperable communications capabilities we seek to build, the more difficult it is to reach agreement among the many players on how to do so and the more expensive it is to buy and deploy the needed technology. And an always contentious issue is who will pay for the technology—its purchase, training, maintenance, and updating.

Effective preparation and response require clear planning, a clear understanding of expected roles and responsibilities, and performance standards that can be used to measure the gap between what is and what should be. They also require identifying the essential capabilities whose development should be a priority and distinguishing them from capabilities that are useful but not as critical to successful response and mitigation in a major emergency. What is critical may cut across different types of events (e.g., incident command and communications) or may be unique to a specific type of event (e.g., defusing an explosive device).

DHS has undertaken three major policy initiatives to promote the further development of the all-hazards emergency preparedness capabilities of first responders. These include the development of the (1) National Response Plan (what needs to be done to manage a nationally significant incident, focusing on the role of federal agencies); (2) National Incident Management System (NIMS), a command and management process to be used with the National Response Plan during an emergency event (how to do what needs to be done); and (3) National Preparedness Goal (NPG), which identifies critical tasks and capabilities (how well it should be done).

The National Response Plan's (NRP) stated purpose is to "establish a comprehensive, national, all-hazards approach to domestic incident management across a spectrum of activities including prevention, preparedness, response, and recovery." It is designed to provide the framework for federal interaction with state, local, and tribal governments; the private sector; and nongovernmental organizations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended, established the process for states to request a presidential disaster declaration in order to respond to and recover from events that exceed state and local capabilities and resources. Under the NRP and the Stafford Act, the role of the federal government is principally to support state and local response activities. A key organizational principle of the NRP is that "incidents are typically managed at the lowest possible geographic, organizational, and jurisdictional level." An "incident of national significance" triggers federal support under the NRP; a second "catastrophic incident" trigger allows for accelerated federal support.
All catastrophic incidents are incidents of national significance, but not vice versa. The basic assumption of the federal government as a supplement to state and local first responders was challenged by Katrina, which (1) destroyed key communications infrastructure; (2) overwhelmed state and local response capacity, in many cases crippling their ability to perform their anticipated roles as initial on-site responders; and (3) destroyed the homes and affected the families of first responders, reducing their capacity to respond. Katrina almost completely destroyed the basic structure and operations of some local governments as well as their business and economic bases.

The NRP defines a catastrophic incident as "any natural or manmade incident, including terrorism, that results in extraordinary levels of mass casualties, damage, or disruption severely affecting the population, infrastructure, environment, economy, national morale, and/or government functions. A catastrophic incident could result in sustained national impacts over a prolonged period of time; almost immediately exceeds resources normally available to State, local, tribal, and private-sector authorities in the impacted area; and significantly interrupts governmental operations and emergency services to such an extent that national security could be threatened. All catastrophic incidents are Incidents of National Significance. These factors drive the urgency for coordinated national planning to ensure accelerated Federal/national assistance." Exactly what this means for federal, state, and local response has been the subject of recent congressional hearings on Katrina and the recently issued report by the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina.

Homeland Security Presidential Directive 5 required the adoption of NIMS by all federal departments and agencies and required that federal preparedness grants be dependent upon NIMS compliance by the recipients. NIMS is designed as the nation's incident management system. The intent of NIMS is to establish a core set of concepts, principles, terminology, and organizational processes to enable effective, efficient, and collaborative emergency event management at all levels. The idea is that if state and local first responders implement NIMS in their daily response activities, they will have a common terminology and understanding of incident management that will foster a swift and effective response when emergency responders from a variety of levels of government and locations must come together to respond to a major incident. As we noted in our report on interoperable communications, such communications are but one important component of an effective incident command planning and operations structure.

Homeland Security Presidential Directive 8 required DHS to coordinate the development of a national domestic all-hazards preparedness goal "to establish measurable readiness priorities and targets that appropriately balance the potential threat and magnitude of terrorist attacks and large-scale natural or accidental disasters with the resources required to prevent, respond to, and recover from them." The goal was also to include readiness metrics and standards for preparedness assessments and strategies and a system for assessing the nation's overall preparedness to respond to major events.
To implement the directive, DHS developed the National Preparedness Goal using 15 emergency event scenarios, 12 of which were terrorist-related, whose purpose was to form the basis for identifying the capabilities needed to respond to a wide range of emergency events. Some state and local officials and experts have questioned whether the scenarios were appropriate inputs for preparedness planning, particularly in terms of their plausibility and the emphasis on terrorist scenarios (12 of 15). The scenarios focused on the consequences that first responders would have to address. According to DHS's National Preparedness Guidance, the planning scenarios are intended to illustrate the scope and magnitude of large-scale, catastrophic emergency events for which the nation needs to be prepared. Using the scenarios, and in consultation with federal, state, and local emergency response stakeholders, DHS developed a list of over 1,600 discrete tasks, of which 300 were identified as critical tasks. DHS then identified 36 target capabilities to provide guidance to federal, state, and local first responders on the capabilities they need to develop and maintain. That list has since been refined, and DHS released a revised draft list of 37 capabilities in December 2005 (see appendix I). Because no single jurisdiction or agency would be expected to perform every task, possession of a target capability could involve enhancing and maintaining local resources, ensuring access to regional and federal resources, or some combination of the two. However, DHS is still in the process of developing goals, requirements, and metrics for these capabilities, and it is reassessing both the National Response Plan and the National Preparedness Goal in light of the Hurricane Katrina experience.

Prior to Katrina, DHS had established seven priorities for enhancing national first responder preparedness: implementing the NRP and NIMS; implementing the Interim National Infrastructure Protection Plan; expanding regional cooperation; strengthening capabilities in interoperable communications; strengthening capabilities in information sharing and collaboration; strengthening capabilities in medical surge and mass prophylaxis; and strengthening capabilities in detection of and response to chemical, biological, radiological, nuclear, and explosive weapons. Those seven priorities are incorporated into DHS's fiscal year 2006 homeland security grant guidance. The guidance also adds an eighth priority that emphasizes emergency operations and catastrophic planning.

With almost any skill and capability, experience and practice enhance proficiency. For first responders, exercises—particularly for the type or magnitude of events for which there is little actual experience—are essential for developing skills and identifying what works well and what needs further improvement. Major emergency incidents, particularly catastrophic incidents, by definition require the coordinated actions of personnel from many first responder disciplines and all levels of government, plus nonprofit organizations and the private sector. It is difficult to overemphasize the importance of effective interdisciplinary, intergovernmental planning, training, and exercises in developing the coordination and skills needed for effective response.
Following are some illustrative tasks needed to prepare for and respond to a major emergency incident that could be tested with realistic exercises: assessing potential needs, marshalling key resources, and moving property and people out of harm's way prior to the actual event (where predictable or where there is forewarning); obtaining and communicating accurate situational data for evaluating and coordinating an appropriate response during and after the event; leadership, that is, effectively blending (1) active involvement of top leadership in unified incident command and control with (2) decentralized decision-making authority that encourages innovative approaches to effective response; clearly understood roles and responsibilities prior to and in response to an event; effective communication and coordination; and the ability to identify, draw on, and effectively deploy resources from other governmental, nonprofit, and private entities for effective response.

For exercises to be effective in identifying both strengths and areas needing attention, it is important that they be realistic, designed to test and stress the system, involve all key persons who would be involved in responding to an actual event, and be followed by honest and realistic assessments that result in action plans that are implemented. In addition to relevant first responders, exercise participants should include, depending upon the scope and nature of the exercise, mayors, governors, and state and local emergency managers who would be responsible for such things as determining if and when to declare a mandatory evacuation or ask for federal assistance. The Hurricane Pam exercise of 2004 was essentially a detailed planning exercise that was highly realistic and involved a wide variety of federal, state, and local first responders and officials. Although action plans based on this exercise were still being developed and implemented when Katrina hit, the exercise proved to be remarkably prescient in identifying the challenges that would be presented if a major hurricane hit New Orleans and flooded the city.

The importance of post-exercise assessments is illustrated by a November 2005 report by the Department of Homeland Security's Office of Inspector General on the April 2005 Top Officials 3 Exercise (TOPOFF3), which noted that the exercise highlighted at all levels of government a fundamental lack of understanding regarding the principles and protocols set forth in the NRP and NIMS. For example, the report cited confusion over the different roles and responsibilities performed by the Principal Federal Officer (PFO) and the Federal Coordinating Officer (FCO). The PFO is designated by the DHS Secretary to act as the Secretary's local representative in overseeing and executing the incident management responsibilities under HSPD-5 for incidents of national significance. The PFO does not direct or replace the incident command system and structure, and does not have direct authority over the senior law enforcement officials, the FCO, or other federal and state officials. The FCO is designated by the President and manages federal resources and support activities in response to disasters and emergencies declared by the President. The FCO is responsible for coordinating the timely delivery of federal disaster assistance and programs to the affected state, the private sector, and individual victims.
The FCO also has authority under the Stafford Act to request and direct federal departments and agencies to use their authorities and resources in support of state and local response and recovery efforts. In addition to confusion over the respective roles and authority of the PFO and FCO, the report noted that the exercise highlighted problems regarding the designation of a PFO and the lack of guidance on training and certification standards for PFO support personnel. The report recommended that DHS continue to train and exercise the NRP and NIMS at all levels of government and develop operating procedures that clearly define individual and organizational roles and responsibilities under the NRP.

In the last several years, the federal government has awarded some $11 billion in grants to federal, state, and local authorities to improve emergency preparedness, response, and recovery capabilities. What is remarkable about the whole area of emergency preparedness and homeland security is how little we know about how states and localities (1) finance their efforts in this area, (2) have used their federal funds, and (3) are assessing the effectiveness with which they spend those funds.

The National Capital Region (NCR) is the only area in the nation that has a statutorily designated regional coordinator. In our review of emergency preparedness in the NCR, we noted that a coordinated, targeted, and complementary use of federal homeland security grant funds was important in the NCR—as it is in all areas of the nation. The findings from our work on the NCR are relevant to all multiagency, multijurisdictional efforts to assess and improve emergency preparedness and response capabilities. In May 2004, we reported that the NCR faced three interrelated challenges: the lack of (1) preparedness standards (which the National Preparedness Goal was designed to address); (2) a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and for assessing the benefits of expenditures in enhancing first responder capabilities; and (3) a readily available, reliable source of data on the funds available to first responders in the NCR and their use. Without the standards, a regionwide plan, and data on spending, we noted, it is extremely difficult to determine whether NCR first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve a variety of first responder disciplines from NCR jurisdictions.

To the extent that the NCR had coordinated the use of federal grant funds, it had focused on funds available through the Urban Area Security Initiative grants. We noted that it was important to have information on all grant funds available to NCR jurisdictions and their use if the NCR was to effectively leverage regional funds and avoid unnecessary duplication. As we observed, the fragmented nature of the multiple federal grants available to first responders—some awarded to states, some to localities, some directly to first responder agencies—may make it more difficult to collect and maintain regionwide data on the grant funds received and the use of those funds. Our previous work suggests that this fragmentation in federal grants may reinforce state and local fragmentation and can also make it more difficult to coordinate and use those multiple sources of funds to achieve specific objectives.
A new feature in the fiscal year 2006 DHS homeland security grant guidance for the Urban Area Security Initiative (UASI) grants is that eligible recipients must provide an "investment justification" with their grant application. States must use this justification to outline the implementation approaches for specific investments that will be used to achieve the initiatives outlined in their state Program and Capability Enhancement Plan. These plans are multiyear global program management plans for the entire state homeland security program that look beyond federal homeland security grant programs and funding. The justifications must justify all funding requested through the DHS homeland security grant program, including all UASI funding, any base formula allocations for the State Homeland Security Program and the Law Enforcement Terrorism Prevention Program, and all formula allocations under the Metropolitan Medical Response System and Citizen Corps Program. In the guidance, DHS notes that it will use a peer review process to evaluate grant applications on the basis of the effectiveness of a state's plan to address the priorities it has outlined and thereby reduce its overall risk.

On February 1, 2006, GAO issued its preliminary observations regarding the preparation for and response to Hurricane Katrina. Catastrophic events are different in the severity of the damage, the number of persons affected, and the scale of preparation and response required. They quickly overwhelm or incapacitate local and/or state response capabilities, thus requiring coordinated assistance from outside the affected area. Thus, the response and recovery capabilities needed during a catastrophic event differ significantly from those required to respond to and recover from a "normal disaster." Key capabilities such as emergency communications, continuity of essential government services, and logistics and distribution systems underpin citizen safety and security and may be severely affected. Katrina essentially destroyed state and local communications capabilities, severely hampering timely, accurate damage assessments in the storm's wake.

Whether the catastrophic event comes without warning or with some prior notice, as with a hurricane, it is essential that the leadership roles, responsibilities, and lines of authority for responding to such an event be clearly defined and effectively communicated in order to facilitate rapid and effective decision making, especially in preparing for and in the early hours and days after the event. Streamlining, simplifying, and expediting decision making must quickly replace "business as usual." Yet at the same time, uncoordinated initiatives by well-meaning persons or groups can actually hinder effective response, as was the case following Katrina.

Katrina raised a number of questions about the nation's ability to respond effectively to catastrophic events—even one with several days' warning. GAO has work underway on a number of issues related to the preparation, response, recovery, and reconstruction efforts related to Hurricanes Katrina and Rita. We are examining what went well and why, what did not go well and why, and what our findings suggest about any specific changes that may be needed.
Assessing, developing, attaining, and sustaining needed emergency preparedness, response, and recovery capabilities is a difficult task that requires sustained leadership and the coordinated efforts of many stakeholders from a variety of first responder disciplines, levels of government, and nongovernmental entities. There is no "silver bullet," no easy formula. It is also a task that is never done: it requires continuing commitment and leadership because circumstances change, and continuing trade-offs because we will never have the funds to do everything we might like to do. The basic steps are easy to state but extremely difficult to complete: develop a strategic plan with clear goals, objectives, and milestones; develop performance goals that can be used to set desired performance levels; collect and analyze relevant and reliable data; assess the results of analyzing those data against performance goals to guide priority setting; take action based on those results; and monitor the effectiveness of actions taken to achieve the designated performance goals.

It is important to identify the specific types of capabilities, such as incident command and control, with broad application across emergencies arising from "all-hazards," and those that are unique to particular types of emergency events. The priority to be given to the development of specific, "unique" capabilities should be tied to an assessment of the risk that those capabilities will be needed. In California, for example, it is not a question of if, but when, a major earthquake will strike the state. There is general consensus that the nation is at risk of an infectious pandemic at some point, and California has just issued a draft plan for preparing for and responding to such an event. On the other hand, assessing specific terrorist risks is more difficult.

As the nation assesses the lessons of Katrina, we must incorporate those lessons in assessing state and local emergency management plans, amend those plans as appropriate, and reflect those changes in planned expenditures and exercises. This effort requires clear priorities, hard choices, and objective assessments of current plans and capabilities. Failure to address these difficult tasks directly and effectively will result in preparedness and response efforts that are less effective than they should and can be.

That concludes my statement, and I would be pleased to respond to any questions the Commission Members may have.
This testimony discusses the challenges of effective emergency preparedness for, response to, and recovery from major emergencies, including catastrophic incidents. Effective emergency preparedness and response for major events requires the coordinated planning and actions of multiple players from multiple first responder disciplines, jurisdictions, and levels of government, as well as nongovernmental entities; it requires putting aside parochialism and working together prior to and after an emergency incident. September 11, 2001, fundamentally changed the context of emergency management preparedness in the United States, including federal involvement in preparedness and response. The biggest challenge in emergency preparedness is getting effective cooperation in planning, exercises, and capability assessment and building across first responder disciplines and intergovernmental lines. DHS has developed several policy documents designed to define the federal government's role in supporting state and local first responders in emergencies, implement a uniform incident command structure across the nation, and identify performance standards that can be used in assessing state and local first responder capabilities. Realistic exercises are a key component of testing and assessing emergency plans and first responder capabilities, and the Hurricane PAM planning exercise demonstrated their value. With regard to the status of emergency preparedness across the nation, we know relatively little about how states and localities (1) finance their efforts in this area, (2) have used their federal funds, and (3) are assessing the effectiveness with which they spend those funds. Katrina has raised a host of questions about the nation's readiness to respond effectively to catastrophic emergencies. Effective emergency preparedness is a task that is never done; it requires continuing commitment and leadership because circumstances change, and continuing trade-offs because we will never have the funds to do everything we might like to do.
Many land use authorities currently exist that permit the Secretary of Defense, the secretaries of the military departments, or both to make more efficient use of underutilized or not utilized real property under their jurisdiction or control, such as authorities permitting the outleasing or conveyance of real property controlled by DOD or the issuance of licenses, permits, or easements upon real property controlled by DOD. The services reported that one of the most commonly used authorities is Section 2667 of Title 10. Under this authority, the secretaries of the military departments generally have the authority to lease nonexcess real property under the control of the respective department in exchange for cash or in-kind consideration not less than the fair market value of the lease interest. Leases executed pursuant to this authority must comply with several conditions; for example, a lease may not be for more than 5 years unless the secretary concerned determines that a lease for a longer period will promote the national defense or be in the public interest. Money received from leases entered into pursuant to Section 2667 must be deposited into special Treasury accounts, with some exceptions. Further, to the extent provided in appropriations acts, at least half of the proceeds deposited into these special Treasury accounts must be returned to the installation where the proceeds were derived. Most recently, the National Defense Authorization Act for Fiscal Year 2008 further refined this leasing authority in several ways; for example, provision or payment of utility services was designated as an acceptable in-kind service, while facility operation support for the secretary concerned was eliminated as an acceptable form of consideration. Leases executed pursuant to Section 2667 not only benefit the installation by leveraging underutilized land in exchange for rent money or in-kind consideration, such as new construction or maintenance of existing facilities, but they also benefit the developer and the community. For example, according to DOD officials, these projects can establish long-term relationships between developers and private sector and government entities with specific real estate needs that are potential occupants of the space. In addition, developers receive market rate returns on their investments and access to new markets, such as federal government and military support contractors. These agreements benefit the community by providing additional jobs, a broader tax base, and renovation of deteriorated assets.

Another frequently used authority, Section 2681 of Title 10, authorizes the Secretary of Defense to enter into contracts with commercial entities that desire to conduct commercial test and evaluation activities at a major range and test facility installation. Such contracts must contain various provisions pertaining to the Secretary's ability to terminate, prohibit, or suspend certain tests under the contracts, as well as requirements pertaining to the contract price. Section 2681 also contains rules on the retention of funds. Further, the Secretary of Defense is required to issue regulations to carry out this provision.

Under Section 2878 of Title 10, the secretary concerned may convey or lease property or facilities to eligible entities for the purpose of using the proceeds to carry out activities under the Military Housing Privatization Initiative (MHPI).
This authority cannot be used to convey or lease property or facilities located on or near military installations approved for closure under a base closure law. The conveyance or lease of property or facilities under this section must be for such terms and conditions as the secretary concerned considers appropriate for MHPI purposes while protecting the interests of the United States. As part or all of the consideration for a conveyance or lease under this section, the purchaser or lessor shall enter into an agreement with the secretary to ensure that a preference will be given to members of the armed forces and their dependents in the lease or sublease for a reasonable number of the housing units covered by the conveyance or lease. Property leased or conveyed using this authority is exempt from certain property management laws.

Another authority, Section 2869 of Title 10, allows the secretary concerned to enter into an agreement to convey real property (including any improvements) under the secretary's jurisdiction that is located on a military installation that is either closed or realigned under a base closure law or located on an installation not closed or realigned under base closure law and determined to be excess to DOD needs. Such a conveyance may be made only to a person who agrees, in exchange for the real property, (1) to carry out a military construction project or land acquisition, including the acquisition of all right, title, and interest or a lesser interest in real property under an agreement entered into under Section 2684a of Title 10 to limit encroachments and other constraints on military training, testing, and operations, or (2) to transfer to the secretary concerned housing that is constructed or provided by the person and located at or near a military installation at which there is a shortage of suitable military family housing, military unaccompanied housing, or both. There are various rules and conditions regarding the use of this authority, including a requirement that advance notice be provided to Congress before use, certain limits on the deposit and use of funds, and annual reporting requirements to Congress.

Beyond the various real property authorities that may be utilized by DOD under certain circumstances, DOD must comply with a framework of legal requirements and restrictions, many of which relate to environmental and cultural preservation, in using its land, buildings, and facilities. For example, DOD guidance requires that all proposed outleasing actions (regardless of grantee or consideration) be subject to the appropriate level of analysis required by the National Environmental Policy Act of 1969 and its implementing regulations. Further, the National Historic Preservation Act lays out the responsibilities of federal agencies related to certain cultural resources under their stewardship and authorizes the expansion and maintenance of a National Register of Historic Places composed of districts, sites, buildings, structures, and objects significant in the history, architecture, archeology, engineering, and culture of the United States and worthy of preservation, among other things.
Although many land use planning authorities currently exist that permit the Secretary of Defense, the secretaries of the military departments, or both to help make more efficient use of real property under their jurisdiction or control under various circumstances, our analysis of service data showed that Section 2667 of Title 10 is the most frequently used, for both traditional leases and longer-term, more financially complex enhanced use leases. We further found that the second most frequently used authority is Section 2681 of Title 10; this authority was used primarily with respect to Army real property, which accounted for about 86 percent of its reported usage during fiscal years 2005 through 2007.

There are many other land use planning authorities that the Secretary of Defense, the secretaries of the military departments, or both may use under certain circumstances to better utilize existing real property under their control. Our analysis indicates that more than 30 authorities of general and permanent applicability in the U.S. Code are available to the Secretary of Defense, the secretaries of the military departments, or both pertaining to the utilization of existing real property controlled by DOD. The services reported that these other authorities have not been used as frequently as Sections 2667 and 2681 of Title 10. In addition to these codified authorities of general and permanent applicability, special legislation is often enacted that grants authority to, or requires, the Secretary of Defense, the secretary of a military department, or both to execute particular land use activities at specific installations or parcels of land controlled by DOD.

The services reported using Section 2667 of Title 10 a total of 744 times during fiscal years 2005 through 2007 for both traditional leases and longer-term, more financially complex enhanced use leases. Table 1 shows the breakdown of the reported use of Section 2667 of Title 10 by the service at which the real property was located during fiscal years 2005 through 2007. The majority of agreements executed under Section 2667 of Title 10 over the past 3 years are traditional, non-enhanced use lease agreements. During fiscal year 2007, the services reported that 222 new agreements were signed under Section 2667 of Title 10, earning approximately $51 million in revenue. For example, at Fort Meade, Maryland, installation officials provided data showing that the installation will receive $5,600 monthly through November 2010 on two 5-year cellular phone tower leases; at Camp Pendleton, California, the Marine Corps earned over $1 million from two agricultural leases during fiscal year 2007. Using Section 2667 of Title 10, the Army and Air Force reported earning combined totals of approximately $14 million in fiscal year 2005 and $22 million in fiscal year 2006.

Under this same authority, the services also reported executing more financially complex, longer-term enhanced use leases. These leases are usually for terms greater than 30 years, and payment is typically made in in-kind services, such as new construction or maintenance and repair, rather than cash. According to the Army's draft Enhanced Use Leasing Handbook, the longer lease terms are more in line with private real estate development standards, and therefore help satisfy financial lending requirements and help make the development worthwhile to all enhanced use lease project stakeholders.
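The enhanced use lease valuations reported below are expressed as net present values (NPV), which discount each year's consideration back to today's dollars. The following is only a minimal sketch of that arithmetic; the annual payment, discount rate, and lease term are invented for illustration and are not the services' actual figures.

```python
# Minimal sketch of the NPV arithmetic behind long-term lease valuations.
# All figures are hypothetical and chosen only for illustration.

def npv(annual_payment, rate, years):
    """Discount a flat stream of end-of-year payments to present value."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Example: a 50-year lease paying $1.5 million per year in in-kind
# consideration, discounted at 5 percent, is worth roughly $27.4 million
# today, comparable in scale to the valuations reported below.
print(f"NPV: ${npv(1_500_000, 0.05, 50):,.0f}")
```

In practice the payment stream is rarely flat and the discount rate is negotiated, but the same discounting logic underlies the reported figures.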
During fiscal years 2005 through 2007, Army officials reported that 10 of these enhanced use leases were signed, and Air Force officials reported that 4 were signed. The services project these leases to be worth more than $1.1 billion over the life of the leases, with the Army estimating the bulk of the projected revenue. For example, the Army reported that a lease was signed with a motor vehicle company to provide land for it to install a hot weather vehicle test track at Yuma Proving Ground, Arizona. The track will be available for the Army's use in testing its vehicles, and the Army will obtain additional compensation to allow it to install an additional test track, at a total net present value estimated at $26.8 million over the 50-year life of the lease. In addition, at Nellis Air Force Base, Nevada, Air Force officials reported that land was provided through a public-private partnership to install an electricity-generating photovoltaic array whose net present value is estimated at $10.9 million for electricity that will be provided to the installation over the 20-year life of the lease.

Furthermore, service officials reported that several more enhanced use leases are in process: 24 for the Army, 33 for the Air Force, and 14 for the Navy. For example, the Army is trying to lease land owned by Fort Meade, Maryland, to a contractor who will build a new office complex. This 50-year lease project is expected to provide office space for military and security-related defense contractor jobs coming to the area as a result of the 2005 BRAC round. The contractor is expected to relocate a golf course from the interior of the fort to its exterior to make room on the old golf course site for BRAC- and National Security Agency-related construction. The Air Force is negotiating a 50-year ground lease of 180 acres of land along the western perimeter of Hill Air Force Base, Utah. Air Force officials told us that the lessee will construct an approximately 2.8 million square foot office park consisting of commercial office, retail, hotel, and restaurant space on the 180 acres of leased Air Force land. At least 600,000 square feet of the development will become Air Force owned and maintained office space, and Air Force officials expect to receive additional in-kind compensation over the life of the lease to be used for additional Air Force projects and maintenance. At the end of the lease period, the land and all improvements on both of the projects described above will revert to the applicable service. At the time of our review, the Navy was considering an enhanced use lease of the former Portsmouth Naval Prison at Portsmouth Naval Shipyard in Kittery, Maine, for not more than 50 years. Marine Corps officials told us that an enhanced use lease has not yet been executed with regard to Marine Corps land, but that several potential projects are being considered.

Service officials reported that Section 2681 of Title 10 was used 601 times during fiscal years 2005 through 2007, and about 86 percent of its use was with respect to Army major range and test facility installations. This authority was also used with respect to Navy and Air Force installations during this period, but much less frequently than for the Army. This authority was not used with respect to Marine Corps real property during this period. Table 2 shows the breakdown of the reported use of Section 2681 of Title 10 during fiscal years 2005 through 2007 according to the service at which the installation was located.
The authority was used on Army installations to allow defense contractors to test major weapons systems under development for the Army and the other services. This authority was used on Navy installations for several projects, including allowing an aviation company to evaluate noise reduction technology on a static engine. The authority was also used at an Air Force facility to allow a major automobile manufacturer to test automobile antennas for radio frequency emissions.

Our analysis shows that there are more than 30 authorities of general and permanent applicability in the U.S. Code available to the Secretary of Defense, the secretaries of the military departments, or both pertaining to the utilization of existing DOD real property, such as the authority to outlease, grant easements upon, permit special use of, or convey real property. Many of these authorities may be used only under various specified circumstances and contain unique requirements or limitations. For example, while Section 2878 of Title 10 gives the secretary concerned the authority to convey or lease certain DOD real property to an eligible entity, this authority may be used only for the specific purpose of using the proceeds to carry out activities under MHPI and contains limitations, including on the kind of real property leased or conveyed and certain requirements for consideration. Service officials indicated that while some of these other authorities were utilized with regard to their respective real property during fiscal years 2005 through 2007, they have been used much less often than Sections 2667 and 2681 of Title 10. For example, the services reported that the authority in Section 2878 of Title 10 was used 53 times during fiscal years 2005 through 2007. Table 3 shows examples of authorities other than Sections 2667 and 2681 of Title 10 that the services reported using with respect to real property under their control over the 3-year period.

The services reported that Section 2869 of Title 10 was used only twice during fiscal years 2005 through 2007. DOD reported that in 2005, the Secretary of the Army signed an exchange agreement with a private developer, trading the 16.29-acre Bellmore, New York, property—closed during the 1995 BRAC process—for the construction of a covered fuel truck storage facility at Fort Drum, New York, and an additional $6.65 million in cash. DOD also reported that in 2006, 13 acres of Army land at Devens Reserve Forces Training Area, Massachusetts, were transferred to the Massachusetts Development Finance Agency in exchange for over $1 million in renovations to buildings and land at the same installation. Air Force officials stated that Section 2869 of Title 10 is currently being used to exchange land previously used by the Defense Logistics Agency as a fuel supply depot for military construction at March Air Reserve Base, California.

In addition to land use authorities of general and permanent applicability in the U.S. Code, special legislation pertaining to specific land use activities at particular installations or parcels of land is also regularly enacted.
For example, the John Warner National Defense Authorization Act for Fiscal Year 2007 contained a provision prohibiting the Secretary of Defense and the Secretary of the Navy from entering into an agreement (or authorizing any other person to enter into an agreement) that would either (1) authorize civil aircraft to regularly use an airfield or any other property or (2) convey any real property at the installation for the purpose of permitting the use of the property by civil aircraft, at four Navy and Marine Corps bases in California: Naval Air Station North Island, Marine Corps Air Station Miramar, Marine Corps Air Station Camp Pendleton, and Marine Corps Base Camp Pendleton. Most of the nearly 50 pieces of special legislation included in the National Defense Authorization Acts for Fiscal Years 2005, 2006, and 2007 pertained to land conveyances or exchanges at specific bases or installations. For example, Section 2851 of the National Defense Authorization Act for Fiscal Year 2006 authorized the Secretary of the Navy to convey to the County of San Diego, California, approximately 230 acres along the eastern boundary of Marine Corps Air Station Miramar, California, for the purpose of removing the property from the boundaries of the installation and permitting the county to preserve the entire property as a public park and recreational area known as the Stowe Trail. The legislation contained several terms and conditions on its use, such as a requirement to provide written notice to Congress related to its use.

Land, buildings, and facilities on DOD installations may appear underutilized or not utilized but nonetheless be unavailable for other uses for several reasons. Restrictions and constraints on DOD's use of lands under its control include setbacks for antiterrorism protection, mission requirements, necessary safety zones, and environmental considerations. Buildings and facilities on DOD installations may also appear underutilized or not utilized because of historical considerations, the need to make room for incoming personnel, or the need for repair or demolition funding.

Antiterrorism requirements place constraints on the use of land. For example, antiterrorism concerns require standoff distances for inhabited buildings from the controlled perimeter of the base and from other adjacent buildings, parking areas, and trash containers to minimize the extent of injury or death to occupants in the event of a terrorist incident. Officials at Marine Corps Base Camp Pendleton, California, told us that unutilized land between existing buildings could not be used to construct new buildings because of antiterrorism constraints and requirements.

Installation mission needs, including the need for open space to fulfill training requirements, also restrict the use of land. Maneuver training lands and ranges are strictly controlled areas that do not mix well with other land uses. For example, officials at Marine Corps Base Camp Pendleton stated that undeveloped land on the coast is the only space available to the Marines on the West Coast for amphibious assault training. Similarly, at Fort Sam Houston, Texas, a curving strip of land on the western side of the base, approximately 1 mile long and 800 feet wide, serves as a combination parade, drill, and training ground for the units headquartered along its length.
In addition, safety requirements, which necessitate that land be kept clear to perform the installation's mission, can place additional restrictions on the use of land. For instance, installations with active runways require clear zones and accident potential zones that constrain land use because of air operations. These constraints include restrictions on development requiring a minimum separation distance from airfield pavements and height limitations on buildings. Structures that violate these criteria generally may not be built without a waiver. Randolph Air Force Base, Texas, for example, has clear zones and accident potential zones that extend off both ends of its dual parallel runways into the adjacent communities. These communities, base officials told us, have cooperated with the Air Force to limit development within the accident potential zones. Also for safety reasons, live fire ranges and munitions storage bunkers require clear zones. Facilities are usually not sited within munitions clear zones unless they are part of the munitions operations.

Various environmental restrictions and constraints, which can affect the location of new facilities and even mission operations, place additional limits on land use. These restrictions and constraints can be caused by the presence of threatened or endangered species; critical habitats, such as seasonal breeding grounds, flood plains, wetlands, and sensitive plant communities; and the existence of hazardous materials. Further, DOD must comply with a framework of legal requirements and restrictions in using its land use planning authorities. For example, DOD guidance requires that all proposed outleasing actions (regardless of grantee or consideration) be subject to the appropriate level of analysis required by the National Environmental Policy Act of 1969 and its implementing regulations. Installations use various management tools, such as integrated natural resource management plans, to integrate their military missions and natural resources conservation. The construction of new facilities can damage critical habitats, and mission-related noise and light can affect the ability of some endangered species to breed successfully. For example, Navy officials told us that Naval Air Station North Island, California, an installation of Naval Base Coronado, has a vacant parcel of land that remains undeveloped because it is the nesting area for an endangered bird, the California least tern. As shown in figures 1 and 2, the nesting area borders maintenance facilities and is adjacent to the control tower. A base official told us that an attempt to transplant the nesting area to a more suitable location on the installation could take 5 years, if it succeeds at all. Additionally, at Naval Base San Diego, a reclamation project on the largest parcel of open usable land on the base is removing and disposing of the top 2 feet of contaminated soil. Base officials told us that the reclaimed land will house the base transportation office and a Defense Logistics Agency facility.

The historical significance of buildings and structures may also contribute to their being underutilized or not utilized. Installations work with state-designated state historic preservation officers and their representatives to determine the cultural impact that actions such as construction, renovation, or demolition might have on a historic building.
Because of the expense of meeting requirements for historic buildings, installation officials indicated that it often costs less to demolish a building and construct a new one than to renovate an existing historic building for reuse with a new or different mission. In fiscal year 2007, DOD reported more than 2,200 buildings as historically significant and more than 7,500 buildings as eligible for historic designation. For example, Army officials stated that Fort Sam Houston has over 800 historic buildings, many of which are located in a designated national historic district. One group of these buildings, the Long Barracks, on the periphery of the historic district, consists of 11 buildings that have been largely unutilized for over 15 years. (See fig. 3.) One of these unutilized buildings is a 1,000-foot-long, two-story former barracks listed as a contributing element to a national historic district. A base official told us that the prolonged nonutilization is due both to the Long Barracks' inclusion in a national historic district and to the extensive, costly renovations the associated buildings require.

In some cases, the services reported that the enhanced use leasing and housing privatization authorities have been used creatively to maintain and renovate historic buildings. For example, the old Brooke Army Medical Center at Fort Sam Houston went unutilized after the new Brooke Army Medical Center opened. Army officials stated that an enhanced use lease was negotiated with developers whereby the old Brooke Army Medical Center was renovated into usable office space that is currently fully leased to various Army tenants. Similarly, Air Force officials stated that Section 2878 of Title 10 was used at Randolph Air Force Base to successfully renovate, repair, and maintain 297 housing units designated as contributing elements to the national historic district located on the base.

Incoming and outgoing or reduced missions, units, or personnel can leave portions of buildings and structures temporarily underutilized or not utilized while the transition occurs. A building or facility may require renovation to accommodate incoming or changing missions. For example, officials at Naval Base Point Loma, California, described two buildings currently not utilized. The first, an empty warehouse, is under consideration to house the Navy Band, currently located at Naval Base Coronado. If this plan is approved, the warehouse would have to be modified to fit the Navy Band's mission requirements before the relocation could occur. The second unutilized building is a barracks that has been laid up, or mothballed, because of the reduced number of personnel on the base. In addition, at Naval Base San Diego, units have been consolidated into one building so that another building may be renovated prior to the arrival of a new shipping platform at the base. The Navy will be unable to utilize this building during the renovation.

In addition, property may be classified as not utilized when a service is waiting for funding for repairs or demolition. For example, Lackland Air Force Base has a 48-unit visiting officers' quarters and a student dormitory, both of which are unused because of the presence of mold. The Air Force has sought funding both to demolish the visiting officers' quarters and to repair the student dormitory; meanwhile, both of these facilities remain not utilized.
In addition, officials at Naval Base San Diego told us that a condemned maintenance repair building is occupied by tenants on the first floor only. The second and third floors have been condemned because of structural conditions and remain unoccupied while the building awaits demolition.

The services use similar policies and procedures for responding to requests for space on an installation by other federal agencies and by organizations within DOD. DOD guidance requires the military departments to maintain a program monitoring the use of real property to ensure that all real property holdings under their control are being used to the maximum extent possible consistent with both peacetime and mobilization requirements, and it establishes priorities that the military departments must use when assigning available space on their respective installations. DOD guidance also provides that DOD activities should provide requested support to other DOD activities when support can be provided without jeopardizing the mission of the installation. Further, the secretaries of the military departments have established programs and procedures to manage their real property that encourage such space sharing. For example, a Navy instruction states that installations should pursue the outleasing of any underutilized real property judged necessary for mobilization/surge capacity, both to ensure that the property is maintained and to generate revenue for the installation, and that in land planning, decision makers should be presented with alternatives that analyze and develop recommendations for mutual land and facilities use with other DOD entities; federal, state, and local governments; and private entities, where appropriate. An Army regulation states that when real property is underutilized, not used, or not put to optimum use but required to support DOD missions, the garrison commander should consider allowing its interim use by other federal agencies, state and local governments, or the private sector, among other things. Finally, Air Force policy states that Air Force property should be made available for use by others as much as possible and that priority should be given to other military departments and federal agencies over private organizations.

Department-specific policies govern the procedures for allowing the use of space by other federal agencies, including both DOD and non-DOD tenants. In general, department officials told us that requests are received at the installation level and must include information on the requester's facilities and land requirements, justification for selecting the proposed installation, and a statement of environmental impact. After a request is received, it is reviewed by the installation. The process for reviewing these requests varies by installation. For example, officials at Camp Pendleton told us that at their installation the request is reviewed by the facilities directorate and any affected base activities. The facilities directorate and affected activities then make a presentation to the base commander with their recommendations on the request. Navy Region Southwest has a Regional Space Allocation Committee that reviews all requests for space at Naval Base Point Loma, Naval Base Coronado, and Naval Base San Diego. The committee, with input from the base commanders, meets on an as-needed basis, reviews all requests, and makes recommendations to the Commander, Navy Region Southwest. Final approval authority varies by military department and is specified in department guidance.
In accordance with a Secretary of the Navy instruction, requests for space at Navy installations must be approved by the regional commander and the Commander, Navy Installations Command, while requests for space at Marine Corps installations must be approved by the installation commander/commanding officer and the Commandant of the Marine Corps; licenses of 1 year or less, however, may be approved by the regional commander for Navy property or by the installation commander/commanding officer for Marine Corps property. An Air Force handbook states that the Secretary of the Air Force, under administrative powers, may authorize other federal government agencies, DOD agencies, or military departments to use Air Force real property by permit. An Army regulation states that approval of requests for space by other federal agencies will be made by Headquarters, Department of the Army.

We visited installations from each service and found that each installation we visited had multiple DOD and non-DOD federal tenants. For example, the Environmental Protection Agency, the Architect of the Capitol, and the National Guard use space at Fort Meade in Maryland. Installations in Navy Region Southwest are home to groups from the Coast Guard, the Army, the Air Force, the Department of the Interior, and the Department of Transportation. Finally, Hill Air Force Base, Utah, has several DOD tenants, including the Army Corps of Engineers and the Defense Logistics Agency, as well as non-DOD federal tenants, such as the Federal Aviation Administration and the Forest Service.

We requested comments from DOD, but none were provided. We are sending copies of this report to the Secretary of Defense and to interested congressional committees. We will make copies available to others upon request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

To determine how the Department of Defense (DOD) has used its land use planning authorities, we researched and developed a comprehensive list of many of the most relevant authorities in the U.S. Code that could potentially be utilized by the Secretary of Defense, the secretaries of the military departments, or both. After speaking with DOD and service officials about the authorities that they used most often, we provided a written request to each service inquiring which of a select list of authorities they used and what kind and amount of overall compensation they obtained from using these authorities during fiscal years 2005 through 2007. We also asked the services, in writing, about special land use planning legislation available to them during these same fiscal years. Service headquarters officials provided this information to us. Specifically, we spoke with officials from the Air Force Real Property Agency, the Marine Corps' Land Use and Military Construction Branch, the Office of the Assistant Secretary of the Army for Installations and Environment, and the Office of the Assistant Secretary of the Navy for Installations and Environment. We cross-referenced data where appropriate.
Specifically, in our count of the number of pieces of special legislation pertaining to land use planning in the National Defense Authorization Acts for Fiscal Years 2005, 2006, and 2007, we included both new and modified authorities available to the Secretary of Defense or the secretaries of the military departments pertaining to the utilization of a specific piece of real property, such as the authority to outlease, convey, or transfer that property, as well as requirements that the applicable secretary use a specific piece of real property in a particular manner. We did not include, for example, statements regarding the sense of Congress with respect to land planning or reports required regarding land planning. We analyzed the services' responses and followed up with questions on any areas of ambiguity.

We visited selected installations and interviewed installation officials about their land use activities, discussed both traditional leases and enhanced use leases with them, and obtained documentation on specific leases, their terms, and compensation. We selected 10 installations to visit based on size; proximity to other installations; and past, current, or planned large real estate projects, such as enhanced use leases or conveyances. Table 4 lists the installations that we visited, by service. We also gathered additional information on each service's enhanced use lease program and analyzed data we obtained on existing leases and on those currently under consideration.

To determine the reasons why land, buildings, and facilities on DOD installations may appear underutilized or not utilized, we reviewed DOD and service guidance relevant to land use planning. We interviewed service officials to identify the available uses for land, buildings, and facilities that may be underutilized or not utilized yet still be unavailable for development or other use. We visited selected installations and interviewed installation officials about the restrictions and constraints placed on the utilization of land, buildings, and facilities. We also reviewed documentation from the installations relevant to land use planning and to restrictions and constraints on the use of their lands, buildings, and facilities.

To determine the policies and procedures used by the services to respond to requests by other federal agencies for space at a DOD installation, we reviewed relevant DOD and service guidance. We also visited selected installations and interviewed installation officials about how they respond to requests for space by other federal agencies. We reviewed documentation from selected installations on the agreements that they currently have with other federal agencies.

We conducted this performance audit from September 2007 to July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Harold Reich, Assistant Director; William Bates; Scott Behen; Leslie Bharadwaja; Joanne Landesman; Katherine Lenane; Richard Meeks; and Charles Perdue made key contributions to this report.
The Department of Defense (DOD) is one of the largest landholding agencies in the federal government with more than 577,500 facilities at 5,300 sites on over 32 million acres. GAO has previously reported that the management of DOD-held real property is a high-risk area, in part because of deteriorating facilities and problems with excess and underutilized property. To address these problems, DOD has developed a multipart strategy involving base realignment and closure, housing privatization, and demolition of facilities that are no longer needed. DOD is also leasing out underutilized real property to gain resources to repair or construct facilities. The House Armed Services Committee Report on the National Defense Authorization Act for Fiscal Year 2008 directed the Comptroller General to provide an analysis of DOD's use of its land use planning authorities. Specifically, GAO examined (1) how DOD has used its authorities; (2) the reasons why land, buildings, and facilities on DOD installations may appear to be underutilized or not utilized; and (3) the policies and procedures used by the services to respond to requests by other federal agencies for space at a DOD installation. GAO reviewed pertinent legislation and DOD and service policies, interviewed officials from DOD and all four services, and visited 10 installations from all four services.

Although many land use planning authorities currently exist that permit the Secretary of Defense, the secretaries of the military departments, or both to help make more efficient use of real property under their control, Section 2667 of Title 10, U.S. Code, leasing of nonexcess property of military departments, was used the most frequently—744 times from fiscal years 2005 through 2007. Under Section 2667 of Title 10, traditional short-term lease agreements are typically executed, but more financially complex, longer-term enhanced use leases are also executed. Section 2681 of Title 10, the authority to enter into contracts with commercial entities that desire to conduct commercial test and evaluation activities at a major range and test facility installation, was also used frequently, with 601 uses during fiscal years 2005 through 2007. GAO's analysis indicates that there are more than 30 authorities in the U.S. Code pertaining to DOD's utilization of real property. Service officials indicated that they have used these other authorities much less often and only for a limited number of leases or other transactions.

Land, buildings, and facilities on DOD installations may appear to be underutilized or not utilized for several reasons. For example, land that appears empty or underutilized often has a variety of restrictions and constraints placed upon its use, including setbacks for antiterrorism protection, mission requirements, safety zones, and environmental concerns. The services identified several reasons why buildings and facilities might be classified as underutilized or not utilized but still remain unavailable for other uses, including historical considerations.

Each of the military departments has similar policies and procedures in place for responding to requests for space on an installation from other federal agencies. Service officials told us that requests for space are submitted directly to the installation and should include information on facilities and land requirements, justification for selecting the proposed installation, and a statement of environmental impact.
An official request for space is reviewed at the installation level, and the installation commander makes a recommendation to the approving official, who differs depending on the service and the nature of the request.
Signed into law on May 9, 2014, the DATA Act expanded on previous federal transparency legislation to link federal agency spending to federal program activities so that taxpayers and policymakers can more effectively track federal spending. The DATA Act requires government-wide reporting on a greater variety of federal funds as well as tracking of these funds at multiple points in the federal spending lifecycle. The act also calls for the federal government to set government-wide data standards, identify ways to reduce reporting burdens for grantees and contractors (Section 5 Pilot), and regularly review data quality to help improve the transparency and accountability of federal spending data.

OMB and Treasury have taken significant steps toward implementing the act's various requirements, including standardizing data element definitions, issuing guidance to help agencies develop their implementation plans, and designing a pilot for developing recommendations to reduce recipient reporting burden. We have previously reported on these efforts and others and have identified a number of ongoing challenges that will need to be addressed in order to successfully meet the act's requirements. Throughout our ongoing oversight, we have coordinated closely with OMB and Treasury to provide timely feedback and have made a number of recommendations that, if addressed, could help ensure the full and effective implementation of the act. OMB and Treasury have made progress implementing 5 of our recommendations related to DATA Act implementation; however, additional effort is needed to address 11 previous GAO recommendations that remain open. See appendix II for a list of our previous recommendations relating to the DATA Act and their implementation status.

OMB and Treasury are developing a governance structure, but more work will be needed to ensure that this structure is consistent with key practices for developing and maintaining the integrity of data standards over time. In July 2015, we reported that OMB and Treasury took initial steps to develop organizational structures for project governance but had not yet established a formal framework for providing data governance throughout the lifecycle of developing and implementing standards. Such a framework is key to ensuring that the integrity of data standards is maintained over time. Accordingly, we recommended that OMB and Treasury establish a clear set of policies and procedures for developing and maintaining data standards that are consistent with leading practices. OMB and Treasury generally agreed with our recommendation and, in response, engaged a contractor to interview key stakeholders and develop a set of potential next steps. The first of these steps was to establish a new Data Standards Committee that will be responsible for maintaining established standards and developing new data elements or data definitions that could affect more than one functional community (e.g., financial management, financial assistance, and procurement). According to OMB staff, the Data Standards Committee held its inaugural meeting on September 15, 2016, and will meet on a monthly basis. The committee has also drafted a charter that will delineate the scope of the committee's work, as well as the composition and responsibilities of its members.
According to OMB staff, members include representatives from a range of federal communities, including the grants, procurement, financial management, and human resources communities, as well as representatives of several interagency councils, including the Chief Information Officers Council and the Performance Improvement Council. OMB staff told us that the committee will focus on clarifying existing data standard definitions, including the definition of predominant place of performance, and on identifying new standards that may be needed going forward. In October 2016, according to OMB staff, the charter was under review by the DATA Act Executive Steering Committee.

Several data governance models exist that could inform OMB's and Treasury's efforts to ensure the integrity of the data standards over time. These models define data governance as an institutionalized system of decision rights and accountabilities for planning, overseeing, and controlling data management. Many of these models promote a common set of key practices that include establishing clear policies and procedures for developing, managing, and enforcing data standards. A common set of key practices endorsed by standards-setting organizations recommends that data governance structures include the key practices shown in the text box below. We have shared these key practices with OMB and Treasury.

Key Practices for Data Governance Structures
i. Developing and approving data standards.
ii. Managing, controlling, monitoring, and enforcing consistent application of data standards.
iii. Making decisions about changes to existing data standards and resolving conflicts related to the application of data standards.
iv. Obtaining input from stakeholders and involving them in key decisions, as appropriate.
v. Delineating roles and responsibilities for decision-making and accountability, including roles and responsibilities for stakeholder input on key decisions.

OMB and Treasury have not yet institutionalized and clearly documented policies and procedures that are consistent with these key practices. For example, processes have not been developed either to approve new standards or to ensure that already established standards are consistently applied and enforced across the federal government. A robust, institutionalized data governance structure is important in part because it provides consistent data management during times of change and transition. The transition to a new administration presents one such situation. We have previously reported that, given the importance of continuity when implementing complex, government-wide initiatives, the potential for gaps in leadership as administrations change can impact the effectiveness and efficiency of such efforts, potentially resulting in delays and missed deadlines. Such transitions may disrupt the momentum for meeting implementation timeframes or cause the government to fail to build on previous accomplishments. The absence of a robust and institutionalized data governance structure presents additional risks to the integrity of data standards over time and to agencies' ability to meet their statutory timelines in the event that priorities shift with the incoming administration or momentum is lost.

In June 2016, OMB directed the 24 CFO Act agencies to update their initial DATA Act implementation plans that they submitted in response to OMB's May 2015 request.
Each agency was to (1) update its timeline and milestones and explain its progress to date and the remaining actions it would take to implement the act in accordance with the suggested steps in Treasury's DATA Act Implementation Playbook (Version 2.0) (Playbook 2.0), (2) report costs to date and estimated total future costs, and (3) explain any new challenges and mitigation strategies. In reviewing the 24 CFO Act agencies' implementation plan updates that we obtained from the agencies, we found the following:

Each of the 24 CFO Act agencies' updates included timelines and milestones, and most of the updates included most of the OMB-required information. For example, most of the 24 CFO Act agencies included remaining actions the agencies would take to implement the suggested steps in Playbook 2.0.

Some of the CFO Act agencies did not include information about some of the remaining actions to implement the suggested steps in Playbook 2.0. For example, 5 of the 24 CFO Act agencies did not include information about testing for completeness and accuracy of data elements submitted to Treasury, 11 CFO Act agencies did not include information about workflows for addressing validation errors and revisions needed to agency data submissions, and 13 CFO Act agencies did not include information about testing linkages of program and financial data or possible interim solutions to link such data, if needed. Without such information in agencies' updates, it may be more difficult for OMB and Treasury to determine where to target their monitoring and assistance efforts to help ensure the DATA Act is successfully implemented.

Our review of the CFO Act agencies' August 2016 implementation plan updates found that 21 of the 24 CFO Act agencies reported costs to date and future estimated costs to implement the DATA Act reporting requirements. One agency reported future estimated costs but did not report costs to date. Two agencies did not provide any cost estimates. Total cumulative and future estimated costs for full DATA Act implementation that were reported by 22 CFO Act agencies in their implementation plan updates ranged from approximately $1.0 million to $59.1 million, for a total of about $202.4 million. This total estimated cost reported by CFO Act agencies to implement the DATA Act includes costs for systems integration and modernization. It is important to note that the estimated total costs reported by CFO Act agencies to implement the DATA Act requirements are relatively small when compared to the almost $81 billion spent on information technology by the CFO Act agencies in fiscal year 2016 alone. See appendix III for more details about the information that OMB required CFO Act agencies to include in their implementation plan updates, remaining actions to implement the suggested steps in Playbook 2.0, and the number of CFO Act agencies that included the information. In July 2016, we reported on challenges agencies had included in their initial implementation plans. The implementation plan updates indicate that 19 of the 24 CFO Act agencies continue to face challenges in their efforts to implement the DATA Act. Based on our review of the 24 CFO Act agency implementation plan updates, we identified four overarching categories of challenges reported by agencies that may impede their ability to effectively and efficiently implement the DATA Act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance.
See table 4 in appendix III, which describes the categories of challenges and the number of CFO Act agencies reporting challenges in each category. Some of the challenges reported by the CFO Act agencies in their updates include the following:

Systems integration. Nineteen of the 24 CFO Act agencies reported challenges related to systems integration, which include concerns with systems limitations, modernization efforts, and timing. For example, one agency reported that validation presents challenges because its financial systems are not properly integrated with procurement and grant systems. Similarly, the agency reported that several of its components are undergoing grant, procurement, or financial system improvements that coincide with implementing the DATA Act, which could pose a risk to timely DATA Act implementation if the improvements are delayed. Another agency reported that, for one of its legacy systems, obtaining the unique identifier to generate award financial data will likely be a manual process. The lack of properly integrated systems increases the risk that agencies may have difficulty compiling and validating the information they are required to report under the DATA Act by the May 2017 reporting deadline for agencies to submit their financial and payment information.

Resources. Fourteen of the 24 CFO Act agencies reported challenges related to staffing issues or funding constraints. For example, one agency reported that expertise related to feeder systems and data will be needed to successfully implement the DATA Act, but such subject matter experts may not be available. Another agency reported that meeting the reporting deadline is highly dependent on receiving requisite funding and resources. The lack of sufficient resources, including staff expertise and proper funding, increases the risk that agencies may have difficulty taking all the actions needed in a timely manner to fully implement the requirements of the DATA Act.

Reporting. Thirteen of the 24 CFO Act agencies reported challenges related to mandatory DATA Act reporting requirements, including concerns with data quality and their ability to report all the required data elements in their initial DATA Act submissions, as well as senior accountable official (SAO) certification and reporting non-financial data. For example, one agency reported that its SAO may be unable to certify the quality of data if OMB's guidance for the SAO certification cannot be supported by existing processes. Another agency reported concerns with the burden of reconciling account data with financial and award data. In addition, two agencies reported challenges with reporting beginning balances at the level of detail required by the DATA Act. Two agencies reported concerns with protecting sensitive and classified data. One agency also reported ongoing issues with the inconsistent quality of data submitted from its financial systems. Another agency reported that certain data elements are not currently available for all document types and that it is considering pulling these data elements from other source systems to the extent possible. A lack of complete and accurate agency data increases the risk that agencies may not be able to meet the DATA Act reporting requirements within the mandated timeframes.

Guidance. Eleven of the 24 CFO Act agencies reported ongoing challenges related to the timely issuance of, and ongoing changes to, OMB policy and Treasury guidance.
Eight agencies reported that if policy or technical guidance continues to evolve or is delayed, the agencies' ability to comply with the May 2017 reporting deadline could be affected. Some agencies also reported concerns about the requirement for SAOs to certify the data reported quarterly. For example, one agency reported that if guidance clarifying certification procedures is delayed, it may not have time to implement appropriate validation steps needed to give assurance over the data. Because of the lack of timely and consistent guidance, agencies may need to continuously update or change their processes, which could adversely affect their ability to meet the DATA Act requirements. As noted above, the information reported by the CFO Act agencies in their implementation plan updates indicates that some agencies are at increased risk of not meeting the May 2017 reporting deadline because of these challenges. In addition, inspectors general for some agencies, such as the Departments of Labor and Housing and Urban Development, have issued readiness review reports that also indicate their respective agencies are at risk of not meeting the reporting deadline. As discussed further below, the technical software requirements for agency reporting are still evolving, so any changes to the technical requirements over the next few months could also affect agencies' abilities to meet the reporting deadline. In August 2016, in response to our prior recommendation, OMB established procedures for reviewing and using agency implementation plan updates, including procedures for identifying ongoing challenges. In its procedures document, OMB states that, since the submission of agencies' original plans, it has received input from a significant number of agency staff via office hours, e-mails, regular meetings, agency visits, and other methods regarding the challenges agencies are experiencing as they work toward implementation. OMB's document also states that it has worked to address these challenges and provide both policy and technical guidance as needed. Further, the document states that requiring agencies to update their plans will allow OMB to address challenges that agencies have not directly raised with OMB or that numerous agencies are experiencing. According to the procedures document, OMB will also be monitoring progress toward the statutory deadline and setting up meetings with any of the 24 CFO Act agencies that OMB identifies as being at risk of not meeting the implementation deadline. OMB will schedule these visits by reviewing the implementation plan updates and discerning which agencies appear to be experiencing the most challenges to implementation. To help address their challenges, 16 of the 24 CFO Act agencies reported in their implementation plan updates that they use certain mitigating strategies. Based on our review, we identified seven overarching categories of mitigating strategies reported by these agencies to address DATA Act implementation challenges: making changes to internal policies and procedures, leveraging existing resources, using external resources, continuing communications, employing manual and temporary workarounds, monitoring and developing guidance, and enhancing existing systems. These strategies, as a whole, were similar to the mitigating strategies reported by agencies in their initial implementation plans. The most commonly reported categories of mitigating strategies were changing internal policies and procedures and leveraging existing resources.
See table 5 in appendix III for descriptions of the categories of mitigating strategies and the number of CFO Act agencies that report using strategies from each category. In May 2016, in response to our prior recommendation, OMB released additional guidance on reporting financial and award information required under the act to address potential clarity, consistency, and quality issues with the definitions of standardized data elements. In January 2016, we reported that ensuring that data definitions are generally consistent with leading practices is important because limitations with the definitions could lead to inconsistent or inaccurate reporting, among other issues. We also reported that although the standardized data element definitions issued by OMB largely adhered to leading practices for establishing data definitions, several definitions had limitations that could lead to inconsistent reporting. While OMB's additional guidance addresses some of the limitations we identified, it does not address all the clarity issues we identified. Specifically, OMB's additional guidance addresses (1) reporting financial and award-level data, (2) establishing linkage between agency award and financial systems using a unique award identifier, and (3) providing assurances that data submitted to Treasury for publication on USASpending.gov are sufficiently valid and reliable. For example, OMB's Management Procedures Memorandum No. 2016-03 directs agencies to leverage existing procedures for providing assurances of the quality of their DATA Act data submissions and directs agency SAOs to provide reasonable assurance that their internal controls support the reliability and validity of the data submitted to Treasury for publication on USASpending.gov. OMB's memorandum notes that assurance means that, at a minimum, the data reported are based on appropriate internal control and risk management strategies identified in OMB Circular A-123. OMB expects that SAO assurance of the data through this process will mean that data submitted to Treasury by May 2017 comply with existing controls for ensuring data quality. However, our prior work has shown that relying on these quality assurance processes is not sufficient to address the accuracy and completeness challenges that we have previously identified. Additionally, as we reported in August 2016, Offices of Inspector General, which are required to assess the completeness, timeliness, quality, and accuracy of data submitted under the act, have expressed concerns about agencies' abilities to provide assurances of the quality of their data. The inspectors general are particularly concerned about their agencies' ability to provide quality assurances for data that are not directly provided by the agency, such as data submitted by non-federal entities that receive federal awards. To address these concerns, OMB released draft guidance in August 2016 that specifies DATA Act reporting responsibilities when an intragovernmental transfer (both allocation transfers and buy/sell transfers) is involved, explains how to report financial assistance awards with personally identifiable information (PII), and clarifies the SAO assurance process over the data submitted to the broker. OMB staff told us that this most recent policy guidance was drafted in response to questions and concerns reported by agencies in their implementation plan updates, as well as in meetings with senior OMB and Treasury officials intended to assess agency implementation status.
Among other challenges, agencies indicated the need for additional guidance on reporting intragovernmental transfers, providing assurances over their data, and reporting insurance information. For example, officials from USDA, one of our case example agencies, told us that they are waiting for guidance on insurance and indemnity reporting, but no guidance has been issued. Absent any new guidance, they plan to report insurance as they have under the Federal Funding Accountability and Transparency Act of 2006 (FFATA). OMB staff told us that they received feedback from 30 different agencies and reviewed over 200 comments on the draft guidance. The final guidance, OMB M-17-04, was issued on November 4, 2016. Although OMB has made some progress with these efforts, other data definitions—including primary place of performance and award description—still lack clarity, which needs to be addressed to ensure agencies report consistent and comparable data. These challenges, as well as the challenges identified by agencies, underscore the need for OMB and Treasury to fully address our prior recommendation to provide agencies with additional guidance to address potential clarity issues. OMB staff told us that the newly established Data Standards Committee will be responsible for developing guidance to provide additional operational clarity regarding these data definitions; however, they were unable to provide a specific timeframe for when this would be done. Treasury released the schema version 1.0 on April 29, 2016—4 months later than planned and approximately a year before reporting is required to begin under the act. The schema version 1.0 is intended to standardize the way financial assistance awards, contracts, and other financial data will be collected and reported under the DATA Act. Treasury expects that the schema version 1.0 will provide a stable base for agencies to develop the necessary data submission procedures. We have previously reported that a significant delay in releasing version 1.0 of the schema would likely have consequences for timely implementation of the act. Agencies are using schema version 1.0 to plan what changes are needed to systems and business processes to be able to capture and submit the required data. Under the act, agencies must report data in compliance with established standards by May 2017. Toward that end, OMB and Treasury have directed agencies to begin submitting data by the beginning of the second quarter of fiscal year 2017 (January 2017) with the intention of publicly reporting those data by May 2017. OMB's summary of agencies' implementation plan updates acknowledged that the delay in the release of schema version 1.0 delayed agency timelines for implementation. This document also recognized that the iterative approach being used to develop and release guidance has posed challenges for some agencies, as changes in the guidance may require them to rework some of their implementation project plans. Our analysis of the implementation plan updates submitted by the agencies to OMB confirms this. We found that 11 of the 24 CFO Act agencies highlighted challenges related to the guidance provided by OMB and Treasury in their implementation plan updates. One of the commonly cited challenges concerned complications arising from the iterative nature or late release of the guidance.
For example, one agency reported that developing its implementation plan was highly dependent upon the concurrent development of the schema version 1.0 and technical guidance being developed by Treasury. This agency stated that any delays or changes to these components will significantly affect its solution design, development and testing schedule, and cost estimate. A key component of the reporting framework laid out in the schema version 1.0 is the DATA Act Broker, a system to standardize data formatting and assist reporting agencies in validating their data prior to submitting them to Treasury. See figure 1 for a depiction of how Treasury expects the broker to operate. Treasury's software development team has been iteratively testing and developing the broker using what Treasury describes as an agile development process. Treasury released the first version of the broker in spring 2016, and it continues to develop the system's capabilities through 2-week software development cycles, called sprints. On September 30, 2016, Treasury released a version of the broker, which it stated was fully capable of performing the key functions of extracting and validating agency data. Treasury officials told us that although they plan to continue to refine the broker to improve its functionality and overall user experience, they have no plans to alter these key functions. According to Treasury guidance documents, agencies are expected to use the broker to upload three files containing data pulled from the agencies' internal financial and award management systems. These files will undergo two types of validation checks in the broker before being submitted to Treasury: data element validations and complex validations. Data element validations check whether data elements comply with specific format requirements, such as field type and character length. Complex validations perform tasks such as checking data against other sources or using calculation rules to verify whether certain data elements sum up to one another. Treasury has configured these complex validation rules so that if a rule is not met, the broker can either produce a warning message while still accepting the data for submission or produce a fatal error, which prevents submission of the data altogether. As of September 30, 2016, data uploaded to the broker need to successfully meet less than a quarter of these complex validation checks in order to be accepted for submission to Treasury. Treasury officials said that this choice was made in order to allow agencies more flexibility to test the broker and that the data submissions will be required to pass more of these validation rules at a later date. According to Treasury documents, in a future release of the broker, data uploaded to the broker will need to successfully meet about half of the complex validation checks in order to be accepted for submission to Treasury. Treasury officials said that having about half of the validation rules produce warnings rather than fatal errors would provide agency officials with the flexibility to correct issues flagged by the broker or not to do so, depending on their knowledge of the context and situation of specific data elements. For example, for some programs, grant award-level information may not be reported for security reasons.
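To illustrate the distinction between the two types of checks and the two severity levels, the following minimal Python sketch uses hypothetical rules and field names; it is not drawn from Treasury's broker code (which is published as open source) and is intended only to make concrete the behavior described above.

FATAL, WARNING = "fatal", "warning"

def check_format(record, field, field_type, max_length):
    # Data element validation: field type and character length.
    value = str(record.get(field, ""))
    if field_type is int and not value.isdigit():
        return f"{field}: expected digits, got {value!r}"
    if len(value) > max_length:
        return f"{field}: exceeds {max_length} characters"
    return None

def check_totals(record):
    # Complex validation: detail amounts must sum to the reported total.
    if record["obligation_total"] != sum(record["obligations_by_program"]):
        return "obligation_total does not equal the sum of program amounts"
    return None

# Each rule carries a severity: a fatal error blocks the submission,
# while a warning flags the issue but still accepts the data.
RULES = [
    (lambda r: check_format(r, "object_class", int, 3), FATAL),
    (lambda r: check_format(r, "award_description", str, 4000), WARNING),
    (check_totals, FATAL),
]

def validate(record):
    errors, warnings = [], []
    for rule, severity in RULES:
        message = rule(record)
        if message:
            (errors if severity == FATAL else warnings).append(message)
    return errors, warnings  # the file is accepted only if errors is empty

record = {
    "object_class": "255",
    "award_description": "Community health research grant",
    "obligation_total": 1000,
    "obligations_by_program": [400, 600],
}
errors, warnings = validate(record)
print("fatal:", errors, "warnings:", warnings)

In a design like this, moving a rule between the fatal and warning lists is a one-line configuration change, which mirrors Treasury's approach of increasing the share of fatal validations in later broker releases while leaving agency officials discretion over warnings.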
In addition to assisting agencies in collecting and validating agency-generated data, the broker also extracts award and sub-award information from existing government-wide award reporting systems and helps ensure these files are in the standard format. This function was added during software development efforts in late September and early October 2016. Unlike the files submitted by agencies, these extracted files with award and sub-award information are not subject to any validations in the broker. However, Treasury implemented additional validation checks on the file containing agency financial assistance award information through its source system, the Award Submission Portal. These checks include verifying that required information is present and formatted correctly. Treasury officials told us that the responsibility for ensuring the accuracy of these files lies with the DATA Act SAO at each agency. For example, OMB Management Procedures Memorandum 2016-03 specifies that SAOs must provide reasonable assurance that their internal controls support the reliability and validity of the agency account-level and award-level data submitted to Treasury for publication. Before final submission of the data files in the broker, the SAO is responsible for assuring that, at a minimum, the data reported are based on appropriate internal control and risk management strategies identified in OMB Circular A-123. Treasury officials said that if SAOs are not able to provide this assurance, their agency will be prevented from submitting the files and their data will not be included in the data reporting based on the current broker design. Currently, the broker does not allow agencies to submit their data with qualifications, such as known quality limitations, so data that do not completely meet the criteria for SAO assurance will not be reported, even with qualifications. OMB staff and Treasury officials said that they are reconsidering this position and are exploring ways that agencies can submit data with qualifications and how these qualifications can be conveyed to the public. Agencies have made progress toward creating their data submissions and testing them in the broker, but work remains to be done before actual reporting can begin. Treasury has made empty sample files available to agencies so that they can begin testing in the broker before they have finished building all of their data files. As of October 2016, 21 of 24 CFO Act agencies reported that they had begun testing their data files in the broker, but only the National Aeronautics and Space Administration had completed testing its data files in the broker and revising them accordingly. Treasury also collects data from the four shared service providers that are helping to manage data submissions for their agency clients. As of October 2016, two of these shared service providers reported to Treasury that they had finished building the data files for submission to the broker. In August 2016, we reported that the agencies we reviewed are relying on a series of software patches from their enterprise resource planning (ERP) vendors to facilitate their data submissions. ERP vendors are developing patches that will extract data to help their clients develop files that comply with DATA Act requirements. According to vendors, these patches will help link an agency's financial and award systems, create additional fields in existing systems to report new data elements, and extract data files formatted for submission to Treasury.
Patches that will facilitate the generation of agency file submissions were planned to be completed between August 2016 and February 2017. As of September 2016, the release of one of these patches had been delayed. Oracle, one of the ERP vendors developing these patches, had planned to release a patch that would allow award attributes to be captured in its clients' core purchasing systems and general ledger journals in August 2016, but the release was delayed until September 13, 2016. Representatives from SAP, another such ERP vendor, said that they were able to deliver one of the needed patches to their clients in August 2016 and an additional patch in October. But they also said that changes and adjustments to the broker had delayed their progress toward creating a patch that can format their clients' data files for submission. Until the patches have been implemented, it will be more difficult for agencies that rely on them to test their data files in the broker, since the patches are what will enable the agencies to construct and format the files for submission. Two agencies reported in their implementation plan updates that a delay in the release of the patches could jeopardize complete and timely data submission for May 2017. Treasury officials told us that agencies should still be able to create and submit the required files to the broker without these patches. These officials said that when designing the schema and broker, they chose to use a simple file format for data submissions so that agencies would be able to create these files without a specialized software solution. Treasury officials acknowledged that patches will make the submission process easier, but also pointed out that not every agency is able to take advantage of software patches. Some agencies reported in their implementation plan updates that they developed plans for interim solutions to construct these files until the patches can be developed, tested, and configured. However, some of these interim solutions rely on manual processing, which can be burdensome and increase the risk of errors. For example, USDA officials said that the effort to create an interim solution has been very resource intensive. This process involved surveying USDA's bureaus to identify how their systems are configured and using that information to modify the financial system. HHS has also developed an interim reporting solution that can generate the required files without depending on a patch. However, HHS officials said this interim solution is complex and their processes cannot be fully automated until the Oracle patch is released. Furthermore, since these processes are not fully automated, they carry a risk of errors being introduced through human error. Agencies that are developing interim solutions will only have until May 2017 to test the data before the reporting deadline. An OMB document commended these agencies for developing robust contingency plans, since this will better position them for timely implementation, but acknowledged that long-term reporting solutions are still needed. As required by the DATA Act, OMB is conducting a pilot program, known as the Section 5 Pilot, aimed at developing recommendations for reducing reporting burden for grantees and contractors. The Section 5 Pilot has two primary focus areas—federal grants and federal contracts (procurements).
OMB partnered with HHS to design and implement the grants portion of the pilot and with the General Services Administration (GSA) to implement the procurement portion. As the executing agent for the grants portion of the pilot, HHS developed six "test models" to evaluate different approaches to potentially reducing grantee reporting burden. On the procurement portion of the pilot, OMB's Office of Federal Procurement Policy (OFPP) worked with GSA's 18F to develop and test a proof-of-concept reporting portal for reports required by the Federal Acquisition Regulation (FAR) and is piloting it with the centralized reporting of certified payroll by contractors working on construction projects in the United States. In March 2016, a revised plan describing the design of the grants portion of the pilot was released, which updated the November 2015 version we previously reviewed. This was followed, in July 2016, by a revised version of the design for the procurement portion. See table 1 for a summary of the test models and components that make up the grants and procurement portions of the Section 5 Pilot. We determined that the updated design for both portions of the Section 5 Pilot meets the statutory requirements for the pilot established under the DATA Act. Specifically, the DATA Act requires that the pilot program include the following design features: (1) collect data during a 12-month reporting cycle; (2) include a diverse group of federal award recipients and, to the extent practicable, recipients that receive federal awards from multiple programs across multiple agencies; and (3) include a combination of federal contracts, grants, and subawards with an aggregate value between $1 billion and $2 billion. Based on our review of design documents as well as interviews with cognizant agency staff, there has been substantial improvement in this area since our last review, when the design lacked specifics in the procurement portion of the pilot, which made it difficult to determine whether the design of the overall pilot would meet these requirements. Both the grants and procurement portions of the pilot showed substantial improvements in the extent to which they reflect leading practices for pilot design (shown in the text box below). We found that HHS's March 2016 revised design for the grants portion of the pilot partly reflects all five of the leading practices for effective pilot design—an improvement from our prior assessment. For example, in our April 2016 review we found that the grants design lacked specific details regarding how the data would be analyzed and how conclusions would be reached about integrating the pilot activities into overall grant reporting efforts. Based on our feedback, OMB and HHS developed, prior to the start of data analysis, a plan for analyzing survey and other data. This plan specifies the types of quantitative and qualitative data analysis HHS intends to conduct for each test model and how that assessment links back to the stated hypotheses. HHS also added a sampling plan and information on participant outreach efforts to the design of the grants portion of the pilot, which helped it meet the leading practices for effective pilot design.

Leading Practices for Effective Pilot Design
1. Establish well-defined, appropriate, clear, and measurable objectives.
2. Clearly articulate an assessment methodology and data gathering strategy that addresses all components of the pilot program and includes key features of a sound plan.
3. Identify criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts.
4. Develop a detailed data-analysis plan to track the pilot program's implementation and performance, evaluate the final results of the project, and draw conclusions on whether, how, and when to integrate pilot activities into overall efforts.
5. Ensure appropriate two-way stakeholder communication and input at all stages of the pilot project, including design, implementation, data gathering, and assessment.

OMB's July 2016 revision of the design of the procurement portion of the Section 5 Pilot also showed substantial improvements in reflecting the leading practices for effective pilot design. Compared to the previous version, dated November 2015, we identified progress in several areas. For example, the revised procurement design identified hypotheses for each objective and contained objectives that were linked to metrics that should facilitate OMB's ability to collect appropriate evaluation data. The revised design also provides additional details regarding the procurement portion's intended assessment methodology. It specifies that participants will submit payroll information to the centralized test portal on a weekly basis and that OMB will use focus groups to collect qualitative data from agency staff who use these data for contract management and oversight purposes. Furthermore, the revised design includes a data-analysis plan that describes how OMB will collect, track, and analyze data produced by the pilot. Finally, the revised procurement design provides additional detail about how potential findings could be scaled from the experiences of the individual pilot participants to the larger population of contractors required to submit certified payroll reports in compliance with Davis-Bacon requirements. Toward that end, the revised procurement pilot design contains a sampling plan that provides criteria for selecting a diverse group of participants. We found some areas where the revised procurement design does not fully reflect leading practices for effective pilot design. These largely relate to how OMB intends to broaden the pilot's initial focus on centralizing certified payroll reporting to other types of FAR-required reporting. The procurement design presents a reasonable set of factors for why OMB decided to initially select certified payroll reporting for testing the potential usefulness of a centralized reporting portal to reduce reporting burden. However, the plan does not take the next step of clearly describing and documenting how findings related to centralized certified payroll reporting will be more broadly applicable to the many other types of required reporting under the FAR, beyond citing general concepts such as data pre-population and system integration. More specifically, the current design lacks a plan for testing the assumption that the experiences contractors have with centralized certified payroll reporting will be similar when they use the system to meet different reporting requirements and interact with other databases. This is of particular concern given the diversity of reporting requirements contained in the FAR. In fact, OMB staff have identified over 100 different types of FAR reporting requirements with different reporting frequencies, mechanisms, and required information.
OMB staff told us that they expect to test the centralized portal on other types of FAR-required reporting, and the revised design briefly mentions other FAR requirements, such as those for service contracts and affirmative action plans. However, the revised design does not provide any details on how this will be done. The absence of an assessment methodology and an approach to test the scalability of the design when applied to procurement reporting requirements beyond certified payroll reporting is inconsistent with leading practices for pilot design and raises questions about whether the pilot design will meet its stated objective of reducing procurement reporting burden more broadly. HHS has taken a number of steps to begin implementing the design of the grants portion of the pilot. For example, HHS is recruiting participants for all of the test models and has begun administering data collection instruments for them. HHS has engaged in a number of outreach efforts to recruit participants for its test models. HHS officials told us that they have attended an estimated 70 events since 2015 to discuss the grants pilot, during which they provided information to interested attendees on how to get involved. Additionally, as of August 2016, HHS officials reported e-mailing almost 8,000 potential participants and said they plan to e-mail additional prospects, if needed, to reach an established minimum number of participants for each test model. GSA's 18F completed a prototype for the procurement portion of the Section 5 Pilot at the end of May 2016 and presented it to OMB in June 2016. 18F's role was to explore how an electronic certified payroll reporting portal could reduce contractor burden for federal Davis-Bacon contracts. In August 2016, GSA's Federal Acquisition Service, the implementation lead for the pilot, awarded NuAxis a contract to build the reporting portal based on information obtained from the 18F prototype process. GSA officials told us that starting in September 2016, NuAxis began developing a web-based reporting interface that will allow users to centrally enter and submit certified payroll data. They plan to make this interface compatible with other existing systems, such as the System for Award Management (SAM) and Wage Determination Online (WDOL), to access relevant data sources. In late November 2016, OMB staff and GSA officials informed us that they had decided to delay launching the portal to conduct the procurement portion of the pilot in order to ensure that security procedures designed to protect personally identifiable information (PII) were in place. GSA officials told us that the centralized reporting portal that would be used to collect data on certified payroll did not receive the required Authority to Operate because it did not include necessary security measures to protect the PII that would be submitted by contractors participating in the pilot. Before the portal can be used to collect PII, GSA officials said they needed to issue a System of Records Notice and redesign the certified payroll reporting platform so that it conforms to agency security procedures. As a result of these additional steps, GSA officials expect to be able to begin collecting data through the centralized reporting portal sometime between late January 2017 and late February 2017. OMB staff said that despite the security-related delay, they still plan on collecting 12 months of data through the procurement pilot as required by the act.
In order to meet the act’s requirement that OMB deliver a report to Congress on ways to reduce recipient reporting burden by August 2017, OMB staff told us that they plan to only include data collected up to June or July 2017 in order to allow for sufficient time to analyze the results and incorporate them into the report’s findings. However, these staff said that that they plan to continue to collect data through the procurement portion of the pilot until they obtain a full 12 months of contractors’ experiences with centralized payroll reporting. Afterwards, OMB plans to analyze this data, compare it to the smaller data set produced for the August 2017 report to Congress and, if necessary, make any needed revisions to the findings and recommendations contained in the report previously submitted to Congress. Across the federal government, agencies have efforts under way to implement the DATA Act by the May 2017 deadline and the success of these efforts will depend on, among other things, OMB and Treasury’s efforts to address agency-reported challenges and build an infrastructure to effectively support government-wide implementation. OMB and Treasury have made progress but still need to fully address the recommendations we have made in our previous reports. For example, OMB and Treasury can build upon the initial step of establishing a data standards committee responsible for maintaining already established standards and identifying new standards towards the goal of establishing an institutionalized system of data management that follows key practices and ensures the integrity of the data standards over time. In this context, implementing our prior recommendations will be critical to OMB’s and Treasury’s progress. Among the areas where progress has been made in setting a foundation for successfully implementing the act is the Section 5 Pilot to reduce reporting burden. In particular, the design of the procurement portion of the pilot has improved substantially, including the extent to which it reflects leading practices of pilot design. However, despite advances in several areas, the current design remains limited by its lack of specifics regarding how a pilot focused on assessing contractors’ experiences with a centralized portal designed for certified payroll reporting will be applicable to many other federal procurement reporting requirements. By addressing issues such as this and continuing to focus on implementing the act, the administration greatly increases the likelihood of creating a system that will achieve the goals of the act—to increase the transparency of financial information and improve the usefulness of that data to Congress, federal managers, and the American people. In order to ensure that the procurement portion of the Section 5 Pilot better reflects leading practices for effective pilot design, we recommend that the Director of OMB clearly document in the pilot’s design how data collected through the centralized certified payroll reporting portal will be used to test hypotheses related to reducing reporting burden involving other procurement reporting requirements. This should include documenting the extent to which recommendations based on data collected for certified payroll reporting would be scalable to other FAR- required reporting and providing additional details about the methodology that would be used to assess this expanded capability in the future. 
We provided a draft of this report to the Secretaries of Agriculture, Health and Human Services, and the Treasury; the Director of OMB; the Chief Executive Officer of the Corporation for National and Community Service (CNCS); and the Administrator of the General Services Administration for review and comment. OMB, Treasury, CNCS, HHS, and GSA provided us with technical comments, which we incorporated as appropriate. USDA had no comments. OMB and Treasury also provided written comments, which are summarized below and reproduced in appendices IV and V, respectively. In its written comments, OMB provided an overview of its implementation efforts since the passage of the DATA Act. These efforts include issuing three memoranda providing implementation guidance to federal agencies; finalizing 57 data standards for use on USASpending.gov; establishing the Data Standards Committee to develop and maintain standards for federal spending; and developing and executing the Section 5 Pilot. The OMB response also noted that OMB and Treasury met with each of the 24 CFO Act agencies to discuss their implementation timelines, risks, and mitigation strategies, and took steps to address issues that could affect successful implementation. Through these meetings, OMB staff learned that 19 of the 24 CFO Act agencies expect that they will fully meet the May 2017 deadline for DATA Act implementation. OMB neither agreed nor disagreed with our recommendation. In its written comments, Treasury provided an overview of the steps it has taken to implement the DATA Act's requirements and to assist agencies in meeting their requirements under the act, including OMB's and Treasury's issuance of uniform data standards, Treasury's DATA Act Implementation Playbook, version 2.0, and the DATA Act Information Model Schema version 1.0. The Treasury response also noted that, as a result of the aggressive implementation timelines specified in the act and the complexity associated with linking hundreds of disconnected data elements across the federal government, Treasury decided to use an iterative approach to provide incremental technical guidance to agencies. According to Treasury, among other things, this iterative approach enabled agencies and other key stakeholders to provide feedback and contribute to improving the technical guidance and the public website. We are sending copies of this report to the Secretaries of Agriculture, Health and Human Services, and the Treasury; the Director of OMB; the Chief Executive Officer of CNCS; the Administrator of the General Services Administration; as well as interested congressional committees and other interested parties. This report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact J. Christopher Mihm at (202) 512-6806 or [email protected] or Paula M. Rascona at (202) 512-9816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix VI. This review is part of an ongoing effort to provide interim reports on the progress being made in implementing the Digital Accountability and Transparency Act of 2014 (DATA Act), while also meeting our reporting requirements mandated by the act.
This report examines (1) steps taken to establish a clear data governance structure, which is particularly important during the upcoming transition to a new administration; (2) challenges reported by Chief Financial Officers Act of 1990 (CFO Act) agencies in their implementation plan updates; (3) the operationalization of government-wide data standards and the technical specifications for data reporting; and (4) updated designs for the Section 5 Pilot for reducing recipient reporting burden and progress made in its implementation. To describe the Office of Management and Budget's (OMB) and the Department of the Treasury's (Treasury) efforts to implement a data governance structure for the DATA Act, we identified common key practices for establishing effective data governance structures. To identify key practices for data governance, we reviewed our past reports to identify applicable laws, regulations, and guidance, as well as reports from other entities that could inform our work. To select the sources we used to identify key practices for establishing an effective data governance program, we identified organizations that had data governance expertise, had previously published work on data governance, were frequently cited as a primary source, or had some combination of these qualifications. In addition, because the DATA Act requires that established data standards incorporate widely accepted common data elements such as those developed by international voluntary consensus standards bodies, federal agencies with authority over contracting and financial assistance, and accounting standards organizations, we selected a range of organizations, including domestic and international standards-setting organizations, industry groups or associations, and federal agencies, to ensure we had a comprehensive understanding of data governance key practices across several domains. All of the organizations we identified endorse establishing and using a governance structure to oversee how data standards, digital content, and other data assets are developed, managed, and implemented. Based on these selection factors, we drew on work from the following organizations to help us identify data governance key practices: American Institute of Certified Public Accountants, American National Standards Institute, Carnegie Mellon University's Software Engineering Institute, Data Governance Institute, Data Management Association International, Oracle, National Association of State Chief Information Officers, National Institute of Standards and Technology, Digital Services Advisory Group, and the Department of Education's Privacy Technical Assistance Center. We also met with OMB and Treasury to obtain information on the status of their efforts to address our previous recommendation that they establish a data governance structure. To determine the implementation challenges reported by CFO Act agencies in their DATA Act implementation plan updates, we requested and received the updates from the 24 CFO Act agencies.
We reviewed these implementation plan updates and assessed the information against OMB's requirements and the revised guidance in Treasury's DATA Act Implementation Playbook (Version 2.0) (Playbook 2.0) to determine whether the updates contained the information required by OMB—(1) an updated timeline and milestones with an explanation of the agency's progress to date and the remaining actions it would take to implement the act in accordance with the suggested steps in Playbook 2.0, (2) costs to date and estimated total future costs, and (3) an explanation of any new challenges and mitigation strategies. We analyzed the agency-reported challenges and mitigating strategies and categorized them. We compared the categories of challenges reported by the CFO Act agencies in their implementation plan updates to the challenges that had been reported in their initial implementation plans in 2015 to identify any new categories of challenges. We interviewed cognizant OMB staff and Treasury officials and obtained supporting documentation to further understand the implementation challenges reported by agencies in their implementation plan updates and OMB's and Treasury's processes and controls for reviewing the updated implementation information and monitoring agencies' progress. We also met with OMB and Treasury to obtain information on the status of efforts to address our previous recommendations related to agency implementation plans. To assess efforts to date to operationalize government-wide standards, we reviewed OMB policy guidance intended to facilitate agency reporting as well as guidance intended to respond to agency requests that OMB clarify how to report specific transactions. We also interviewed OMB staff and Treasury officials to obtain information about plans for additional guidance as well as to assess the extent to which issued guidance is responsive to agency questions, requests for additional clarity on their reporting requirements, or both. We met with OMB and Treasury to obtain information on the status of efforts to address our previous recommendation related to the provision of policy guidance. To examine the technical structure and specifications for reporting, we assessed Treasury's processes for developing technical guidance and reviewed applicable technical documentation related to the schema version 1.0 and the broker. We reviewed the broker made available by Treasury through open-source code posted on a public website (GitHub repositories associated with the DATA Act) in order to understand its functionality and validations. In addition, we observed several demonstrations of how agencies submit their data to a prototype of the broker and the feedback produced by the system regarding data verification. We also interviewed knowledgeable officials from OMB, Treasury, and selected federal agencies and inspectors general, as well as enterprise resource planning (ERP) vendors assisting federal agencies with technical implementation. To obtain specific information on how agencies use the technical guidance, we selected three agencies based on whether they were in compliance with existing federal requirements for federal financial management systems, the type of federal funding provided (such as grants, loans, or procurements), and their status as a federal shared service provider for financial management.
Based on these selection factors, we chose the Department of Health and Human Services (HHS), the Department of Agriculture (USDA), and the Corporation for National and Community Service (CNCS). Although the information obtained from these three agencies is not generalizable to all agencies, it illustrates a range of conditions under which agencies are implementing the act. These are the same three agencies we selected for our January 2016 and August 2016 reports. This allowed us to assess progress in DATA Act implementation at these agencies since our last review. At each agency, we reviewed DATA Act implementation plan updates and interviewed officials responsible for implementation and DATA Act implementation team members. We met with OMB and Treasury to obtain information on the status of efforts to address our recommendation related to providing technical guidance. To assess whether the Section 5 Pilot designs meet statutory design requirements, we reviewed Section 5 of the Federal Funding Accountability and Transparency Act of 2006, as amended by the DATA Act, to understand the deadlines and design requirements. We reviewed the draft design documents to assess OMB and its partners' plans for meeting these requirements. To supplement our review of those plans, we also spoke with cognizant staff implementing these pilots at OMB, HHS, and the General Services Administration (GSA). To assess the extent to which the Section 5 Pilot designs adhered to leading practices for effective pilot design, we reviewed the documented designs for both the grants and procurement portions of the pilot. To evaluate the grants portion of the pilot, we reviewed the draft design document from March 2016 as well as data collection instruments such as surveys and quizzes. We supplemented our assessment with information HHS officials provided to us during subsequent interviews, as appropriate. For the procurement portion, we reviewed the draft design document from July 2016. Additionally, we supplemented our assessment with information officials from OMB's Office of Federal Procurement Policy (OFPP) provided to us during subsequent interviews, as appropriate. To assess the grants and procurement portions of the pilot, we applied the five leading practices for effective pilot design we identified to both portions' design documents. Each of these analyst assessments was subsequently verified by a second analyst. We determined that the design met the criteria when we saw evidence that all aspects of a leading practice were met. When we were unable to assess whether all aspects of a leading practice were met, we determined that the design partially met the criteria. Finally, when we saw no evidence of a leading practice, or if we identified a critical gap or shortcoming related to the practice, we determined that the criteria were not met. In continuation of our constructive engagement approach for working with agencies implementing the DATA Act, we provided HHS and OMB with feedback on the design of the grants and procurement portions of the pilot during our review. HHS and OMB officials generally accepted our feedback as useful and, in some instances, noted that they had made or planned to make changes to their designs as a result of our input. We also met with OMB to obtain information on the status of efforts to address our recommendation related to the design of the pilot for reducing recipient reporting burden.
We conducted the work upon which this report is based from May 2016 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In May 2015, the Office of Management and Budget (OMB) directed federal agencies to submit Digital Accountability and Transparency Act of 2014 (DATA Act) implementation plans to OMB concurrent with the agencies' fiscal year 2017 budget requests. In June 2015, the Department of the Treasury (Treasury) issued guidance—DATA Act Implementation Playbook (Version 1.0)—to help agencies prepare their implementation plans. We reviewed these implementation plans and, on July 29, 2016, issued a report on the results of our review. In June 2016, OMB directed Chief Financial Officers Act of 1990 (CFO Act) agencies to submit updates to their initial DATA Act implementation plans by August 12, 2016. The updates were to (1) update timelines and milestones and explain the agency's progress to date and the remaining actions it would take to implement the act in accordance with the suggested steps in the DATA Act Implementation Playbook (Version 2.0) (Playbook 2.0), (2) report costs to date and estimated total future costs, and (3) explain any new challenges and mitigation strategies. Treasury's DATA Act Implementation Playbook (Version 1.0) contained eight suggested steps and a timeline for agencies to use as they began to develop their DATA Act implementation plans. Steps 1 through 4 were to be completed by September 2015. However, as of October 2016, only 16 of the 24 CFO Act agencies reported that they had completed all of steps 1 through 4. For example, four agencies reported that they had not completed their inventory of data and identified the gaps in systems and processes for data elements (step 3). DATA Act Implementation Playbook (Version 1.0) indicated that agencies would be working on steps 5 through 8 throughout fiscal years 2016 and 2017. Playbook 2.0—issued June 24, 2016—includes, among other things, expanded guidance on actions to be included in steps 5 through 8. Playbook 2.0 did not include expected timeframes for agencies to complete each step; rather, it referred agencies to Treasury's implementation roadmap, which includes high-level milestones for Treasury's technical deliverables. Playbook 2.0 states that agencies can use the milestones in Treasury's implementation roadmap to help determine their own implementation milestones. Descriptions of steps 5 through 8 from Playbook 2.0 follow:

Step 5: Prepare Data for Submission to the Broker. This step involves reviewing the schema version 1.0, extracting data from source systems, mapping agency data to the schema version 1.0, and implementing system changes as needed to collect and link data.

Step 6: Test Broker Outputs and Ensure Data are Valid. Agencies may use the broker to verify the data files they plan to submit to Treasury. The broker uses validation rules to test the completeness and accuracy of the data elements and linkages between financial and award data. The broker also tests whether the data pass basic validations within the schema version 1.0.

Step 7: Update Data. This step involves updating information and systems.
If data does not pass validation (see Step 6), the broker will provide error details to the agency. The agency should then reference the authoritative data sources and address the discrepancies. Step 8: Submit Data. Once the data is linked, validated, and standardized, agencies are to submit the data to Treasury for posting on USASpending.gov or a successor system. Agency senior accountable officials (SAO) are to provide reasonable assurance that their internal controls support the reliability and validity of the agency account-level and award-level data they submit to Treasury. This assurance is to be provided quarterly with data submissions beginning with fiscal year 2017 second quarter data. The SAO assurance means, at a minimum, that data reported are based on appropriate internal controls and risk management strategies identified in OMB Circular A-123. Table 3 shows the information that OMB required CFO Act agencies to include in their implementation plan updates, information on remaining actions the agencies should take to implement suggested steps 5 through 8 in Playbook 2.0, and the number of CFO Act agencies that included the information. Table 4 describes the categories of challenges reported by 19 of the 24 CFO Act agencies in their implementation plan updates and the number of agencies reporting challenges in each category. Five CFO Act agencies did not identify any challenges in their implementation plan updates. Table 5 describes the mitigating strategies reported by 16 of the 24 CFO Act agencies in their implementation plan updates and the number of agencies reporting mitigating strategies in each category. In addition to the above contacts, Peter Del Toro (Assistant Director), Michael LaForge (Assistant Director), Kathleen Drennan (analyst-in-charge), Diane Morris (analyst-in-charge), Michelle Sager, Shirley Hwang, Aaron Colsher, Katherine Morris, Sophia Tan, Thomas Hackney, Charles Jones, Laura Pacheco, Maria Belaval, Carrol Warfield, Jr., Mark Canter, James Sweetman, Jr., Andrew J. Stephens, Carl Ramirez, and Jenny Chanley made major contributions to this report. Additional members of GAO's DATA Act Internal Working Group also contributed to the development of this report.
Effective implementation of the DATA Act will allow federal funds to be better tracked and greatly increase the types of data made publicly available. OMB and Treasury have taken significant steps to implement the act, but challenges remain as the critical deadline of May 2017 approaches. Consistent with GAO's mandate under the act, this report is one in a series of products GAO will provide to Congress providing oversight of DATA Act implementation. This report examines (1) steps taken to establish a clear data governance structure, which is important during the upcoming transition to a new administration; (2) challenges reported by major agencies in their implementation plan updates; (3) the operationalization of government-wide data standards and technical specifications for data reporting; and (4) updated designs for the Section 5 pilot for reducing recipient reporting burden and progress made in its implementation. GAO reviewed key implementation documents, compared the Section 5 pilot to leading practices, and interviewed staff at OMB, Treasury, and other selected agencies. Data governance and the transition to the new administration. Consistent with a July 2015 GAO recommendation to establish clear policies and processes that follow leading practices for data governance under the Digital Accountability and Transparency Act of 2014 (DATA Act), the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have taken the initial step of convening a committee to maintain established standards and identify new standards. Although this represents progress, more needs to be done to establish a data governance structure that is consistent with key practices to ensure the integrity of standards over time. The upcoming transition to a new administration presents risks to implementing the DATA Act, potentially including shifted priorities or lost momentum. The lack of a data governance structure for managing efforts going forward jeopardizes the ability to sustain progress as priorities shift over time. Implementation plan updates. The 24 Chief Financial Officers Act agencies continue to face challenges implementing the DATA Act, according to information in their implementation plan updates. GAO identified four categories of challenges reported by agencies that may impede their ability to implement the act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance. To address these challenges, agencies reported changing internal policies and procedures, leveraging existing resources, and using external resources and manual and temporary workarounds, among other actions. Operationalizing data standards and technical specifications for data reporting. OMB issued additional guidance on how agencies should report data involving specific transactions, such as intragovernmental transfers, and how agencies should provide quality assurances for submitted data. However, this guidance does not provide sufficient detail in some areas, such as the process for providing assurance on data submissions, and does not address other areas, such as how agencies should operationalize the definitions for data elements (e.g., primary place of performance and award description). Treasury released a new version of the DATA Act Broker—a system that collects and validates agency data—in October 2016 and is making minor adjustments to its functionality.
Agencies have reported making progress creating and testing their data submissions, but some report needing to rely on interim solutions for initial reporting while they wait for automated processes to be developed. Pilot to reduce recipient reporting burden. GAO reviewed the revised designs for both the grants and procurement portions of the pilot and found that they partially met each of the leading practices for effective pilot design. Although this represented significant progress since April 2016, GAO identified an area where further improvement is still needed. Specifically, the plan for the procurement portion of the pilot does not clearly document how findings related to the centralized certified payroll reporting portal will be applicable to other types of required procurement reporting. This is a particular concern given the diversity of federal procurement reporting requirements. To date, all six components of the grants portion are underway. Data collection for the procurement portion is delayed and is not expected to begin until January or February 2017. GAO is making one new recommendation: that for the Section 5 pilot, OMB clearly document in its design of the procurement portion how data collected through the centralized certified payroll reporting portal will be applied to other required procurement reporting. Moving forward, additional progress needs to be made to address GAO's 11 previous DATA Act recommendations that remain open. OMB neither agreed nor disagreed with GAO's recommendation.
OPM may waive application of the salary offset requirement for reemployed annuitants—thereby permitting dual compensation—pursuant to 5 U.S.C. § 8344(i) and § 8468(f). Under these authorities, agencies may request that OPM waive the offset requirement for individuals on a case-by-case basis or that OPM delegate to the agency head the authority to waive the offset requirement for individuals on a case-by-case basis. The statutes provide OPM the authority to consider and approve agencies' waiver requests. OPM evaluates and may approve agency requests for waivers on a case-by-case basis for four authorized purposes: severe recruiting difficulty, need to retain an individual, emergency hiring needs, and other unusual circumstances. OPM evaluates and may approve agency requests for delegation of authority to waive salary offsets on a case-by-case basis for two authorized purposes: emergency hiring needs and other unusual circumstances. Agencies requesting waivers or waiver authority must provide purpose-specific information about the circumstances for which the waiver or authority is requested, as summarized in table 1. In order to obtain delegated authority to waive salary offsets, an agency must describe an emergency or other unusual circumstance comparable to the third and fourth categories above. In addition to the purpose-specific information, agencies must identify the occupations, grades, and locations of positions that might be filled under the delegated authority and provide a statement of expected duration of reemployment to be approved under the requested authority. For individual requests, agencies must identify the individual for whom the exception is requested, the appointing authority to be used, and the position to which he or she will be appointed. In addition, the agency must demonstrate that the annuitant will not agree to reemployment without the waiver. Agencies may also seek to extend previously granted delegated authority. Under 5 C.F.R. § 553.201(g), agencies may also request extensions of previously granted individual waivers. Agencies must show that the conditions justifying the original waiver still exist. During this process, OPM asks the agency why an extension is needed and why other staffing options were unavailable. If the delegation agreement between OPM and the agency permits the agency to renew individual waivers it has granted, the agency may grant extensions of those waivers. There are other authorities permitting dual compensation for which OPM does not exercise approval authority or otherwise regulate. For example, government-wide waiver authority was contained in the National Defense Authorization Act (NDAA) for Fiscal Year 2010 and allows agencies to waive offset requirements on a temporary basis. Although that authority was set to expire in October 2014, Congress recently passed an extension of that authority through December 2019. In 2012, we reported on the use of the NDAA authority and a few examples of other waiver authorities that OPM does not manage, including waiver authorities unique to Foreign Service annuitants and the Nuclear Regulatory Commission. Additionally, the Department of Defense (DOD) does not seek waiver approval from OPM because DOD has its own authority that permits the reemployment of annuitants without subjecting salaries to offset.
Our analysis of OPM's data indicates that agencies' use of reemployed annuitants has increased, with the number of on-board uniformed and civil service annuitants rising from over 95,000 in September 2004 to around 171,000 in September 2013 (from about 5 percent to 8 percent of the federal workforce). This is inclusive of reemployed annuitants with and without dual compensation waivers, as well as retired uniformed service members whose retirement or retainer pay is not subject to reduction. More than half of these reemployed civilian annuitants, specifically DOD's civil service reemployed annuitants, would not be covered under OPM's waiver authority. DOD accounted for 83 percent of the increase in annuitants from 2004 to 2013; of this increase, approximately 3 percent were civil service annuitants and about 98 percent were uniformed service annuitants. In comparison, our analysis of OPM's data found that the overall size of the permanent career federal workforce, as reflected in the number of employees in the 24 Chief Financial Officers (CFO) Act agencies, increased about 11 percent over the same period and that DOD accounted for about 40 percent of this increase. The increase in reemployed annuitants reflects agencies' greater reliance on all types of annuitants, including former uniformed service members covered by military pension systems as well as retirees from federal civilian service covered by the Federal Employees' Retirement System (FERS) and the Civil Service Retirement System (CSRS). Use of annuitants was concentrated in the three largest agencies—DOD, Veterans Affairs (VA), and Homeland Security (DHS)—which collectively employed about 92 percent of annuitants in 2013, with nearly 80 percent employed at DOD alone. Most of the increased reliance on annuitants in DOD is tied to reemployment of uniformed service members: 98 percent of DOD's annuitants were retired uniformed service members. Our analysis also indicates that the other 21 CFO Act agencies saw increases in reemployed annuitants as well. For these agencies, the number of on-board annuitants increased from over 23,000 in 2004 to about 36,000 in 2013 (about 2 to 3 percent of these agencies' workforce). Although these agencies collectively relied more heavily on civil service annuitants than VA and DHS, reemployed uniformed service members still comprised nearly 79 percent of the on-board annuitants among these agencies in 2013. Greater reliance on annuitants suggests recent losses in key staff and institutional knowledge due to retirement. The number of voluntary retirements at the 24 CFO Act agencies increased in recent years, from 41,735 employees in 2004 to 58,313 in 2013 (2.4 to 3 percent of these agencies' workforce). In addition, many of these agencies experienced hiring freezes between 2011 and 2013, limiting their options for replacing staff who retired or separated for other reasons. In response to these circumstances and the increasing size of the retirement-eligible workforce—about 30 percent eligible to retire within 5 years—agencies appear to have turned to annuitants to bridge potential staffing gaps. Figure 1 shows the number of annuitants in the federal workforce from 2004 to 2013. Our analysis of OPM data shows that in 2013, 83 percent of on-board annuitants were in administrative, technical, or professional occupations, which include positions related to administration and management, information technology, and engineering, among others.
Among civil service annuitants on-board in 2013, aggregate annualized salaries were highest in DOD, at $113.4 million (0.2 percent of DOD employees' aggregate salaries) compared to $246.6 million among the other CFO Act agencies collectively (or 0.2 percent of employees' aggregate salaries). Among uniformed service annuitants on-board in 2013, aggregate annualized salaries were also highest in DOD, at $10.2 billion (18.9 percent of DOD employees' aggregate salaries) compared to $2.3 billion among the other CFO Act agencies collectively (2.3 percent of employees' aggregate salaries). Similar to career employees, reemployed annuitants generally had full-time schedules. However, most civil service annuitants were also on term-limited appointments, generally serving from one to five years after retirement. Figure 2 shows the salary costs of annuitants as a percentage of agencies' aggregate annualized salary rates. OPM officials said that they do not conduct trend analysis of dual compensation waiver requests because each waiver is so unique that there is no trend or pattern to analyze. However, in our review of the small sample of 16 waiver request submissions provided by OPM, we found that "other unusual circumstance" was among the most often cited reasons for requesting a waiver and that agencies were requesting waivers for individuals in administrative or professional occupations. An example of an unusual circumstance cited by agencies requesting a delegation is an urgent need to rehire annuitants to support the hiring of critical staff. For example, DHS cited a need to rehire investigative program specialists to support the hiring of law enforcement officers to meet a congressional mandate. In another example, OPM cited a need to hire retired judges to help review applications for administrative law judge vacancies. This suggests that there may be some benefit to analyzing these waivers because there may be trends of which OPM is currently unaware. While there is no specific statutory requirement for OPM to conduct trend analysis, without such analysis OPM may be missing opportunities to use this information to help guide the human capital management tools and guidance it develops and provides to agencies government-wide. Ensuring that OPM is identifying challenges and assisting agencies as issues emerge is especially important given the increasing number of retirement-eligible employees across the federal government. As we have previously reported, unanticipated retirements could cause skills gaps to widen further and adversely affect agencies' ability to carry out their diverse responsibilities. With regard to guidance provided to agencies, OPM officials said that they occasionally identify or provide other tools or resources for human capital workforce management to agencies requesting waivers, but they do not do so routinely. For example, OPM may provide information on advertising tools or other resources to agencies experiencing difficulty hiring qualified candidates. OPM officials said that agency officials submitting waiver requests are generally familiar with OPM's tools or guidance. However, we have previously found that agencies' chief human capital officers were either unfamiliar with some OPM tools or guidance, or found that the tools or guidance fell short of their agencies' needs.
OPM officials said that on infrequent occasions they refer agencies that make repeated requests to extend dual compensation waivers to OPM's workforce planning division for consultation on how to use its workforce management tools more strategically. As we have recently reported, in an era of limited fiscal resources, it is critical that OPM and agencies develop and use the most cost-effective tools to ensure that agencies can meet their missions. We found that OPM lacks effective policies and procedures for documenting waiver requests, which may hamper its ability to conduct trend analysis. OPM officials said that they do not have a systematic and reliable process for maintaining dual compensation waiver documentation. Specifically, OPM officials said they do not have a standard policy for how dual compensation waivers are labeled or saved and, therefore, must individually review thousands of electronic documents in their document management system database to identify the waiver records. Officials said the waiver requests and supporting materials are submitted to OPM and assigned to individuals for preliminary review and analysis. OPM staff save these in the document management system but do so inconsistently, sometimes merging the request and documentation and sometimes saving the evidence separately, without any standard labeling. Officials said that staff create a routing slip, called an executive decision summary, for each file. OPM staff use the routing slip to record the names of officials and the dates of their review to recommend approval or denial. However, the routing slip may or may not be saved with the corresponding waiver materials and does not include summary information about the waiver request. Federal internal control standards state that agencies should clearly document significant transactions and events and that the documentation should be readily available for examination. These actions help organizations run their operations efficiently and effectively, report reliable information about their operations, and comply with applicable laws and regulations. Agencies can achieve this by developing and implementing policies ensuring accountability for records, appropriate documentation of transactions, and sufficient information and communication about programs. However, OPM does not have such a policy to guide its management of the dual compensation waiver files. As a result, OPM was unable to retrieve these files in a timely manner for our review. According to OPM officials, OPM does not monitor an agency's implementation of an individual dual compensation waiver once the waiver is granted. The officials said OPM may require agencies to submit documentation before approving delegated waiver authority in order to determine whether the agency is complying with relevant requirements. OPM officials said OPM's role is limited to application review and approval of dual compensation waiver requests and extensions, and that it does not have a role in their implementation or oversight. OPM officials also said individual and delegated waiver requests may be approved pending specific actions first taken by the requesting agency. However, OPM officials said it is the requesting agency's responsibility to ensure that it meets the conditions outlined in the dual compensation waiver approval letter.
Officials said there was one exception—OPM requires and reviews evidence from agencies requesting approval to extend a previously granted waiver beyond the original term to determine if the circumstances justifying the waiver still exist. The statutory provisions authorizing OPM to grant individual and delegated waiver requests do not specifically require OPM to conduct oversight or monitoring of how agencies implement the authority granted by OPM. However, OPM is generally required to maintain oversight over delegated activities under 5 U.S.C. § 1104(b)(2). Accordingly, OPM regulations recognize the need for some oversight where OPM delegates waiver authority to an agency with no time limit on that grant of authority. In those instances, OPM regulations state that it may terminate an agency's delegated authority if it determines that the circumstances justifying the delegation have changed substantially, or if the agency has failed to manage the authority in accordance with the law, regulations, or the terms of the agreement. OPM officials stated that they do establish time limits on delegation agreements and, in the one delegated waiver example OPM provided for our review, the waiver was authorized for a specific period. Given OPM's document management challenges, as previously discussed, OPM was unable to provide us with a representative sample of waiver approval letters to determine whether OPM consistently established time limits on the delegation of waiver authority provided to agencies and, if not, whether there were instances where monitoring or oversight was necessary. Given the budgetary and long-term fiscal challenges facing the nation, agencies must identify options to meet their missions with fewer resources. While federal agencies shoulder this responsibility, OPM, through its authority to review and approve dual compensation waivers, as well as its responsibility to assist agencies with all aspects of human capital management, should identify trends in waiver use and develop cost-effective human capital tools and resources, where appropriate. These objectives cannot be achieved without analysis of dual compensation waiver information. However, OPM has not developed adequate policies and procedures for the management of dual compensation waiver documentation. Given the increasing use of reemployed annuitants and the impending wave of retirements, OPM is missing an opportunity to leverage the information gained through the review and approval of dual compensation waivers to inform and improve upon the assistance it provides federal agencies in their management of human capital. To improve OPM's assistance to agencies and management of its dual compensation waiver program, we recommend that the Director of OPM take the following two actions: 1. Analyze dual compensation waivers to identify trends that can inform OPM's human capital management tools. 2. Establish policies and procedures for documenting the dual compensation waiver review process. We provided a draft of this product to OPM for review and comment. In written comments, which are reprinted in appendix II, OPM did not concur with one recommendation and partially concurred with the other. OPM also provided technical comments, which we incorporated as appropriate. OPM stated that it did not concur with our recommendation to analyze dual compensation waivers to identify trends that can inform OPM's human capital management tools.
OPM noted that the waivers are authorized for specific purposes and that the statute does not require OPM to conduct any trend analysis. OPM also noted that it does not grant a large number of waivers and that patterns are identified when particular circumstances, such as natural disasters, prompt agencies to seek waivers for similar issues. As noted in the report, we agree that there are clearly defined purposes and that there is no statutory requirement for OPM to conduct a trend analysis. While our analysis did find that most rehired annuitants were likely hired under an authority maintained by the Department of Defense, OPM was unable to provide evidence of the number of individual or delegated waivers that it had approved in any year, including currently active waivers. Further, given the likelihood of future agency requests for dual compensation waivers for natural disasters, the patterns OPM identified after Hurricane Katrina and potential lessons learned are evidence of the kind of insight that could be informing OPM's other human capital management tools or resources. We continue to believe that OPM should analyze waivers and identify trends that could improve its other tools. OPM stated that it partially concurred with our recommendation to establish policies and procedures for documenting the dual compensation waiver review process. OPM noted that it has policies and procedures for adjudicating waivers and that it is in compliance with National Archives and Records Administration policies. However, OPM was unable to provide evidence of any such policies and procedures. In fact, OPM could not demonstrate adherence to federal internal control standards stating that agencies should clearly document significant transactions and events and that the documentation should be readily available for examination. Further, while OPM was ultimately able to produce 16 waiver decision letters, it was unable to provide a single complete agency waiver application along with the supporting documentation and corresponding OPM decision letter. OPM also could not identify the total number of waivers for any given time period, meaning that even if OPM individually reviewed the thousands of documents in its document management system, it would not know if all materials were maintained appropriately. We continue to believe that OPM should take action to fully address this recommendation and comply with federal internal control standards. We are sending copies of this report to the appropriate congressional committees and to the Director of the Office of Personnel Management. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the Committee on Homeland Security and Governmental Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2717 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To analyze reemployed annuitant trends, we used the Office of Personnel Management's (OPM) Enterprise Human Resources Integration (EHRI) Statistical Data Mart, which contains personnel action and on-board data for most federal civilian employees. We analyzed agency-level EHRI data for the 24 Chief Financial Officers (CFO) Act agencies, which represent the major departments (such as the Department of Defense) and most of the executive branch workforce. We analyzed EHRI data starting with fiscal year 2004 because personnel data for the Department of Homeland Security (which was formed in 2003) had stabilized by 2004. We selected 2013 as the endpoint because it was the most recent complete fiscal year of data available during most of our review. We classified annuitants in two ways: (1) military-only annuitants (retired uniformed service officers or enlisted members who are receiving retired or retainer pay) and (2) military and Federal Employees' Retirement System or Civil Service Retirement System annuitants (including individuals with all valid EHRI annuitant codes). We analyzed on-board trends for most of the executive branch workforce, including temporary and term-limited employees. However, we focused on career permanent employees in our analysis of separation trends and retirement eligibility because these employees comprise most of the federal workforce and become eligible to retire with a pension, for which temporary and term-limited employees are ineligible. To calculate the number of federal civilian employees, we included all on-board staff, regardless of their pay status. In addition, we excluded Foreign Service workers at the State Department because those employees were not included in OPM data for the years after 2004. We examined on-board and annuitant counts, voluntary separations, adjusted base pay, and retirement eligibility trends by agency and occupation. Occupational categories include Professional, Administrative, Technical, Clerical, Blue Collar, and Other white-collar (PATCO) groupings and are defined by the educational requirements of the occupation and the subject matter and level of difficulty or responsibility of the work assigned. Occupations within each category are defined by OPM. To calculate voluntary separation rates, we counted the career permanent employees with personnel actions indicating they had separated from federal service through either mandatory or voluntary retirement and divided that count by the 2-year on-board average. To calculate retirement eligibility for the next 5 years, we computed the date at which each employee would be eligible for voluntary retirement at an unreduced annuity, using age at hire, years of service, birth date, and retirement plan coverage. We used the EHRI adjusted base pay to examine the annualized salaries of on-board individuals. It is important to note that this amount does not necessarily reflect the actual amount annuitants were paid in the fiscal year but, rather, the total annualized salary of annuitants in the data. We assessed the reliability of the EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies. We also reviewed our prior work assessing the reliability of these data and interviewed OPM officials knowledgeable about the data to discuss the data's accuracy and the steps OPM takes to ensure reliability. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of this report.
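The two calculations described above can be illustrated with a short sketch. The following Python fragment is a simplified illustration only: the separation codes are hypothetical stand-ins, and the eligibility rule shown is a single simplified threshold, whereas actual FERS and CSRS eligibility rules depend on retirement plan coverage and are more detailed.

```python
from datetime import date

def voluntary_separation_rate(separation_actions, on_board_counts):
    """Retirement separations divided by the 2-year average on-board
    count, mirroring the computation described above."""
    retirements = sum(
        1 for action in separation_actions
        if action in ("voluntary_retirement", "mandatory_retirement")
    )
    return retirements / (sum(on_board_counts) / len(on_board_counts))

def unreduced_retirement_date(birth_date, service_entry_date):
    """Earliest date eligible for voluntary retirement at an unreduced
    annuity under a simplified, hypothetical rule: age 60 with 20 years
    of service, or age 62 with 5 years of service."""
    def anniversary(d, years):
        return date(d.year + years, d.month, d.day)

    age_60_with_20 = max(anniversary(birth_date, 60),
                         anniversary(service_entry_date, 20))
    age_62_with_5 = max(anniversary(birth_date, 62),
                        anniversary(service_entry_date, 5))
    return min(age_60_with_20, age_62_with_5)

# Example: 120 retirement separations against on-board counts of
# 5,900 and 6,100 yields a 2 percent rate.
print(voluntary_separation_rate(
    ["voluntary_retirement"] * 120, [5900, 6100]))                    # 0.02
print(unreduced_retirement_date(date(1955, 6, 1), date(1990, 6, 1)))  # 2015-06-01
```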
To evaluate the extent to which OPM analyzes trends in the reasons for waiver requests and provides related guidance, we reviewed OPM's policies and procedures for evaluating waiver requests, analyzed documentation from OPM, and interviewed officials. To evaluate the extent to which OPM ensures compliance with conditions under which the waivers were granted, we reviewed relevant statutes, regulations, and OPM's policies and procedures for reviewing waiver requests, and we interviewed OPM officials. We also reviewed the 16 waiver decision letters that OPM was able to provide. According to OPM officials, the waivers were selected to represent examples of the types of requests for the different authorized waiver purposes. We were unable to assess whether the examples OPM provided to us were representative of the universe of waiver requests because of the conditions in which the files are maintained. We compared information gathered from reviewing these letters, as well as from interviews with OPM officials, to the statutory and regulatory provisions, OPM's policies and procedures, and internal control standards for the federal government. In addition to the individual named above, Chelsa Gurkin (Assistant Director), Anthony Patterson (Analyst-in-Charge), Vida Awumey, Sara Daleski, Karin Fangman, Kimberly McGatlin, and Rebecca Shea made major contributions to this report.
The federal workforce has a large number of retirement-eligible employees whose departures could result in a loss of skills, hindering federal agencies' ability to meet their missions. Agencies can mitigate this challenge by hiring uniformed and civil service retirees. Generally, when an agency reemploys a retired civil service employee, the employee's salary rate is subject to offset by the amount of the annuity received. Upon request, OPM has authority to waive offsets, allowing dual compensation (annuity and full salary). Dual compensation is also permitted under other authorities not administered by OPM, such as the authority provided to Defense. GAO was asked to provide information on the use of rehired annuitants and OPM's dual compensation waiver authority. This report (1) describes the trends in rehired annuitants for fiscal years 2004 to 2013; (2) identifies the extent to which OPM analyzes trends in the reasons for waiver requests and provides guidance to agencies; and (3) evaluates the extent to which OPM ensures agencies' compliance with the conditions under which the waivers were granted. GAO analyzed OPM data, reviewed OPM documentation, and interviewed OPM officials. Agencies' use of reemployed annuitants has increased, with the number of on-board retired uniformed and civil service annuitants increasing from over 95,000 in fiscal year 2004 to around 171,000 in fiscal year 2013 (from about 5 percent to 8 percent of the federal workforce). This is inclusive of reemployed annuitants with and without dual compensation waivers. The Department of Defense (DOD) accounted for about 80 percent of rehired annuitants in 2013; 98 percent of these annuitants were retired uniformed service members whose retirement pay is not subject to reduction. More than half of the total reemployed civilian annuitants in 2013, including DOD's civil service reemployed annuitants, would not be covered under the Office of Personnel Management's (OPM) dual compensation waiver authority. OPM officials said that they do not conduct trend analysis of dual compensation waiver requests and that they provide related guidance only as needed. While OPM is not required to conduct trend analysis, given the increasing number of retirement-eligible federal employees, without such analysis OPM may be missing opportunities to analyze information that can inform decisions about the human capital management tools it develops and provides for agencies government-wide. OPM's ability to conduct trend analysis is limited by its lack of a systematic and reliable process for maintaining dual compensation waiver documentation. The lack of policies and procedures is inconsistent with federal internal control standards and prevented OPM from retrieving the documentation in a timely manner for GAO's review. OPM is not required by statute to monitor agencies' implementation of individual dual compensation waivers to determine whether relevant requirements are followed. OPM regulations provide for limited oversight in delegated situations, where waiver authority is delegated to agencies without a time limit. GAO recommends that OPM analyze trends in agencies' use of dual compensation waivers and establish policies and procedures for maintaining waiver documentation. OPM did not concur with the first recommendation and partially concurred with the second. GAO maintains that OPM should implement these actions as discussed in the report.
In fiscal year 2004, much of our work examined the effectiveness of the federal government's day-to-day operations, such as administering benefits to the elderly and other needy populations, providing grants and loans to college students, and collecting taxes from businesses and individuals. Yet, we remained alert to emerging problems that demanded the attention of lawmakers and the public. For example, we continued to closely monitor developments affecting the Iraq war, defense transformation, homeland security, Social Security, health care, the U.S. Postal Service, civil service reform, and the nation's private pension system. We also informed policymakers about long-term challenges facing the nation, such as the federal government's financial condition and fiscal outlook, new security threats in the post-cold war world, the aging of America and its impact on our health care and retirement systems, changing economic conditions, and the increasing demands on our infrastructure—from highways to water systems. We provided congressional committees, members, and staff with up-to-date information in the form of reports, recommendations, testimonies, briefings, and expert comments on bills, laws, and other legal matters affecting the federal government. We performed this work in accordance with the GAO Strategic Plan for serving the Congress, consistent with our professional standards, and guided by our core values. See appendix I for our Strategic Plan Framework for serving the Congress and the nation. In fiscal year 2004, our work generated $44 billion in financial benefits, primarily from recommendations we made to agencies and the Congress (see fig. 1). Of this amount, about $27 billion resulted from changes to laws or regulations, $11 billion resulted from agency actions based on our recommendations to improve services to the public, and $6 billion resulted from our work to improve core business processes, both governmentwide and at specific agencies (see fig. 2). Our findings and recommendations produce measurable financial benefits for the federal government when the Congress or agencies act on them. The funds that are saved can then be made available to reduce government expenditures or be reallocated to other areas. The monetary effect realized can be the result of changes in business operations and activities; the structure of federal programs; or entitlements, taxes, or user fees. For example, financial benefits could result if the Congress were able to reduce the annual cost of operating a federal program or lessen the cost of a multiyear program or entitlement. Financial benefits could also result from increases in federal revenues—due to changes in laws, user fees, or sales—that our work helped to produce. Financial benefits included in our performance measures are net benefits—that is, estimates of financial benefits that have been reduced by the costs associated with taking the action that we recommended. Figure 3 lists several of our major financial benefits for fiscal year 2004 and briefly describes some of our work contributing to financial benefits. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2004, we recorded a total of 1,197 other benefits (see fig. 4).
We documented 74 instances where information we provided to the Congress resulted in statutory or regulatory changes, 570 instances where federal agencies improved services to the public, and 553 instances where agencies improved core business processes or governmentwide reforms were advanced (see fig. 5). These actions spanned the full spectrum of national issues, from ensuring the safety of commercial airline passengers to identifying abusive tax shelters. See figure 6 for examples of other benefits we claimed as accomplishments in fiscal year 2004. At the end of fiscal year 2004, 83 percent of the recommendations we made in fiscal year 2000 had been implemented (see fig. 7), primarily by executive branch agencies. Putting these recommendations into practice is generating tangible benefits for the American people. As figure 8 indicates, agencies need time to act on our recommendations. Therefore, we assess recommendations implemented after 4 years, the point at which experience has shown that, if a recommendation has not been implemented, it is not likely to be. During fiscal year 2004, experts from our staff testified at 217 congressional hearings (see fig. 9) covering a wide range of complex issues. For example, our senior executives testified on the financial condition of the Pension Benefit Guaranty Corporation's single-employer program, the effects of various proposals to reform Social Security's benefit distributions, and enhancing federal accountability through inspectors general. Nearly half of our testimonies were related to high-risk areas and programs. See figure 10 for a summary of issues we testified on, by strategic goal, in fiscal year 2004. Issued to coincide with the start of each new Congress, our high-risk update lists government programs and functions in need of special attention or transformation to ensure that the federal government functions in the most economical, efficient, and effective manner possible. Our latest report, released in January 2005, presents the status of high-risk areas identified in 2003 and lists new high-risk areas warranting attention by the Congress and the administration. In January 2003, we identified 25 high-risk areas; in July 2003, a twenty-sixth high-risk area was added to the list (see table 1). Since then, progress has been made in all areas, although the nature and significance of progress varies by area. Federal departments and agencies, as well as the Congress, have shown a continuing commitment to addressing these high-risk challenges and have taken various steps to help correct several of their root causes. GAO has determined that sufficient progress has been made to remove the high-risk designation from the following three areas: student financial aid programs, FAA financial management, and Forest Service financial management. Also, four areas related to IRS have been consolidated into two areas. This year, we designated four new high-risk areas. The first new area is establishing appropriate and effective information-sharing mechanisms to improve homeland security. Federal policy creates specific requirements for information-sharing efforts, including the development of processes and procedures for collaboration between federal, state, and local governments and the private sector. This area has received increased attention, but the federal government still faces formidable challenges sharing information among stakeholders in an appropriate and timely manner to minimize risk.
The second and third new high-risk areas are, respectively, DOD's approach to business transformation and its personnel security clearance program. GAO has reported on inefficiencies and inadequate transparency and accountability across DOD's major business areas, resulting in billions of dollars of wasted resources. Senior leaders have shown commitment to business transformation through individual initiatives in acquisition reform, business modernization, and financial management, among others, but little tangible evidence of actual improvement has been seen to date in DOD's business operations. DOD needs to take stronger steps to achieve and sustain business reform on a departmentwide basis. Further, delays by DOD in completing background investigations and adjudications can affect the entire government because DOD performs this function for hundreds of thousands of industry personnel from 22 federal agencies, as well as its own service members, federal civilian employees, and industry personnel. The Office of Personnel Management (OPM) is to assume DOD's personnel security investigative function, but this change alone will not reduce the shortages of investigative personnel. The fourth high-risk area is management of interagency contracting. Interagency contracts can leverage the government's buying power and provide a simplified and expedited method of procurement. But several factors can pose risks, including the rapid growth of dollars involved, the limited expertise of some agencies in using these contracts, and recent problems related to their management. Various improvement efforts have been initiated to address interagency contracting, but improved policies and processes, and their effective implementation, are needed to ensure that interagency contracting achieves its full potential in the most effective and efficient manner. Lasting solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the American public, strengthen public confidence and trust in the performance and accountability of our national government, and ensure the ability of government to deliver on its promises. In fiscal year 2004, we issued 218 reports and delivered 96 testimonies related to our high-risk areas and programs, and our work involving these areas resulted in financial benefits totaling over $20 billion. This work, for example, included 13 reports and 10 testimonies examining problems with DOD's financial management practices, such as weak internal controls over travel cards, inadequate management of payments to the Navy's telecommunications vendors, and abuses of the federal tax system by DOD contractors, resulting in $2.7 billion in financial benefits. In addition, we documented $700 million in financial benefits based on previous work and produced 7 reports and 4 testimonies focusing on, for example, improving Social Security Administration and Department of Energy processes that have resulted in inconsistent disability decisions and inconsistent benefit outcomes. Shortly after I was appointed in November 1998, I determined that GAO should undertake a major transformation effort to better enable it to "lead by example" and better support the Congress in the 21st century. This effort is consistent with House Report 108-577 on the fiscal year 2005 legislative branch appropriation that focuses on improving the efficiency and effectiveness of operations at legislative branch agencies. H.Rpt.
108-577 directed GAO to work closely with the head of each legislative branch agency to identify opportunities for streamlining, cross-servicing and outsourcing, leveraging existing technology, and applying management principles identified as "best practices" in comparable public and private sector enterprises. H.Rpt. 108-577 also directed the legislative branch agencies to be prepared to discuss recommended changes during the fiscal year 2006 appropriations hearing cycle. Our agency transformation effort has enabled GAO to become more results-oriented, partnerial, client-focused, and externally aware, and less hierarchical, process-oriented, "siloed," and internally focused. The transformation reduced organizational layers, cut the number of field offices, eliminated duplication in several areas, and improved our overall resource allocation. We began our transformation effort by using the GAO Strategic Plan as a framework to align our organization and its resources. On the basis of the strategic plan, we streamlined and realigned the agency to eliminate a management layer, consolidated 35 issue areas into 13 teams, and reduced our field offices from 16 to 11. We also eliminated the position of Regional Manager—a Senior Executive Service level position—in the individual field offices and consolidated the remaining field offices into three regions—the eastern region, the central region, and the western region, each headed by a single senior executive. Following the realignment of our mission organization and field offices, GAO's support organizations were restructured and centralized to eliminate duplication and to provide human capital, report production and processing, information systems desk-side support, budget and financial management, and other services more efficiently to agency staff. This has resulted in a 14 percent reduction in our support staff since 1998. As shown in figure 11, these and subsequent measures improved the "shape" of the agency by decreasing the number of mid-level managers and by increasing the number of entry-level and other staff with the skills and abilities to accomplish our work. During my tenure, GAO has outsourced and cross-serviced many administrative support activities, which has allowed GAO to devote more of its resources to mission work. In fiscal year 2004, about two-thirds of our nonhuman capital costs were spent to obtain critical mission support services for about 165 activities from the private and public sectors through outsourcing. Outsourcing contracts include a wide range of mission support activities, including information technology systems development, maintenance, and support; printing and dissemination of GAO products; operation and maintenance of the GAO Headquarters building; information, personnel, and industrial security activities; records management; operational support; and audit service support. GAO also meets many of its requirements through cross-servicing arrangements with other federal agencies. For example, GAO uses the Department of Agriculture's National Finance Center to process its personnel/payroll transactions. Also, GAO uses the legislative branch's long-distance telephone contract, which has resulted in continual reductions in long-distance rates. GAO also uses a wide range of contracting arrangements available in the executive branch for procuring major information technology (IT) services.
GAO also uses the Library of Congress' Federal Library and Information Network to procure all of its commercial online databases. Currently, as shown in figure 12, over 50 percent of our staff resources in the support area are contractors, allowing us to devote more of our staff resources to our mission work. We recently surveyed managers of agency mission support operations and identified additional activities that potentially could be filled through alternative sourcing strategies. In fiscal years 2005 and 2006, we will assess the feasibility of alternative sourcing for these activities using an acquisition sourcing maturity model and cost-benefit analyses. Using IT effectively is critical to our productivity, success, and viability. We have applied IT best management practices to take advantage of a wide range of available technologies, such as Web-based applications and Web-enabled information access, as well as modern, mobile computing devices such as notebook computers, to facilitate our ability to carry out our work for the Congress more effectively. We make wide use of third-party reviews of our practices and have scored well in measurement efforts such as total cost of ownership, customer service, and application development. In fiscal year 2002, an independent study of GAO's IT processes and related costs revealed that "GAO is delivering superb IT application support and development services to the business units at 29 percent less than the cost it would take the Government peer group to deliver." In confirmation of these findings, in fiscal year 2003, GAO was one of only three federal agencies to receive the CIO Magazine 100 Award for excellence in effectively managing IT resources to obtain the most value for every IT dollar. We were named to CIO Magazine's "CIO 100" for our excellence in managing IT resources in both 2003 and 2004. Because one of our strategic goals is to maximize our value by serving as a model agency for the federal government, we adopt best practices that we have suggested for other agencies, and we hold ourselves to the spirit of many laws that are applicable only to the executive branch. For example, we adhere to the best practices for results-oriented management outlined in the Government Performance and Results Act (GPRA). We have strengthened our financial management by centralizing authority in a Chief Financial Officer with functional responsibilities for financial management, long-range planning, accountability reporting, and the preparation of audited financial statements, as directed in the Chief Financial Officers Act (CFO Act). Also, for the eighteenth consecutive year, independent auditors gave GAO's financial statements an unqualified opinion with no material weaknesses and no major compliance problems. In the human capital area, we are clearly leading by example in modernizing our policies and procedures. For example, we have adopted a range of strategic workforce policies and practices as a result of a comprehensive workforce planning effort. Among other things, this effort has resulted in greatly upgrading our workforce capacity in both IT and health care policy. We also have updated our performance management and compensation systems and our training to maximize staff effectiveness and to fully develop the potential of our staff within both current and expected resource levels. We are requesting budget authority of $493.5 million for fiscal year 2006.
This budget request will allow us to continue to maximize productivity, operate more effectively and efficiently, and maintain the progress we have made in technology and other areas. However, it does not allow us sufficient funding to support a staffing level of 3,269—the staffing level that we requested in previous years. In preparing this request, we conducted a baseline review of our operating requirements and reduced them as much as we felt would be prudent. However, with about 80 percent of our budget composed of human capital costs, we needed to constrain hiring to keep our fiscal year 2006 budget request modest. We plan to use the recently enacted human capital flexibilities from the GAO Human Capital Reform Act of 2004 as a framework to consider such cost-saving options as conducting one or more voluntary early retirement programs, and we also plan to review our total compensation policies and approaches. Demands on GAO's resources continue to grow. Since fiscal year 2000, we have experienced a 30 percent increase in the number of bid protest filings. We expect this workload to increase over the coming months because of a recent change in the law that expands the number of parties who are eligible to file protests. In addition, the number of congressional mandates for GAO studies, such as our reviews of executive branch and legislative branch operations, has increased more than 15 percent since fiscal year 2000. While we have reduced our planned staffing level for fiscal years 2005 and 2006, we believe that the staffing level we requested in previous years is closer to optimal for GAO and would allow us to successfully meet the future needs of the Congress and provide the return on investment that the Congress and the American people expect. We will be seeking your commitment and support to provide the funding needed to rebuild our staffing levels over the next few fiscal years, especially as we approach a point where we may be able to express an opinion on the federal government's consolidated financial statements. Given current and projected deficits and the demands associated with managing a growing national debt, as well as challenges facing the Congress to restructure federal programs, reevaluate the role of government, and ensure accountability of federal agencies, a strong GAO will result in substantially greater benefits to the Congress and the American people. Table 2 summarizes the changes we are requesting in our fiscal year 2006 budget. Our budget request supports three broad program areas: Human Capital, Mission Operations, and Mission Support. In our Human Capital program, to ensure our ability to attract, retain, and reward high-quality staff and compete with other employers, we provide competitive salaries and benefits, student loan repayments, and transit subsidy benefits. We have undertaken reviews of our classification and compensation systems to consider ways to make them more market-based and performance-oriented and to take into consideration market data for comparable positions in organizations with which we compete for talent. Our rewards and recognition program recognizes significant contributions by GAO staff to the agency's accomplishments. As a knowledge-based, world-class, professional services organization in an environment of increasingly complex work and accelerating change, we maintain a strong commitment to staff training and development. We promote a workforce that continually improves its skills and knowledge.
We plan to allocate funds to our Mission Operations program to conduct travel and to contract for expert advice and assistance. Travel is critical to accomplishing our mission. Our work covers a wide range of subjects of congressional interest, plays a key role in congressional decision making, and can have profound implications and ramifications for national policy decisions. Our analyses and recommendations are based on original research, rather than reliance on third-party source materials. In addition, GAO is subject to professional standards and core values that uniquely position the agency to support the Congress in discharging its oversight and other responsibilities under the Constitution. We use contracts to obtain expert advice or assistance that is not readily available within GAO or that is needed within compressed time frames for a particular project, audit, or engagement. Examples of contract services include obtaining consultant services, conducting broad-based studies in support of audit efforts, gathering key data on specific areas of audit interest, and obtaining technical assistance and expertise in highly specialized areas. Mission Support programs provide the critical infrastructure we need to conduct our work. Mission support activities include the following programs: Information Technology: Our IT plan provides a road map for ensuring that IT activities are fully aligned with and enable achievement of our strategic and business goals. The plan focuses on improved client service, IT reliability, and security; it promotes effectiveness, efficiency, and cost-benefit concepts. In fiscal years 2005 and 2006, we plan to continue to modernize outdated management information systems to eliminate redundant tasks, automate repetitive tasks, and increase staff productivity. We also will continue to modernize or develop systems focusing on how analysts do their work. For example, we enhanced the Weapons Systems Database that we created to provide the Congress information to support budget deliberations. Building Management: The Building Management program provides operating funds for the GAO Headquarters building and field office locations, safety and security programs, and asset management. We periodically assess building management components to ensure program economy, efficiency, and effectiveness. We are currently 8 percent below the General Services Administration's (GSA) median costs for facilities management. We continue to look for cost-reducing efficiencies in our utility usage. Our electrical costs are currently 25 percent below GSA's median cost. With the pending completion of our perimeter security enhancements and an automated agencywide access control system, all major security enhancements will have been completed. Knowledge Services: As a knowledge-based organization, it is essential for GAO to gather, analyze, disseminate, and archive information. Our Knowledge Services program provides the information assets and services needed to support these efforts. In recent years, we have expanded our use of electronic media for publications and dissemination; enhanced our external Web site, resulting in increased public access to GAO products; and closed our internal print plant and increased the use of external contractors to print GAO products, increasing the efficiency and cost-effectiveness of our printing operation. Due to recent budget constraints, we have curtailed some efforts related to archiving paper records.
We currently are implementing an electronic records management system that will facilitate knowledge transfer, as well as document retrieval and archival requirements. Human Capital Operations: In addition, funds will be allocated to Human Capital Operations and support services to cover outplacement assistance, employee health and counseling, position management and classification, administrative support, and transcription and translation services. We appreciate your consideration of our budget request for fiscal year 2006 to support the Congress. GAO is uniquely positioned to help provide the Congress the timely, objective information it needs to discharge its constitutional responsibilities, especially in connection with oversight matters. GAO’s work covers virtually every area in which the federal government is or may become involved anywhere in the world. In the years ahead, GAO’s support will prove even more critical because of the pressures created by our nation’s large and growing long-term fiscal imbalance. This concludes my statement. I would be pleased to answer any questions the Members of the Subcommittee may have.
This testimony is in support of the fiscal year 2006 budget request for the U.S. Government Accountability Office (GAO). This request is necessary to help us continue to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. We are grateful to the Congress for providing us with the support and resources that have helped us in our quest to be a world-class professional services organization. We believe that investing in GAO produces a sound return and results in substantial benefits to the Congress and the American people. In the years ahead, our support to the Congress will likely prove even more critical because of the pressures created by our nation's current and projected budget deficit and long-term fiscal imbalance. These fiscal pressures will require the Congress to make tough choices regarding what the government should do, how it will do its work, who will help carry out its work in the future, and how government will be financed in the future. We summarized the larger challenges facing the federal government in our recently issued 21st Century Challenges report. In this report, we emphasize the critical need to bring the federal government's programs and policies into line with 21st century realities. Continuing on our current unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. We, therefore, must fundamentally reexamine major spending and tax policies and priorities in an effort to recapture our fiscal flexibility and ensure that our programs and priorities respond to emerging security, social, economic, and environmental changes and challenges in the years ahead. This testimony focuses on GAO's (1) performance and results with the funding Congress provided in fiscal year 2004, (2) streamlining and management improvement efforts under way, and (3) budget request for fiscal year 2006 to support the Congress and serve the American people. In summary, the funding we received in fiscal year 2004 allowed us to audit and evaluate a number of major topics of concern to the nation and, in some cases, the world. We also continued to raise concerns about the nation's long-term fiscal imbalance, summarized key health care statistics and published a proposed framework for related reforms, and provided staff support for the 9/11 Commission. In fiscal year 2004, we exceeded or equaled our all-time record for six of our seven key performance indicators while continuing to improve our client and employee feedback results. We documented $44 billion in financial benefits--a return of $95 for every dollar spent, or $13.7 million per employee. In fiscal year 2004, we also recorded 1,197 other benefits that could not be measured in dollar terms, including benefits that helped to change laws, improve services to the public, and promote sound agency and governmentwide management. Also, experts from our staff testified at 217 congressional hearings covering a wide range of important public policy issues during fiscal year 2004. Shortly after the Comptroller General was appointed, he determined that our agency would undertake a transformation effort. This effort is consistent with the elements of House Report (H.Rpt.) 108-577 that focus on improving the efficiency and effectiveness of operations at legislative branch agencies.
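The return-on-investment figures cited above are internally consistent, as a quick back-of-the-envelope check shows. The short Python sketch below back-solves the implied fiscal year 2004 spending and staffing from the stated ratios; these implied figures are inferences for illustration only, not numbers stated in the testimony.

```python
# Consistency check on GAO's reported fiscal year 2004 results.
# The implied budget and staffing are back-solved from the stated ratios;
# neither figure is given directly in the testimony.

financial_benefits = 44e9        # documented financial benefits, dollars
return_per_dollar = 95           # stated return for every dollar spent
benefits_per_employee = 13.7e6   # stated financial benefits per employee

implied_spending = financial_benefits / return_per_dollar       # ~$463 million
implied_staffing = financial_benefits / benefits_per_employee   # ~3,200 staff

print(f"Implied FY2004 spending: ${implied_spending / 1e6:.0f} million")
print(f"Implied FY2004 staffing: {implied_staffing:,.0f} employees")
```

The implied staffing of roughly 3,200 sits just below the 3,269 level requested in previous years, which is consistent with the constrained hiring described elsewhere in this testimony.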
Our transformation effort has enabled us to eliminate a management layer, streamline our organization, reduce our overall footprint, and centralize many of our support functions. Currently, over 50 percent of our support staff are contractors, allowing us to devote more of our staff resources to our mission work. We recently surveyed managers of agency support operations and identified additional activities that potentially could be filled through alternative sourcing strategies. In fiscal years 2005 and 2006, we will further assess the feasibility of using alternative sourcing for these activities. In developing our fiscal year 2006 budget, we have taken into consideration the overall federal budget constraints and the committee's desire to lead by example. Accordingly, we are requesting $493.5 million, which represents a modest increase of 4 percent over fiscal year 2005. This increase is primarily for mandatory pay costs and price level changes. This budget request will allow us to continue to maximize productivity, operate more effectively and efficiently, and maintain the progress we have made in technology and other areas, but it does not allow us sufficient funding to support a staffing level of 3,269--the staffing level that we requested in previous years. Even as we temper our budget request, demands on GAO's resources continue to increase. For example, the number of congressional mandates for GAO studies, such as GAO reviews of executive branch and legislative branch operations, has increased more than 15 percent since fiscal year 2000. While we have reduced our planned staffing level for fiscal years 2005 and 2006 in order to keep our request modest, we believe that the staffing level we requested in previous years is closer to optimal for GAO and would allow us to better meet the needs of the Congress and provide the return on investment that both the Congress and the American people expect. We will be seeking congressional commitment and support to provide the funding needed to rebuild our staffing levels over the next few fiscal years, especially as we approach a point where we may be able to express an opinion on the federal government's consolidated financial statements.
Patriot is a mobile Army surface-to-air missile system designed to counter tactical ballistic missiles; cruise missiles; and other threats such as airplanes, helicopters, and unmanned aerial vehicles. Patriot was first deployed in the early 1980s and since that time has received a number of substantial updates to keep pace with the growing threat. Patriot is deployed worldwide in defense of the United States and its allies’ key national interests, ground forces, and critical assets. A Patriot fire unit is made up of four basic components: (1) a ground-based radar to detect and track targets; (2) launchers; (3) interceptor missiles; and (4) a command, control, and communication station. Patriot fire units are organized to fight in groups known as battalions. Each battalion is controlled by its own command and control station and can manage up to six fire units, although a battalion is typically deployed with four. For a notional configuration of a Patriot battalion, see figure 1. Several battalions can be commanded by an Army brigade. Brigades are also responsible for certifying that the equipment can be employed as required and for training the battalions. The brigade manages battalion personnel under its command, with the ability to transfer personnel among battalions to fill personnel gaps as needed. The air and missile defense architecture consists of several systems deployed together to provide a layered defense against various threats in a range of battlespaces. Like Patriot, other air and missile defense systems can contain a sensor, a launcher, and a system-centric command and control station. These systems’ command and control stations can share information with other air and missile defense systems or with other joint systems through external communication links, as seen in figure 1. The air and missile defense architecture includes systems designed to counter threats at a low altitude—such as rockets, artillery, and mortar—as well as systems designed to defeat high-altitude threats intercepted above the earth’s atmosphere. Patriot serves as the Army’s primary element deployed to intercept targets in this middle range of battlespace—above the range of rockets, artillery, and mortar, but within the earth’s atmosphere. The Army has identified a number of air and missile defense communication and performance capability gaps in its ability to address evolving global threats. Over the last decade, adversaries have acquired more robust, diverse, and complex threats. According to a 2010 Ballistic Missile Defense Review Report, ballistic missiles are more technically sophisticated, more proliferated, include more advanced countermeasures, and continue to challenge U.S. ballistic missile defense system capabilities. Cruise missiles have also become relatively simple to develop, are cheaper than ballistic missiles or aircraft, and are easy to export. Additionally, advanced electronic attacks, such as jamming or spoofing, have become more widespread and easier to produce effectively. Sophisticated enemies also have the ability to use a combination of integrated attacks including electronic and cyber warfare, a variety of inbound ballistic and cruise missiles, special operation forces, and other methods to complicate the battlespace. The Army has identified some high-priority air and missile defense gaps in its ability to respond to the growing threats, as seen in table 1.
The Army announced an Air and Missile Defense Strategy in 2012 to address communication and performance capability gaps by integrating its current air and missile defense system components (e.g., sensors and launchers), including Patriot, under a central network and command and control system and linking them with joint and potential coalition allies. The Integrated Air and Missile Defense (IAMD) program is currently developing the IAMD Battle Command System (IBCS), which is planned to connect Patriot radars and launchers into IBCS’s central network and command and control stations. By connecting these components directly with IBCS, the Army intends to divest air and missile defense systems of their system-specific command and control stations and allow them to become network-enabled sensors and launchers. See figure 2 below for a notional representation of the future Integrated Air and Missile Defense architecture. The Army intends for the integrated air and missile defense architecture to address communication and performance capability gaps by allowing IBCS to collect information from a variety of sensors, fuse that data into a single battlespace picture, and use that information to engage targets. Receiving data from a range of sensors could enable longer-distance engagements and provide commanders with more decision time to select the appropriate response, prevent fratricide, and allow any joint sensor to pair with the best available launcher. In addition, by integrating several individual sensors’ data, IBCS could compare and resolve conflicts within the individual systems’ abilities to accurately classify, identify, and discriminate potential threat objects to provide more accurate data back to the systems. IBCS could also help mitigate the risk of electronic attack since additional sensor data could help confirm where targets are when individual radars are being jammed or spoofed. In addition, because launchers would have access to additional sensor data, they could see more of the battlespace and use that information to more effectively engage threats. IBCS is intended to multiply the performance capabilities of the individual sensors and launchers connected to its network. Therefore, the capability of the networked architecture relies upon the ability of Patriot, as well as other air and missile defense systems, to connect with IBCS and provide the quality data needed for enhanced performance capabilities. Similar endeavors to create a system-of-systems architecture with an extensive communication and information network have proved challenging for DOD in the past. For example, as our prior work on the Army’s Future Combat Systems showed, that multibillion dollar development program—originally consisting of 18 manned and unmanned systems tied together by an extensive communications and information network—faced rising costs and technical challenges that eventually led to its cancellation. In 2014, DOD provided guidance to the Army for conducting its LTAMD analysis of alternatives (AOA) to explore options for an efficient and cost-effective long-term radar and launcher solution—with considered alternatives ranging from the current Patriot assets with modifications up to total replacements—that will be able to connect with IBCS and address capability needs related to radar reliability, range, and 360-degree surveillance. The AOA results will support a decision for a new radar acquisition program, known as the LTAMD sensor, that will require a significant long-term financial investment.
Issues with the Patriot radar have been raised in the past. For example, the Director of Operational Test and Evaluation has identified performance and reliability issues with the current Patriot radar in its annual reports since 2013. In addition, the Army conducted a business case analysis in 2013 and found that upgrades to the Patriot radar could result in operations and support savings, performance improvements, and reliability enhancements. An AOA is a key first step in the acquisition process, intended to assess alternative solutions for addressing a validated need. AOAs are generally performed or updated to support key acquisition decision points. During the course of our audit, an official in the Office of the Secretary of Defense for Cost Assessment and Program Evaluation (CAPE) stated that he expected the final LTAMD AOA report to receive approval in the third quarter of fiscal year 2016. As of August 2016, the report was still under independent review with the CAPE. To prepare the warfighter for the transition from the current, or legacy, Patriot system to IBCS-integrated Patriot radars and launchers, the Patriot program identified a need for training upgrades. Upgraded training aids and devices are necessary because transitioning to IBCS changes the way the warfighter employs the Patriot equipment. The Patriot program has also identified a need to continue substantial investments to address obsolescence and sustainment issues. For example, upgrading all of the legacy Patriot battalions to IBCS-integrated radars and launchers is an 8-year process that officials expect to begin in fiscal year 2017 and complete in fiscal year 2025. The legacy Patriot system components need ongoing obsolescence and sustainment improvements to improve reliability and availability, remain affordable, and be compatible with the different versions of operational Patriot battalions during that time. In addition, the program intends to continue obsolescence and sustainment investments to maintain readiness, improve reliability, and lower sustainment costs to support deployed forces with legacy radars until the legacy radar is fully replaced. Officials estimate that a new radar development could begin fielding in the fiscal year 2028 time frame, with tactical fielding completed within 7 years. However, these plans are still preliminary and the milestone approval process is still underway. Lastly, obsolescence and sustainment improvements support legacy versions of Patriot systems, which foreign military partners continue to buy and operate. Patriots have been sold worldwide to 12 foreign military partners, who share costs for sustainment and capability improvements in addition to investing in development to mitigate system obsolescence. The currently fielded version of Patriot represents an improvement over prior versions through upgraded software, a more capable missile, and increased processor capabilities. However, the current version demonstrated a number of performance shortfalls against its documented requirements. In addition, warfighters from various combatant commands have expressed critical needs for additional performance capabilities and training equipment for the Patriot system that are currently unmet. The current version of the Patriot system added performance capabilities through a software and processor upgrade in 2013 and an upgraded missile and launcher that began fielding in fiscal year 2016.
In 2013, the Patriot program released its current system software upgrade, known as Post Deployment Build-7 (PDB-7), which provided improvements in threat tracking, debris mitigation, and user interface. The software is supported by a new modern processor in the command and control station. This new processor provides Patriot with the ability to process more complex algorithms that improve the system’s capabilities against advanced threats. It also provides a platform for future capability improvements. Lastly, a launcher upgrade allows the system to launch and support use of the new Patriot Advanced Capability-3 (PAC-3) Missile Segment Enhancement (MSE) missile. The PAC-3 MSE, budgeted for and managed under a separate acquisition program, was fielded in the first quarter of fiscal year 2016 and improves on the predecessor PAC-3 missile by providing better lethality and a longer range—flying approximately 50 percent higher in altitude and 100 percent farther downrange. While the system has made improvements, operational testing revealed that the system requires significant upgrades to the radar and software to bring the system up to the level of capabilities required. Operational testing is a field test of a system or item under realistic operational conditions with users who represent those expected to operate and maintain the system when it is fielded or deployed. The Army conducted a type of operational test called a limited user test in 2012 to evaluate the Patriot system with PDB-7 software, the modern command and control processor, and the PAC-3 MSE with the launcher upgrade against requirements defined in the program’s capability development and production documents. The Director of Operational Test and Evaluation’s (DOT&E) report on the results of the limited user test is classified, but it generally found that Patriot’s performance improved against some threats compared to prior versions but had degradations in system effectiveness against other threats. An unclassified summary of Patriot performance shortfalls, as identified by DOT&E and the Army, is shown in table 2. Some of the performance shortfalls can be attributed to the radar’s limited sensing abilities. While the PAC-3 MSE missile has an expanded battlespace over the PAC-3 missile, the radar is not able to sense and support the full range and capabilities of PAC-3 MSE. In addition, since experiencing fratricides during Operation Iraqi Freedom in 2003, the program has been working on upgrades to the system’s ability to more accurately classify, identify, and discriminate threat objects. While significant enhancements have been made since that time, the program requires additional capabilities to meet requirements. The risks of these performance shortfalls, left unaddressed, range from erroneous engagements and missile wastage to mission failure or fratricide. In addition, DOT&E’s limited user test report found that the Patriot system as a whole did not meet the reliability requirement, but would have if the Patriot radar had achieved its reliability goal. The metric for determining reliability is the average number of hours between critical failures that place the system out of service and into a state of repair. Although the system is required to run at least 20 hours on average between critical failures, during the limited user test the Patriot fire unit fell short, demonstrating an average time of around 11 hours.
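Because a Patriot fire unit experiences a critical failure whenever any critical component fails, its reliability behaves like that of a series system, in which component failure rates (the reciprocals of the mean times between critical failures, or MTBCF) add. The Python sketch below illustrates the radar’s leverage over the fire-unit result; it is a minimal sketch that assumes exponentially distributed failures and a hypothetical 75 percent radar share of critical failures, a value consistent with the “more than 70 percent” reported from the test and discussed below.

```python
# Series-system reliability sketch: component failure rates (1/MTBCF) add.
# Known figures: the fire unit demonstrated ~11 hours between critical
# failures, and the radar's reliability goal is 38 hours. The 0.75 radar
# share of critical failures is a hypothetical value consistent with the
# reported "more than 70 percent."

system_mtbcf = 11.0          # demonstrated fire-unit hours between failures
radar_share = 0.75           # hypothetical fraction of failures from radar
radar_goal_mtbcf = 38.0      # radar reliability goal, in hours

system_rate = 1 / system_mtbcf
radar_rate = radar_share * system_rate       # radar's demonstrated rate
other_rate = system_rate - radar_rate        # all other components combined

improved_system_rate = 1 / radar_goal_mtbcf + other_rate
improved_system_mtbcf = 1 / improved_system_rate

print(f"Demonstrated radar MTBCF: {1 / radar_rate:.1f} hours")                    # ~14.7
print(f"Fire-unit MTBCF with radar at goal: {improved_system_mtbcf:.1f} hours")   # ~20.4
```

Under these assumptions, a radar meeting its 38-hour goal lifts the fire unit just past the 20-hour requirement, which mirrors DOT&E’s finding that the radar alone kept the system from meeting the requirement.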
More than 70 percent of the critical mission failures during the test were experienced by the radar. Had the radar achieved its requirement of at least 38 hours, the fire unit would have exceeded the 20-hour requirement. Army officials attribute the radar reliability problems to a number of parts, including obsolete technology, which require high levels of maintenance. Overly frequent critical failures can create vulnerabilities for the system and defended assets when the equipment is taken offline for maintenance actions. The warfighter has identified several capability needs for the Patriot system that are currently unmet. One of the ways that warfighters in various combatant commands express their capability needs is through memos known as operational needs statements. The warfighter has identified an operational need for capabilities to address many of the same air and missile defense capability gaps for performance and communications previously identified in table 1. While the shift to the Army’s IBCS, planned for initial fielding in fiscal year 2018, is designed to address the capability need for joint integration below the battalion, the warfighter has requested this new capability be fielded sooner. Warfighters have also identified a need for reconfigurable training assets and simulations for training in a variety of settings to operate and maintain the system. See table 3 for current operational needs statements. To address a diverse set of capability needs to mitigate evolving threats, the Army is planning to field a number of upgrades, as well as a long-term radar solution, projected to cost $2.9 billion through fiscal year 2021, with additional costs needed for its long-term solutions. The program successfully completed developmental testing on near- and mid-term upgrades in 2016. However, two operational test campaigns, consisting of multiple ground and flight tests and currently planned to begin in late fiscal year 2016 and in 2019, should demonstrate how well the near- and mid-term upgrades work as intended and identify any performance shortfalls that may require additional development. The Army is fielding a number of upgrades in order to address divergent needs identified by the Army, the program office, independent test officials, and warfighters, as discussed previously and summarized below in table 4. The Army has budgeted $2.9 billion in three budget lines for development and procurement between fiscal years 2013 and 2021 for various upgrades and a long-term radar solution. Specifically, the Army is budgeting for three ongoing upgrades to address obsolescence issues; four near-term hardware upgrades that begin fielding prior to fiscal year 2017; six mid-term upgrades and supporting equipment that will begin fielding between fiscal years 2017 and 2021; and long-term upgrades—including a long-term radar solution—the details for which are still being determined. Costs are expected to continue beyond fiscal year 2021 to finish purchasing the necessary number of modifications already in production as well as to develop and procure long-term solutions required to address some of the capability needs. See figure 3 for more details on how costs are allocated among the obsolescence, near-term, mid-term, and long-term upgrades. Additional details on the upgrades, including planned cost and schedule, are included below.
The Army has spent nearly $306.3 million since fiscal year 2013 and plans to spend an additional $361.5 million through fiscal year 2021 for various obsolescence upgrades that have been ongoing in the program for years and are planned to continue. These upgrades improve readiness and reduce future operation and sustainment costs for Patriot components. Additional details on these upgrades and the Patriot capability needs they plan to address are included in table 5. Requests for funding for these three ongoing upgrades to address obsolescence issues are expected to continue beyond fiscal year 2021. See figure 4 for planned costs between fiscal years 2013 and 2021. The Army has spent nearly $273.9 million since fiscal year 2013 and plans to spend an additional $553.7 million through fiscal year 2021 for near-term upgrades that begin fielding prior to fiscal year 2017 to address critical communication needs, ensure legacy components are sustainable, and address warfighter needs for system capability and training. For details on the near-term upgrades and the Patriot capability needs they plan to address, see table 6. The fielding schedule for Patriot near-term upgrades is included in figure 5, along with the total planned costs from fiscal years 2013-2021. However, the program will need to request additional funds beyond fiscal year 2021 to complete the purchase of launcher upgrades. Fielding for some of the training software and hardware devices began prior to fiscal year 2013. The Army has spent nearly $553.1 million since fiscal year 2013 and plans to spend an additional $437.3 million for mid-term upgrades and supporting test equipment that begin fielding between fiscal years 2017 and 2021. Among the mid-term upgrades is the remaining hardware needed—a radar digital processor—to prepare the system for integration with IBCS. Also key among these upgrades is a major software upgrade called Post Deployment Build-8 (PDB-8), which, in addition to a second software upgrade called PDB-8.1, is intended to improve communications and system capabilities against threats. Together, these mid-term upgrades, along with a test detachment, are intended to improve system performance, address warfighter needs, reduce obsolescence, and support Patriot testing needs. For details on the mid-term upgrades and test detachment and the Patriot capability needs they plan to address, see table 7. The fielding schedule and total planned costs for Patriot mid-term upgrades between fiscal years 2013 and 2021 are included in figure 6. Costs for PDB-8 and PDB-8.1 software-related tasks are estimated based on software-related tasks in the budget. Congress recommended reductions in requested development funding for software-related efforts by 50 percent or more each year between fiscal years 2013 and 2015, amounting to nearly $200 million in reductions. According to program officials, these reductions caused the program to delay some planned capabilities from PDB-8 until PDB-8.1. Officials explained that software capabilities currently planned for PDB-8.1 could be affected by available funding in any given year and may lead to deferring capability into future software upgrades. The program has already planned to continue software capability costs beyond fiscal year 2019 for future software improvements in the missile, launcher, or radar components following PDB-8.1. Additional details on the status of the development and procurement of Patriot’s near- and mid-term upgrades are included in appendix III.
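These per-category amounts reconcile with the program-level totals cited later in this report (approximately $1.1 billion spent since 2013 and another $1.8 billion planned through fiscal year 2021). A minimal roll-up sketch follows; the long-term figures ($8.5 million spent and $437.8 million planned) are taken from the discussion in the next section.

```python
# Roll-up of Patriot upgrade spending, in millions of dollars, using the
# per-category figures cited in this report. "spent" covers fiscal year
# 2013 to date; "planned" covers the remainder through fiscal year 2021.

upgrades = {
    "obsolescence": {"spent": 306.3, "planned": 361.5},
    "near-term":    {"spent": 273.9, "planned": 553.7},
    "mid-term":     {"spent": 553.1, "planned": 437.3},
    "long-term":    {"spent": 8.5,   "planned": 437.8},  # from the next section
}

total_spent = sum(u["spent"] for u in upgrades.values())      # ~$1,141.8M
total_planned = sum(u["planned"] for u in upgrades.values())  # ~$1,790.3M

print(f"Spent since FY2013:     ${total_spent:,.1f} million")                   # ~$1.1 billion
print(f"Planned through FY2021: ${total_planned:,.1f} million")                 # ~$1.8 billion
print(f"Total FY2013-FY2021:    ${total_spent + total_planned:,.1f} million")   # ~$2.9 billion
```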
The Army has spent around $8.5 million since fiscal year 2013 and plans to spend an additional $437.8 million between fiscal years 2017 and 2021 for long-term software and radar solutions to continue to address capability needs. Of the planned $437.8 million, the program has initially budgeted around $74 million in fiscal years 2020 and 2021 for future software improvements in the missile, launcher, or radar components beyond PDB-8.1, with plans to continue software investments beyond 2021. The remaining $364.1 million is planned through fiscal year 2021 as a portion of total expected costs for a long-term radar solution. These costs are part of a program funding line established in the 2017 president’s budget that the Army plans to manage as a new major defense acquisition program, known as the LTAMD sensor, beginning in fiscal year 2016. This long-term LTAMD sensor solution will be selected based on the findings of the ongoing LTAMD AOA, which is being conducted as a result of concerns over the current Patriot radar’s high obsolescence and sustainment costs as well as issues with performance and reliability. For additional information on the AOA, see appendix II. Many radar options are being considered in the AOA, ranging from the current Patriot radar with some modifications all the way up to a brand-new radar development. Officials estimate that fielding for the selected radar solution could begin in the fiscal year 2028 time frame, with tactical fielding to be completed within 7 years. Depending on the Army’s selected radar solution, costs could increase and continue well beyond fiscal year 2021 for additional development as well as for procurement costs, which have not yet been determined. A breakdown of total planned costs from fiscal years 2013 to 2021 for long-term upgrades as well as a long-term radar solution is included in figure 7. The Patriot program successfully completed developmental testing on the system configured with near- and mid-term upgrades, in addition to completing some limited developmental testing on the current PDB-7 version integrated with IBCS. Test and evaluation activities are an integral part of developing and producing weapon systems, as they provide knowledge of a system’s capabilities and limitations as it matures and is eventually delivered for use by the warfighter. Developmental testing, which is conducted by contractors, university and government labs, and various DOD organizations, is intended to provide feedback on the progress of a system’s design process and its combat capability as it advances toward initial production or deployment. The Patriot program successfully completed developmental testing in fiscal year 2016 for the system configured with near- and mid-term hardware upgrades. The Army Test and Evaluation Command conducted system-level developmental testing for Patriot configured with PDB-8 software in addition to other hardware upgrades, including modernized displays in the command and control stations, the PAC-3 MSE with the supporting launcher upgrades, and the radar digital processor. As part of this test, the program successfully conducted four flight tests. These flight tests demonstrated the system’s ability to intercept targets using a variety of Patriot missiles, including the PAC-3 MSE. The Army Test and Evaluation Command also performed testing on individual hardware upgrades with favorable results.
For example, the command conducted some limited testing on the program’s new communication terminals and found that the upgrades generally work as intended. However, additional testing to evaluate the full functionality of the terminals is required prior to full materiel release. The IAMD program conducted two developmental flight intercept tests in 2015 of the PDB-7 version of Patriot integrated with IBCS, which also met main objectives. During one of these tests, IBCS was able to command a Patriot launcher to launch a missile and destroy a target using tracking data from another Army system radar. The program currently has two operational tests planned through 2020 that will test the system configured with upgraded software PDB-8 and PDB-8.1 as well as with assorted near-term and mid-term hardware upgrades, as seen in table 8. Operational test and evaluation is intended to evaluate a system’s effectiveness and suitability under realistic combat conditions before full-rate production or deployment occurs. Operational testing for PDB-8 is planned to begin in the fourth quarter of fiscal year 2016 and complete in the fourth quarter of fiscal year 2017. Operational testing for PDB-8.1 is planned to begin in the fourth quarter of fiscal year 2019 and complete in the third quarter of fiscal year 2020. While developmental testing thus far has been successful, the results of operational test and evaluation will reveal the extent to which many of the upgrades work as intended to address some of Patriot’s diverse capability needs. For example, operational testing for PDB-8 will evaluate how well the software and hardware upgrades address the previously identified performance shortfalls from PDB-7—including issues with the radar’s reliability. In addition, the test will also evaluate the effectiveness and efficiency of training aids and devices that are being procured to address warfighter needs. Operational testing for PDB-8.1 is planned to evaluate how well PDB-8.1 software capability upgrades effectively address remaining system performance needs. According to Army Test and Evaluation Command officials, upgrades that have not yet begun production, such as the global positioning anti-jamming hardware upgrade and the radar anti-jamming upgrade, have not yet been incorporated into testing plans. However, near- and mid-term upgrades are not expected to fully address all of the Patriot capability needs, which will require long-term upgrade solutions. For example, the program plans for its near- and mid-term upgrades to provide significant enhancements to radar reliability and sensing range to support the PAC-3 MSE missile’s mission against stressing threats, but does not expect them to fully address the performance needs without the long-term radar solution. In addition, currently planned software upgrades are intended to provide capabilities to help address tactical ballistic missile threats and electronic attacks, but additional long-term software—and potentially additional hardware—investments are needed to continue improving capabilities against the evolving threat, which continues to create new gaps in the system’s capabilities. Operational testing results could identify unexpected performance shortfalls in the near- and mid-term upgrades that require additional development. In the case of PDB-7, for example, operational test results identified unexpected performance shortfalls in system reliability that required additional development in the latest near- and mid-term upgrades to address.
Operational testing for PDB-8 or PDB-8.1 could also identify unexpected performance shortfalls that require additional development to insert capabilities into future software or hardware upgrades for Patriot components. Oversight of Patriot upgrades has been limited because of how the Army chose to define and manage them, including not establishing oversight mechanisms similar to those generally applicable to major defense acquisition programs. The Army chose to incorporate the Patriot upgrade efforts into the existing Patriot program, which made certain oversight mechanisms inapplicable. While it would not be productive for the program to go back and establish these mechanisms from the start of development, upcoming operational tests provide the Army with an opportunity to provide oversight and ensure accountability for the cost, schedule, and performance of near- and mid-term upgrades, tested along with PDB-8 and PDB-8.1, if further development is needed. Up to this point, the Patriot program has not put a mechanism in place to track or report progress against cost, schedule, or performance baselines of its upgrade efforts, similar to those generally required of multibillion dollar DOD acquisition programs. Under DOD Instruction 5000.02 and related statutes, major defense acquisition programs (MDAPs) are subject to a number of oversight mechanisms that provide transparency into program plans and progress. Although the Army’s 2013 cost estimate for all the Patriot upgrades met the threshold to be considered a separate MDAP, the Army chose not to define the upgrade efforts as such. Instead, the upgrades were incorporated into the existing Patriot program, which resulted in the upgrade efforts not being separately subject to statutory and regulatory reporting requirements that generally apply to MDAPs. In addition, the program did not establish any oversight mechanisms for the upgrades that were similar to those generally required of MDAPs. For example, new MDAPs are generally required to establish an approved program baseline that includes initial estimates for key cost, schedule, and performance metrics at the beginning of system development, at the start of production, and before the start of full-rate production. Information about these baselines is reported to Congress in a standardized format through Selected Acquisition Reports. On a periodic basis, programs update the information in these reports by comparing the latest cost, schedule, and performance estimates against the initial estimates and providing explanations for any major deviations. Establishing reliable cost and schedule estimates is a best practice; we have found that such estimates go hand in hand as fundamental management tools that can help all government programs use public funds effectively. Further, as we demonstrate each year in special annual reports assessing DOD’s acquisition of selected weapon programs, and in related testimonies before congressional committees, regular comparison of program cost, schedule, and performance against initial estimates is an essential oversight tool. Such data, when maintained and reported on a regular basis, help the decisionmakers who oversee program progress understand the significance of any increases or decreases in cost or schedule as a program evolves, provide transparency, and give Congress and the Office of the Secretary of Defense a mechanism to hold the program accountable for its intended results.
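To illustrate the kind of baseline comparison a Selected Acquisition Report enables, the sketch below flags deviations of current estimates from initial baseline estimates. The effort names, dollar figures, and the 15 percent flag threshold are purely illustrative assumptions; they are not drawn from the Patriot program or from DOD reporting rules.

```python
# Illustrative baseline-deviation check in the spirit of Selected
# Acquisition Report comparisons. All figures and the 15 percent
# threshold below are hypothetical, not actual Patriot or DOD values.

THRESHOLD = 0.15  # flag deviations above 15 percent (illustrative choice)

baselines = {
    # upgrade effort: (baseline estimate $M, current estimate $M)
    "software build A": (450.0, 540.0),
    "hardware mod B":   (300.0, 310.0),
}

for name, (baseline, current) in baselines.items():
    deviation = (current - baseline) / baseline
    status = ("MAJOR DEVIATION - explanation required"
              if abs(deviation) > THRESHOLD else "within threshold")
    print(f"{name}: {deviation:+.1%} ({status})")
```

Reported annually, this kind of comparison is what lets decisionmakers see whether an upgrade effort is holding to its initial estimates or drifting from them.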
As we reported in our March 2016 assessment, programs that do not uniformly implement these and other best practices tend to realize significant cost growth and delays in delivering needed capabilities. Army officials explained that the existing Patriot program’s 2002 acquisition strategy provided approval for the Army to execute Patriot upgrades as part of this program, which was defined as an MDAP, and the Office of the Secretary of Defense had no objection. However, the requirement for MDAPs to continue submitting Selected Acquisition Reports ceases after 90 percent of the program’s items are delivered or 90 percent of planned expenditures under the program have been made. The Patriot program submitted its final Selected Acquisition Report in 2004, when the program was considered more than 90 percent complete. Absent the requirement to do so, the program has not provided decisionmakers with similar information. As a result, there has been no mechanism for DOD and congressional decisionmakers to monitor performance of the approximately $1 billion spent on Patriot upgrades since 2013 and to ensure that efforts have resulted in progress toward meeting the program’s goals. While it would not be productive for DOD to go back and track cost or schedule changes from the start of the Patriot upgrade efforts (see appendix III), in the event that upcoming operational tests reveal the need for further development of PDB-8 and PDB-8.1 and other near- and mid-term upgrades tested along with that software, the department will have an opportunity to provide increased oversight of those upgrades. As noted above, DOD already plans to define the long-term LTAMD sensor solution as a separate MDAP, which indicates the program would be subject to the oversight requirements applicable to MDAPs, such as those discussed above. Without estimates of the cost and schedule needed to complete the development of upgrades for essential Patriot capabilities, similar to those generally required of new major defense acquisition programs, DOD and congressional decisionmakers will lack an essential oversight tool. In addition, unless, at the same time, DOD provides Congress with an estimate of the amount of development costs it has incurred since 2013 for near- and mid-term Patriot upgrades operationally tested along with PDB-8 and PDB-8.1, Congress will not have a basis from which to understand the significance of any increases or decreases as the program evolves. Finally, without annual reporting mechanisms that enable comparisons between subsequent cost and schedule estimates and initial estimates, along with periodic explanations for any major cost or schedule deviations, Congress will lack critical information it needs to evaluate future program budget requests. The Army selected a plan to synchronize its fielding of upgraded versions of the Patriot system during its transition to the Integrated Air and Missile Defense Battle Command System (IBCS) that allows it to meet operational demands. Integrating Patriot battalions with IBCS can provide organizational and personnel flexibility in the future. However, the process of fielding these upgrades over the course of the 8-year transition to IBCS amplifies some of the challenges the Army is already facing with training complexity and maintenance schedules for the Patriot system. The Army is taking steps to mitigate these challenges. The Army has a plan for fielding modernized Patriots to Combatant Commands.
The process of modernizing a Patriot battalion—transitioning it from its current PDB-7 software version to launchers and radars integrated with IBCS—involves two phases. The first phase requires the battalion to be upgraded to the PDB-8 software version. Once the battalion receives PDB-8, it is ready for phase 2, which consists of a second software update to integrate the system components with IBCS. In some cases, a battalion can undergo both modernization phases consecutively, but in other cases a battalion can complete phase 1 and then wait a number of years to complete phase 2. The fielding plan the Army selected completes phase 2 of integrating battalions into IBCS at a rate of approximately two Patriot battalions per year. By fiscal year 2022, the Army plans to have completed phase 1 for all 15 battalions, with 9 battalions completing phase 2 and becoming IBCS compatible. IBCS integration continues through fiscal year 2025, as seen in figure 8. To synchronize fielding with testing, the Army removed a Patriot battalion from the operational deployment rotation and assigned it solely to modernization testing. Army officials told us this is a key enabler of the fielding strategy—without it, the plan becomes unworkable. Specifically, the amount of time required to begin and complete IBCS integration testing exceeds the amount of time that any one Patriot battalion is available to perform that testing. Therefore, the Army would have to start with one battalion and complete the testing with a second battalion—which would add an extra 6 to 9 months to train the second battalion on how to use the new equipment. After completing the United States/North Atlantic Treaty Organization mission in Turkey, the Army was able to adjust its Patriot unit rotation schedule, which enabled the Army to assign a battalion to support Patriot modernization testing. The battalion’s test assignment began in April 2016, and the Army plans to keep the battalion solely for testing into fiscal year 2018. Army officials also told us that the Vice Chief of Staff of the Army recently approved increased funding for the Army Air and Missile Defense test detachment to increase its manning from 35 to over 140. Increasing the size of the detachment will allow the Patriot test battalion to rejoin the operational rotation in fiscal year 2018, providing the combatant commands with more available Patriot battalions. The Army considered four alternative plans for how and when to field these two phases of modernization to the 15 Patriot battalions. The baseline plan would have upgraded three battalions per year to PDB-8 and one per year to IBCS. Another plan would have upgraded two or three battalions per year to PDB-8 and two per year to IBCS, while focusing on upgrading units in Europe first. A third plan would have upgraded three battalions per year to PDB-8 and two to IBCS, and would have upgraded units in the Pacific first. A fourth alternative, which the Army selected, completes phase 1 of the upgrades by fiscal year 2022 for the nine Patriot battalions that are not being upgraded directly to IBCS compatibility and completes phase 2 of the upgrades to make all 15 Patriot battalions IBCS compatible by 2025. The Army prioritized meeting training requirements and operational demands when selecting its plan for completing Patriot modernization efforts. The Army used five criteria to evaluate the four alternative plans.
The Army’s evaluation criteria included maximizing the number of Patriot battalions available at any given time to support operations, maintaining the same software version for all Patriot battalions under a particular brigade to make training consistent, and meeting these and other competing needs within funding constraints. Table 9 below provides a description of each criterion, the weighting the Army assigned to it, and how well the plan the Army selected optimized the criteria. Based on the Army’s analysis, the selected plan did the best job of balancing all of the key considerations reflected in these criteria. Army officials told us that moving Patriot to IBCS provides benefits in meeting Combatant Command operational needs more flexibly because the system can be reorganized so that it no longer has to be deployed as a complete battalion. IBCS-compatible Patriot components can be deployed as individual radars and launchers, networked through IBCS. Army officials told us that instead of having 15 Patriot battalions, the Army will have 60 fire units’ worth of radars and launchers that can be deployed more flexibly to meet combatant command operational demands. Transitioning Patriot to IBCS compatibility can potentially lead to organizational changes that reduce the number of personnel required to operate and maintain the radars and launchers. The Army plans to use this streamlined organizational structure as an opportunity to create a more even distribution of tasks. As part of its findings during the PDB-7 limited user test, DOT&E reported that Patriot personnel currently performing the job of operator/maintainers are required to perform many complex tasks, resulting in poor operator performance. Army officials told us they expect to realign the current number of personnel specialties within Patriot from nine specialties down to four. In addition, these specialties will no longer be Patriot specific—rather, they will cut across the integrated air and missile defense community, allowing the Army to address some challenges with the relatively low number of personnel in some specialties. Army officials told us that the realignment would also allow the Army to alter the skillset of personnel who are currently operators/maintainers of the equipment into purely operators, while maintainers would take on some additional responsibilities. Further, by 2025 the Army plans for current Patriot operators and maintainers to maintain and operate a variety of Army air and missile defense systems, as opposed to being assigned solely to Patriot. Migrating Patriot to IBCS amplifies training challenges by adding new requirements to the Army’s Patriot training schedule. Further, for a period of time the Army will be training personnel on three different versions of Patriot—PDB-7, PDB-8, and IBCS. Army officials told us that due to the high deployment frequency of the Patriot force, the current training schedule does not completely prepare Patriot operators on all tasks before deployments. To address this, the Army revised the training certification progression so that high-priority training is completed before deployment, and less important training can occur after deployment. However, to prepare for the transition to IBCS, the warfighter requires additional training on how to effectively operate the equipment in an airspace complicated by data from multiple sensors.
This increasingly complex training required of Patriot operators could cause further issues with the Patriot training schedule in the future. Over the long term, officials told us that the Army plans to address some of these challenges by updating the training certification program to align with the changes to the Patriot system (for instance, more emphasis on joint operations) and by adding more advanced certification levels that would include skills not currently included as part of the certification process. While Army officials told us they are in the initial stages of implementing changes to the training program, they expect it to be implemented by 2025, when the Army completes the transition of all Patriot units to IBCS. The modernization fielding plan the Army is pursuing also poses a near- to mid-term maintenance challenge. The Army currently plans to perform comprehensive maintenance on only one Patriot battalion per year through fiscal year 2021 in order for battalions to be available for modernization, training, and operations. However, Army officials told us they will not be able to complete maintenance on all 15 Patriot battalions within the expected 10-year life cycle at that rate. As a result, officials confirmed that the Army is assuming an elevated risk of equipment breakdown. To mitigate this challenge in the short term, the Army is performing less comprehensive maintenance after every deployment and maintaining a sizable inventory of spares for those parts that have high failure rates. As more Patriot battalions become IBCS-compatible, the Army is considering ways to schedule comprehensive maintenance on more than one battalion per year. However, the officials were unsure if they would be able to have two battalions’ worth of equipment offline for maintenance and still have enough availability to meet training and operational demands. The Army regularly coordinates on the status of doctrine, organization, training, materiel, leadership, personnel, and facilities implications of Patriot’s transition to IBCS through the use of quarterly transformation summits. These summits are internal meetings that include decisionmakers from all of the key domains within the Army that need to synchronize on integrated air and missile defense issues, including training, doctrine, leader development, and facilities. Briefings from these summits show that Army officials discuss modernization and maintenance schedules, training strategy, and facility needs, among other topics. Army officials told us that as a result of these meetings, the Army decided to alter the Patriot deployment duration from 12 months to 9 months, concluding that this change would have a minimal impact on the modernization and training schedules while providing the same operational support to combatant commands. In implementing the deployment duration change, the Army will keep five battalions over the next 5 years on the 12-month deployment schedule, while all other Patriot deployments will last for 9 months. Army officials said that this variation was necessary in order to allow enough time for other Patriot battalion modernization, testing, and training to occur—information they were aware of because of the summit discussions. As a cornerstone of the Army’s air and missile defense architecture, the Patriot system is deployed worldwide in defense of the United States and its allies.
The program faces multiple challenges to overcome the obsolescence of a system that has been fielded for decades, improve capabilities to address ever-evolving threats, and complete its transition from a stand-alone system to an integrated component of the Army’s Integrated Air and Missile Defense. The Army has spent approximately $1.1 billion since 2013 to develop and procure Patriot upgrades and has requested another $1.8 billion, which includes funding for a long-term radar solution, over the next 5 years. A modernization program of this magnitude and complexity demands high-level oversight to ensure that the upgrades are completed on time and within planned cost and that they provide the intended capabilities. In the long term, the Patriot system will no longer be Patriot as we know it but will be broken down into its major components—a radar, a launcher, and a missile—integrated with the Army’s Integrated Air and Missile Defense system of systems. Of the three remaining components, the Army has already defined the missile as a separate major defense acquisition program and currently plans to do the same for the LTAMD sensor solution, which accounts for $364 million of the requested $1.8 billion over the next 5 years. Continuing to separately manage and track progress for these components should help provide Congress with the oversight and accountability it needs to make important investment decisions. Although the Army estimated in 2013 that costs for Patriot upgrades would meet the threshold to be considered a major defense acquisition program (MDAP), the Army chose to incorporate the Patriot upgrade efforts into the existing Patriot program, which made certain oversight mechanisms inapplicable. The Army would have put itself in a much better position to oversee its Patriot upgrade efforts had it decided in 2013 to manage Patriot upgrades as a separate major defense acquisition program. Should operational testing for PDB-8 and PDB-8.1 reveal performance shortfalls in the near- and mid-term upgrades tested, the additional development required could present an opportunity for DOD to provide a level of oversight and accountability not yet seen in the Patriot upgrade efforts. Beginning any additional development with cost, schedule, and performance estimates—informed by an estimate of the amount of development costs the upgrade effort has incurred since 2013—would provide DOD and congressional decisionmakers an essential oversight tool, particularly when considering future budget requests. Further, regular comparisons of program cost, schedule, and performance against initial estimates enhance decisionmakers’ understanding of the significance of any increases or decreases in cost or schedule as a program evolves. In the event that operational test results for PDB-8 and PDB-8.1 reveal performance shortfalls that require additional development of the near- and mid-term upgrades tested, we recommend that the Secretary of Defense direct the Secretary of the Army to establish mechanisms for overseeing those upgrades commensurate with other major defense acquisition programs, to include:
1. An initial report—similar to a Selected Acquisition Report—as soon as practical following operational testing for both PDB-8 and PDB-8.1, on the near- and mid-term upgrades evaluated during these tests, including: cost, schedule, and performance estimates for any additional development that is needed; and an estimate of the amount of development costs it has incurred since 2013 for near- and mid-term Patriot upgrades operationally tested along with PDB-8 and PDB-8.1. 2. Annual updates to Congress comparing the latest cost and schedule estimates against the initial estimates and providing explanations for any major deviations until development is complete. We provided a draft of this report to DOD for comment. DOD provided us with written comments, which are reprinted in appendix IV. DOD also provided technical comments, which were incorporated as appropriate. DOD partially concurred with our recommendations to provide an initial report—similar to a Selected Acquisition Report—and to provide annual updates to Congress in an effort to establish oversight mechanisms commensurate with other major defense acquisition programs for upgrades operationally tested with PDB-8 and PDB-8.1 in the event that operational test results reveal performance shortfalls that require additional development. In its response, DOD stated that system software updates currently being performed for Patriot, such as PDB-8 and PDB-8.1, will cease, with updates transitioning to IBCS. It also noted that future post deployment build updates will be developed and tested for IBCS as part of the Army’s IAMD program, which is subject to acquisition oversight and reporting required by law and regulation. Further, DOD noted that future development and testing of the LTAMD sensor will also be subject to acquisition oversight and reporting required by law and regulation. DOD stated that using existing oversight and reporting mechanisms for these major defense acquisition programs would more accurately reflect the development program and is more appropriate than introducing additional nonstandard reports. DOD’s response focuses on tracking and reporting progress on other MDAPs without clarifying how, or whether, it will track progress on current PDB-8 and PDB-8.1 efforts. The IAMD program has already established its planned content in a baseline, and details for the LTAMD sensor program are still being determined. Regardless, tracking and reporting progress on the preexisting IAMD program or the future LTAMD sensor development program will not provide Congress with oversight and accountability for the outcomes of current work on the near- and mid-term upgrades tested with PDB-8 and PDB-8.1. As such, we maintain our position that the Secretary of Defense should take the recommended actions to direct the Army to establish mechanisms for overseeing any additional work on those upgrades commensurate with other major defense acquisition programs, by providing an initial report that is similar to a Selected Acquisition Report and annual updates to Congress that compare the latest cost and schedule estimates against the initial estimates for PDB-8 and PDB-8.1 upgrades. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. The report is also available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine the current status of the Patriot system’s performance and the extent to which it addresses warfighter needs, we did the following: 1. To determine the current status of the Patriot system’s performance, we reviewed briefings from the Lower Tier Project Office in Huntsville, AL, and from the Capabilities Development and Integration Directorate at Fort Sill in Lawton, OK, on the current system’s performance specifications. To determine the extent to which the current version is meeting its performance requirements, we reviewed 2013 limited user test results from the Director, Operational Test and Evaluation (DOT&E) to see how well the Patriot system performed against its performance parameters as defined in the capabilities development and production requirements documents. In addition, we obtained the Patriot’s Post Deployment Build-7 (PDB-7) conditional materiel release “get well” plans, which outline the performance shortfalls of PDB-7 that need to be mitigated. We also discussed these shortfalls with officials from DOT&E in Arlington, VA; the Army Test and Evaluation Command at Fort Bliss in El Paso, TX, who conducted the PDB-7 limited user test; the Lower Tier Project Office; and the Capabilities Development and Integration Directorate. 2. To determine the extent to which the current version of the Patriot system is meeting warfighter needs to address the growing threat, we reviewed warfighter operational needs statements, which document requests from the warfighter to the Army for urgent, real-time Patriot capabilities and other needed upgrades. We assessed the reliability of the currently open Patriot-related operational needs statements from 2013 by comparing the list of operational needs statements obtained from the Capabilities Development and Integration Directorate to those received from the Capabilities Integration Division of the Department of the Army Military Operations in Arlington, VA. Based on our review of the data and interviews with officials at both locations, we determined that the data were sufficiently reliable for the purposes of our reporting objectives. We also held discussions with these officials about the unfulfilled operational needs statements and the Army’s plan for addressing them. In addition, we interviewed combatant command officials from the Pacific Command in Honolulu, HI; the European Command in Stuttgart, Germany; and the Central Command in Tampa, FL, to obtain views on Patriot performance needs from various combatant commands. To assess the extent to which the Patriot system upgrades will address capability needs and describe the cost, schedule, and testing plans associated with those upgrades, we did the following: 1. To determine the various Patriot capability needs, we began by reviewing the validated air and missile defense capability gaps, which the program used as a foundation for its 2013 requirements documents. We examined the Patriot-related gaps listed in the 2011 Army Functional Concept for Fires Capability-Based Need Assessment Functional Needs Analysis and Functional Solution Analysis reports. Based on our analysis of these documents and additional Army briefings and plans, we identified a selection of high-priority critical air and missile defense gaps that were related to the Patriot program.
We also reviewed requirements in the Patriot Increment 3 Capability Development Document related to training, obsolescence, and sustainment and discussed these requirements with Army officials at the Lower Tier Project Office and the Air Defense Artillery School at Fort Sill in Lawton, OK. 2. To understand the evolving threat and how it is driving capability needs for the Patriot system, we reviewed the 2011 and 2015 System Threat Assessment Reports and discussed the Patriot-related threat assessment findings with officials from the Missile and Space Intelligence Center in Huntsville, AL, and the Capabilities Development and Integration Directorate. 3. To describe the cost, schedule, and testing plans for the Patriot upgrades, we obtained and analyzed detailed cost data derived from program budgets, program schedules for testing and fielding, and test and evaluation master plans. We discussed these plans with officials from DOT&E, the Capabilities Development and Integration Directorate, and the Lower Tier Project Office. We focused our cost review on two Patriot program budget lines, which detail the U.S. contribution to development and procurement costs for planned upgrades, and a third budget line providing initial development funding for the Lower Tier Air and Missile Defense (LTAMD) sensor solution. Planned costs for fiscal years 2017 through 2021 are based on detailed Army planning budget data supporting the President’s budget for fiscal year 2017. We deflated these budget numbers to base year 2017 dollars; a brief illustrative sketch of this conversion follows this methodology discussion. 4. To determine the extent to which planned upgrades will address capability needs, we obtained detailed information from Capabilities Development and Integration Directorate officials mapping each of the planned upgrades to the capability need it is intended to help address. We also obtained and reviewed the schedule and scope of planned operational testing in the System Evaluation Plan to determine when the upgrades would be evaluated. Further, we reviewed the scope of the analysis of alternatives currently underway to determine what capability needs the radar and launcher alternatives being considered are intended to address and discussed these needs with Army officials from the Capabilities Development and Integration Directorate and the Lower Tier Project Office. To determine the level of oversight and accountability provided for the upgrades, we obtained information from Army officials regarding how and why the upgrades were executed under the existing Patriot program. We reviewed prior legislation and related reports since 2012 to understand Congress’s concerns about oversight and accountability for the latest Patriot upgrades. We then reviewed DOD guidance documents and briefings to determine the level of oversight planned for the long-term radar solution. We also reviewed DOD acquisition regulations and related statutes to determine the typical requirements for facilitating congressional oversight and accountability of major defense acquisition programs. To assess the extent to which the Army’s plan for fielding modernized Patriots synchronizes with training schedules and operational demands, we analyzed the Army’s fielding plan as well as operational and training schedules. We also interviewed knowledgeable Army officials to discuss how the fielding plan was chosen, the benefits and challenges associated with the chosen plan, and any effects of the plan on operations, personnel, doctrine, organization, testing, and training.
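The deflation step described in item 3 above can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not GAO’s actual computation; the deflator index values and the sample amount are hypothetical placeholders.

    # Minimal sketch: converting then-year budget figures to constant base
    # year 2017 dollars with a price deflator index. The deflator values
    # below are hypothetical placeholders, not actual DOD or GAO indexes.
    DEFLATORS = {2017: 1.000, 2018: 1.020, 2019: 1.041, 2020: 1.062, 2021: 1.083}

    def to_base_year_2017(amount_millions, year):
        """Convert a then-year amount (in millions) to fiscal year 2017 dollars."""
        return amount_millions / DEFLATORS[year]

    # Example: a hypothetical $100 million request in fiscal year 2020 is
    # worth about $94.2 million in constant fiscal year 2017 dollars.
    print(round(to_base_year_2017(100.0, 2020), 1))

Dividing by the deflator removes projected inflation, so amounts planned in different fiscal years can be compared on a common basis.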
To assess the extent to which DOD’s guidance for conducting its LTAMD analysis of alternatives (AOA) meets GAO best practices, we obtained Department of Defense AOA guidance documents. These documents consist of a directive from Army Headquarters directing the Army Training and Doctrine Command Analysis Center to conduct the LTAMD AOA study, a study plan developed by the Army Training and Doctrine Command Analysis Center, and guidance from the Office of the Secretary of Defense for Cost Assessment and Program Evaluation (CAPE). We compared the processes outlined in the guidance documents to the 22 best practices GAO identified in GAO-16-22. We also met with officials from CAPE to discuss GAO best practice processes that were not documented in the guidance documents and supplemented our analysis with some of this information. We used a five-point scoring system to evaluate how well the LTAMD AOA guidance documents conformed to each of the 22 best practices. We then used the average of the scores for the best practices under each of the four characteristics—well-documented, comprehensive, unbiased, and credible—to determine an overall score for each characteristic. The results of GAO’s analysis underwent four separate levels of internal review to ensure accuracy, and the scores were cross-checked throughout the analysis for consistency. In addition, we provided the initial results of our analysis to officials in CAPE and the Army Training and Doctrine Command Analysis Center for review and received technical comments, which we incorporated, as appropriate, into our final analysis. To characterize our final results, if the average score for each characteristic was “met” or “substantially met,” we concluded that the AOA process conformed to best practices and could therefore be considered reliable. We conducted this performance audit from June 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our review of the Patriot system, we assessed the extent to which the Department of Defense’s (DOD) guidance for conducting its Lower Tier Air and Missile Defense (LTAMD) analysis of alternatives (AOA), which is evaluating materiel modernization solutions for the current Patriot radar and launcher for use with the Integrated Air and Missile Defense (IAMD) Battle Command System (IBCS), meets GAO best practices, and we found that the guidance documents substantially met GAO standards to be considered reliable. We compared the processes outlined in the LTAMD AOA guidance documents to GAO best practices because the LTAMD AOA report was not available at the time of our review. The LTAMD AOA guidance documents provide the AOA study team with a high-level roadmap for how to conduct the LTAMD AOA by outlining processes to identify and select the alternatives, metrics, models, and scenarios for use throughout the AOA process. While we cannot make conclusions about the final AOA report until it is finalized and released, by comparing the processes described in the LTAMD AOA guidance documents to the 22 GAO best practices, we can make conclusions on the quality of the processes used to develop it.
If the processes are of high quality, then the AOA study team has a good roadmap, which, if followed, could produce a high-quality, reliable AOA. Based on our analysis, the LTAMD AOA process described in its guidance met or substantially met the criteria to be considered well-documented, comprehensive, unbiased, and credible. While we found that the LTAMD AOA guidance documents met or substantially met 18 of the 22 best practices GAO established for the AOA process to be considered reliable, our review also found that, contrary to GAO best practices, the final AOA report will not select a preferred solution. Specifically, the LTAMD AOA guidance did not instruct the study team to assign relative importance to the criteria that are used to compare the options or to select a preferred solution for a modernized radar and launcher as part of the final AOA report. According to CAPE officials involved in the LTAMD AOA efforts, the purpose of this AOA is to provide an analytic comparison of the options based on the criteria but to then allow external decisionmakers to determine the relative importance of each criterion and derive their own preferred solution. CAPE’s position is that GAO’s best practice of assigning relative importance to criteria is not appropriate for strategic investment decisions such as this. In contrast, GAO best practices recommend that solutions be compared based on pre-established criteria that reflect the relative importance of the criteria, because not reflecting their relative importance up front can oversimplify results and potentially mask important information, leading to an uninformed decision. In addition, GAO best practices state that a preferred alternative should be identified and a rationale for that decision be included as part of an AOA report. While a recommended solution in the AOA report does not have to be binding, without one, decisionmakers outside of the AOA process may misinterpret the analysis within the AOA report and potentially come to a biased decision. In October 2015, GAO identified 22 best practices to provide a framework for conducting an AOA and help ensure that entities consistently and reliably select a preferred solution that best meets mission needs. To identify a high-quality, reliable AOA process, GAO grouped the 22 best practices under four characteristics. These characteristics evaluate whether the AOA process is well-documented, comprehensive, unbiased, and credible. “Well-documented” means that the AOA process is thoroughly described in a single document, including all source data, has clearly detailed methodologies, calculations, and results, and that selection criteria are explained. “Comprehensive” means that the AOA process ensures that the mission need is defined in a way to allow for a robust set of alternatives, that no alternatives are omitted, and that each alternative is examined thoroughly for the project’s entire life cycle. “Unbiased” means that the AOA process does not have a predisposition toward one alternative or another; it is based on traceable and verifiable information. “Credible” means that the AOA process thoroughly discusses the limitations of the analyses resulting from the uncertainty that surrounds both the data and the assumptions for each alternative. Table 10 provides an explanation of how individual best practices are grouped under each characteristic. Overall, the DOD’s LTAMD AOA guidance documents met or substantially met the four characteristics of a high-quality and reliable AOA process.
To make this determination, we reviewed and scored how well the guidance documents addressed each of the 22 best practices. We scored the 22 best practices using a five-point system as follows: “met” means the LTAMD AOA guidance documentation demonstrated that it completely met the best practice; “substantially met” means that it met a large portion of the best practice; “partially met” means that it met about half of the best practice; “minimally met” means that it met a small portion of the best practice; and “did not meet” means that it did not meet the best practice. We found that the LTAMD AOA guidance documents met or substantially met 18 of the 22 best practices. We then took the average of those best practice scores that aligned with each of the four characteristics, as shown above in Table 10, to derive a final score for each characteristic (an illustrative sketch of this roll-up appears at the end of this section). Table 11 provides the average score of the best practices under each characteristic. The Patriot program has made notable progress in the development and procurement of near and mid-term upgrades since the upgrade efforts began in 2013. Up to this point, significant costs for development and procurement have already been incurred, costs and activities are winding down, and the program plans to release the first of two major hardware and software upgrades next year. In sum, the Army has spent about $1.1 billion of the $2.9 billion planned between fiscal years 2013 and 2021 to address Patriot capability needs, as seen in figure 9. Of the $1.8 billion currently planned between fiscal years 2017 and 2021, $645 million is for development. Of those development funds, the majority, $364 million, is allotted to developing the future radar solution, the Lower Tier Air and Missile Defense (LTAMD) sensor, which the Army currently plans to define as a separate major defense acquisition program (MDAP). Further, of the $645 million in development, only about $280 million is currently planned between fiscal years 2017 and 2021 for developing software and hardware upgrades. The program has already spent about $210 million for the development of near and mid-term software and hardware upgrades between fiscal years 2013 and 2016. Aside from the future radar development, there does not appear to be a new wave of development activities beginning in the near future. Funding for PDB-8 was already completed in fiscal year 2016, with fielding planned for fiscal year 2017. Further, as seen in figure 10, costs planned for software development appear to be tapering off toward the end of the Future Years Defense Program in fiscal year 2021, when the program expects to release PDB-8.1. Near-term and mid-term upgrade procurement activities also appear to be winding down. Most of the defined hardware upgrades are already in production. Further, many of these upgrades were already mature, with relatively little being spent on hardware development for the purposes of adapting them for Patriot or maximizing their benefit to the system. Although the program is still planning to spend $1.15 billion in procurement between fiscal years 2017 and 2021, which includes ongoing upgrades to address obsolescence issues, six of the nine near-term and mid-term hardware upgrades and supporting equipment have begun production, as seen in figure 11. Lastly, costs planned for procurement upgrades appear to be tapering off toward the end of the Future Years Defense Program in fiscal year 2021, as seen in figure 12.
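As noted earlier, the scoring roll-up used in the AOA assessment can be illustrated with a short sketch. The following Python fragment is a minimal illustration under our own assumptions; the numeric mapping of the five-point scale and the sample ratings are hypothetical, not GAO’s actual scores.

    # Minimal sketch of the scoring roll-up: map each rating to a number,
    # average the ratings grouped under one characteristic, and apply the
    # "met or substantially met" reliability threshold described above.
    SCALE = {"met": 5, "substantially met": 4, "partially met": 3,
             "minimally met": 2, "did not meet": 1}

    # Hypothetical ratings for the best practices grouped under one
    # characteristic (e.g., "credible").
    ratings = ["met", "substantially met", "met", "substantially met"]

    def characteristic_score(group):
        """Average the numeric scores of the best practices in one group."""
        return sum(SCALE[r] for r in group) / len(group)

    score = characteristic_score(ratings)
    verdict = "reliable" if score >= SCALE["substantially met"] else "not reliable"
    print(score, verdict)  # prints: 4.5 reliable

Averaging within each characteristic, rather than across all 22 practices at once, keeps a weak score in one area from being hidden by strong scores elsewhere.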
Currently, funds planned to continue beyond fiscal year 2021 are for ongoing upgrades to address obsolescence issues, for completing the purchase of launcher modifications, and for continuing investments in training upgrades. In addition to the contact named above, LaTonya D. Miller, Assistant Director; Kevin L. O’Neill; James P. Haynes II; Meredith Allen Kimmett; Randy F. Neice; Jenny Shinn; David L. Richards; Jennifer V. Leotta; Karen A. Richey; Alyssa B. Weir; Katherine Shea Lenane; Stephanie M. Gustafson; Oziel A. Trevino; and Joseph W. Kirschbaum made key contributions to this report.
Patriot is a mobile Army surface-to-air missile system deployed worldwide to defend critical assets and forces. To respond to emerging threats and address a diverse set of capability needs, the Army has spent nearly $1.1 billion and requested $1.8 billion over the next 5 years to upgrade Patriot, begin developing a long-term radar solution, and integrate Patriot components into a central network and command and control system—the Integrated Air and Missile Defense. A House report included a provision for GAO to assess, among other things, the status of the Patriot system and the Army's strategy for completing the upgrades. Among other things, this report examines (1) the extent to which the latest upgrades will address Patriot capability needs and (2) the level of oversight and accountability provided for the upgrade efforts. To conduct this review, GAO examined Army and program documents including test plans and schedules. GAO also interviewed Department of Defense (DOD) and other relevant officials. While the currently fielded version of the Army's Patriot surface-to-air missile system is an improvement over prior versions, the Army currently plans to spend about $2.9 billion between fiscal years 2013 and 2021 on an upgrade strategy to address a variety of capability needs. These efforts are intended to improve the system's performance, reliability, and communications as well as address obsolescence and sustainment issues. The figure below shows planned costs for ongoing efforts, near-term upgrades which begin fielding prior to fiscal year 2017, mid-term upgrades which begin fielding between fiscal years 2017 and 2021, and long-term upgrades—including a long-term radar solution. Key among the mid-term efforts are major software upgrades called Post Deployment Build-8 (PDB-8) and PDB-8.1, which are intended to improve communications and system capabilities against threats. The Army plans to begin operational testing for PDB-8 and PDB-8.1 in fiscal years 2016 and 2019, respectively. These testing results will reveal the extent to which the near and mid-term upgrades work as intended. Although the Army estimated in 2013 that costs for Patriot upgrades would meet the threshold to be considered a major defense acquisition program (MDAP), the Army chose to incorporate the Patriot upgrade efforts into the existing Patriot program which made certain oversight mechanisms inapplicable. Further, it decided not to put a mechanism in place to track or report the upgrades' progress against initial cost, schedule, or performance estimates, similar to those generally required of MDAPs, which GAO considers essential for program oversight. Operational testing for PDB-8 and PDB-8.1 provides the Army with an opportunity to increase oversight. If performance shortfalls indicate a need for further development, the Army will have an opportunity to track progress on these upgrades to provide the oversight tools decisionmakers need to make important investment decisions. GAO recommends that the Secretary of Defense direct the Army to establish oversight mechanisms, similar to those for major defense acquisition programs, if additional development is required for upgrades operationally tested with PDB-8 and PDB-8.1. DOD partially concurred, focusing its response on plans to track other MDAPs, but did not clarify how or if it would track current PDB-8 and PDB-8.1 progress. GAO maintains DOD should provide oversight for any additional PDB-8 and PDB-8.1 development.
In October 1996, GSA acquired a 13.18-acre site, immediately adjacent to the College Park Metrorail station, in College Park, MD, specifically for the new CFSAN facility at a cost of $4 million. Part of the site is located on a Prince George’s County-designated floodplain. GSA has subsequently demolished a building that was on the site when it was acquired. When the CFSAN facility is completed, it is to have four stories above ground and a basement, with about 410,000 square feet of office, laboratory, and support space. The building is scheduled to be ready for occupancy in October 2001. The total cost to design and construct the building, including the cost of the land, is estimated to be about $86 million. The Federal Emergency Management Agency (FEMA) is the federal agency responsible for floodplain management. FEMA has promulgated regulations with floodplain management criteria to be used by state and local governments. In the state of Maryland, the Department of the Environment (MDE) is responsible for floodplain management. As part of FDA’s consolidation of its programs in the Washington, D.C. metropolitan area, FDA is to vacate the federal office building at 200 C Street, SW, Washington, D.C. FDA plans to decommission the laboratories in the building—clear the hazardous chemical residue—prior to turning the space back to GSA for reassignment. The Architect of the Capitol has expressed some interest in this building for congressional use. To determine GSA’s authority to construct a new CFSAN facility for FDA, we reviewed the legislation that authorized the Secretary of Health and Human Services and the Administrator of GSA to consolidate FDA facilities and reviewed subsequent legislation relating to the appropriation of funds for the project. To determine whether the project had met the state of Maryland requirements for building the facility on a floodplain, we reviewed actions taken by GSA and the project’s design consultants to obtain the needed construction authorizations from MDE. We also reviewed the floodplain studies prepared specifically for this project to show the effect the project would have on the floodplain. To determine whether FDA planned to place computers in the basement of the new building, we interviewed GSA and FDA personnel involved in the project. To determine whether steps have been taken to mitigate the risks involved in placing computers in the basement and who was involved in making the decision to place the computers in the basement of the building, we reviewed project documents and interviewed GSA and FDA project management officials, FDA managers responsible for information management resources, and representatives of the firms responsible for designing the new facility. We also obtained from the design consultants a detailed description of the features incorporated into the design of the new facility to protect the building from external flood waters. We reviewed the final construction drawings and construction specifications for the building to assure ourselves that the features described to us had been incorporated into the design of the building. We also visited the construction site to view the constructed basement slab and walls and systems and equipment being installed to mitigate the risk of water entering the building, to confirm that the building will have some of the features described to us. We did our review from May through September 1999, in accordance with generally accepted government auditing standards.
We requested comments on a draft of this report from GSA’s Administrator and FDA’s Commissioner. In 1990, the Food and Drug Administration Revitalization Act became law. The act authorized the Secretary of Health and Human Services (HHS) and the Administrator of General Services to enter into contracts for the design, construction, and operation of a consolidated FDA administrative and laboratory facility. In addition, the FDA Revitalization Act expressly authorized the appropriation of $100 million for the project in fiscal year 1991 and authorized the appropriation of such funds as may be necessary for subsequent fiscal years. GSA’s fiscal year 1991 appropriations act did not include funds for this project. GSA’s fiscal year 1992 appropriations act appropriated $200 million for consolidation, site acquisition, planning and design, and construction of new FDA facilities in Montgomery and Prince George’s Counties, MD. The Conference Report accompanying GSA’s appropriations act stated that the conferees provided these funds to begin the process of consolidating FDA from its existing 34 buildings and 11 locations to campuses in Maryland’s Montgomery and Prince George’s Counties. The report further stated that the president and Congress had expressed their support for this project by enacting the FDA Revitalization Act that specifically authorized construction of new administrative and laboratory facilities for FDA. The Conference Report accompanying the fiscal year 1992 appropriations act also contained language directing FDA, GSA, HHS, and the Office of Management and Budget (OMB) to submit a plan for the consolidation project’s future funding needs to the appropriations committees by no later than December 31, 1991. The Senate Appropriations Committee Report accompanying the Treasury, Postal Service, and General Government appropriations act for fiscal year 1993 stated that despite clear instructions, the administration had not expended any of the funds already appropriated, and no funding plan had been submitted as previously directed. The Committee stated that it strongly supported the project because, in addition to the inefficiencies resulting from being scattered among so many different buildings, many of FDA’s facilities were outmoded and obsolete, even hazardous. The FDA Revitalization Act and subsequent appropriations acts, particularly the act containing GSA’s appropriations for fiscal year 1992, authorized GSA to construct the FDA facility in College Park, MD. OMB approved a consolidation plan for the FDA headquarters programs on March 15, 1994. This plan called for CFSAN to be located in Prince George’s County. GSA officials knew the construction site was on a Prince George’s County-designated floodplain when they bought the land. Before purchasing the property, GSA hired an engineering firm to complete an environmental assessment and floodplain studies to determine the viability of constructing a facility on the site and the effect that the project would have on the site. The preliminary floodplain study, which focused on the viability of constructing on the site, concluded that constructing the proposed facility on the site would not increase the 100-year flood elevation; that the site was suitable for the proposed development; and that because the building would be in the very upper reaches of the watershed, the actual peak 100-year flood elevation would affect the building for only a short period of time before it receded. FEMA has not designated a floodplain on this site.
The project director for the firm that did the floodplain study for GSA told us that the County is more conservative in its floodplain designations than FEMA. He said that the County’s designations take into account the existing built environment and anticipated future developments—new construction—in the area when calculating the floodplain area, but FEMA takes into account only the existing built environment and open areas. Figure 1 shows the existing 100-year floodplain, as designated by Prince George’s County. The floodplain study contained the state of Maryland’s and the County’s requirements that would have to be met for construction on the floodplain. State and County regulations prohibit a project from increasing the 100-year flood elevations outside the project site. They also require that the lowest floor of any structure be at least 1 foot above the 100-year flood elevation, unless it is in the overall public interest for it to be otherwise. If a variance is granted and a basement is authorized, it must be waterproofed. The State and Prince George’s County required specific documentation before GSA could obtain approval to construct the foundation for the building. This documentation had to include the floodplain hydraulic calculations, the 100-year floodplain delineation, and evaluations of the impact of the construction on adjacent properties. The required documentation also had to show that the first floor will be at least 1 foot above the 100-year flood elevation; how the basement will be waterproofed; how sump pumps and other drainage systems were to be used; and that the building will be able to withstand the force of the water in event of a flood, i.e., it will not float up. After receiving the required documentation, MDE’s Water Management Administration issued an Authorization to Proceed (No. 97-NT-0711/199766248) on September 11, 1997, which permitted GSA to begin constructing the building foundation and to relocate existing utility lines. Citing the great public benefit from the project and the site constraints that prohibited a taller or wider building, MDE agreed to a variance permitting a basement below the 100-year flood elevation. However, MDE required that the basement be waterproofed to comply with MDE and FEMA regulations. The flood elevations for the existing and proposed conditions were compared to determine the effects of the project. This comparison shows that the proposed FDA development would not have any adverse impact on the flood elevations upstream of the site. The computations do indicate that a slight increase (0.1 foot) in the 2-year flood elevation will occur at the downstream end of the site. However, the computations indicate the increase would be dissipated before the upstream end of the site. The 10- and 100-year flood elevations will be slightly lower in some areas for proposed conditions, because the new building will be further away from the stream than the existing building. With the building further away from the stream, the area available to move the flow will increase, causing the flood elevations to decrease. MDE requested that the following three items be submitted to it for review and comment before it would authorize the superstructure of the facility to be built. Final HEC-2 backwater computations of the floodplain: these were to include all new changes made to the floodplain due to the new FDA facility. GSA advised us that this was submitted to MDE on March 2, 1999.
Structural design calculations of the basement walls and foundations: these were to show that the design took into account the additional hydrostatic forces that would result when high water tables are experienced. GSA advised us that the structural calculations were submitted to MDE on August 9, 1999. Two sets of final signed construction plans: these were to indicate what will be built on the site as well as what topographic changes will occur on the site. GSA advised us that these construction plans were submitted to MDE on August 27, 1999. On September 7, 1999, MDE approved the construction of the superstructure. Computer operations are to be housed in the basement of the CFSAN facility. There are to be between 15 and 20 servers located in the basement, along with other building support components—e.g., mechanical space, fitness center, health center, and laboratory storage. FDA officials informed us that after an exhaustive review of the related constraints, alternatives, and opportunities, the decision to locate the main computer room in the basement of the new facility was reached by consensus of the project team. This team consisted of the architect-engineering consultants and representatives of GSA and FDA. The FDA representatives were selected from FDA’s Division of Facilities Planning, Engineering and Safety and from CFSAN, which is to occupy the new facility. Every kind of data that CFSAN maintains could potentially be stored in these computers. This would include data relating to all CFSAN programs, such as premarket approval, research, industry surveillance, finance, personnel, and any other data generated and/or used by CFSAN. If the computers were damaged by a flood, FDA officials estimated that it would cost about $4 million to replace and install the computers, peripherals, network, and other computer-related equipment; load the software; and retrieve and restore the data. CFSAN currently backs up the data on its servers daily, with a copy transferred to an off-site storage area on a weekly basis. CFSAN officials told us that backup and off-site storage arrangements for the new facility will be developed that are appropriate to the nature of the systems installed, the data stored, and the risk factors at the time the new facility is occupied. FDA officials told us that no matter where the computer room is located, there is always the potential for water damage from internal sources. They believe that with the steps that have been taken by the design team to protect the building from an external flood, the likelihood of internal water damage (e.g., broken pipe, leak in the roof, or accidental fire sprinkler activation) will be greater than the likelihood of damage from a flood condition. The new facility will have several different, but complementary, systems to mitigate damage from water entering the building. It has been designed and is being constructed essentially as the hull of a ship, with the top of the basement wall and waterproofing extending to the floor slab of the first floor of the building, which is 1-1/2 feet above the 100-year floodplain level. Initially, the design team intended to construct a building of five stories above grade with no basement. However, as the design process evolved with involvement from the local communities, a height restriction of 84 feet was placed upon the site by the College Park-Riverdale Transit District Development Plan for the area surrounding the College Park Airport.
To accommodate this limitation, the building program had to include a basement. GSA and FDA officials told us that if they could have obtained a waiver from the height limitation and were able to build a five-story building, the main computer room would have been located on the first floor. However, because a basement was necessary, they felt the use of the basement space for support areas was consistent with common building design practices, met the needs of the program, and provided better control for the ambient temperature requirements of the computer room. The design team decided to give priority to window space for offices and laboratories. The design consultants and FDA officials told us that putting the computer operations on another floor higher up in the building would have forced program space, either offices or laboratories, into the much less desirable basement space with no windows. The design team explained that with the way the building has been designed, every laboratory and laboratory office will have the benefit of natural daylight. Half of the offices are to have direct natural light, and the other half are to receive indirect natural light through clerestories in the office corridor walls. They told us that they also considered physical security needs to ensure that the computers would not be vulnerable to vandalism or interference from outside sources. During our review, we visited the construction site near the end of the foundation construction phase of the project to observe how the basement was being constructed and verify that some of the design features we were told about had been incorporated into the facility. We also reviewed the final construction drawings and the specifications for the construction of the superstructure of the building to confirm that the plans included the systems and equipment we were told had been designed into the facility to mitigate the risk of water entering the building. This work verified that the following features have been built into the facility, or are included in the construction drawings and specifications to be used to complete the facility. The basement walls and floor slab have been constructed of reinforced concrete. The waterproofing system that has been installed creates a waterproof envelope under the basement slab and up the basement walls to protect the basement from water infiltration. The building has a complete underslab and perimeter drainage system to remove water coming up from below the structure, as well as water approaching the outside of the facility at ground level. The piping system installed to remove water terminates in a pumping station located outside the waterproofed basement and therefore will not bring groundwater into the building. Four pumps capable of pumping 500 gallons per minute each are to be installed in the pumping station. These pumps are to be sequenced to come on as the water inflow increases. The water is to be discharged into the stream located to the south of the site. All pumps are to have emergency power backup in the event of a power failure. Additionally, the main mechanical room, also located in the basement, is 3-1/3 feet deeper than the rest of the basement floor, resulting in a very large retention area if a catastrophic event took place and water entered the building. Emergency drains are located about 1 inch above the depressed floor. These drains discharge into two sump pumps that discharge into the storm sewer in the parkway at the perimeter of the property.
These pumps are also to be on emergency power. The computer room is to have a raised floor. The concrete slab constructed beneath the raised floor is depressed 12 inches below that portion of the basement outside the main mechanical room. Emergency floor drains have been provided in this depressed area that are connected to two sump pumps. Finally, the discharge pipes from all six of the sump pumps in the basement were designed to have check valves and alternate discharge pipes. Should flood waters rise above the level of the discharge pipes, the check valves would prevent this water from entering the pumps and divert the water from the sump pumps to the alternate discharge pipes, which discharge outside the building above the 100-year floodplain level. When we visited the construction site we also observed the structural system being installed to prevent the building from floating as a result of the buoyancy force caused by the hydrostatic pressure of flood waters. FDA officials informed us that the decision on where to place the computers was a part of the decision on the overall design and layout of the building. The representatives from the design consultants told us that this decision was made primarily by the design team on the basis of a thorough analysis of specific program needs, workplace factors, security requirements, and site constraints. The design team made recommendations, which included the basement location for the computers, to GSA, FDA, and CFSAN personnel for a decision. We were told that a number of computer telecommunications personnel from CFSAN and FDA’s Office of Information Resources Management (OIRM) were involved in periodic meetings to plan the telecommunications space needs in the new building and develop telecommunication design guidelines with the telecommunications consultant and were involved in the decision on the placement of the computers. FDA officials told us that some computer staff expressed concerns in October 1997 about the placement of the computers. They said these concerns were forwarded to GSA and to the design team. Also in October 1997, the telecommunications consultants gave CFSAN and OIRM officials a copy of their College Park Design Guidance for Telecommunications Infrastructure for comment. In December 1997, FDA provided comments. CFSAN’s telecommunications representative expressed satisfaction that all items noted in the review documents were discussed and either clarified or modified for inclusion in a revised design for the telecommunications infrastructure. Further, FDA and CFSAN officials told us that the design of the new facility was presented to representative groups of employees during the design process, as well as to the National Treasury Employees Union stewards from FDA. In March 1999, FDA initiated a series of employee briefings on the new building. These briefings, we were told, were being conducted for employees from one or more CFSAN offices at each session. They covered such topics as the basic design of the building; the current status and schedule for completion; and features of interest to employees, such as the food service area, library, auditorium, training rooms, employee parking, layout of laboratories, and office sizes. The Director of CFSAN said that the briefing sessions would continue until all of the employees moving to the College Park building have had an opportunity to attend a session. It is also planned that there will be some mock-ups of laboratory and office designs.
In addition, the Director of CFSAN said that an e-mail address had been set up to which employees can send questions concerning the new facility, and FDA’s Office of Facilities has set up a Web page where progress is to be updated and CFSAN photos are to be archived. He said that employees have access to the Internet and can access this information. Further, once a month the Director is holding 1-hour meetings with all interested employees, who are given the opportunity to raise questions and receive answers. A member of the planning team for the College Park facility has been asked to attend each of these latter meetings to answer any questions about the building and move. We provided copies of a draft of this report to the Administrator of General Services and the FDA Commissioner for comment. On December 1, 1999, we received oral comments from GSA’s National Capital Region Assistant Regional Administrator for the Public Buildings Service and from the Public Buildings Service’s Office of Portfolio Management. They concurred with the report without further comment. On December 2, 1999, we received oral comments from the Directors of CFSAN and FDA’s Division of Facilities Planning, Engineering and Safety. They concurred with the information as presented in the report and provided some technical clarifications that we have incorporated where appropriate. We are sending copies of this report to Representative Robert E. Wise, Ranking Democratic Member, House Subcommittee on Economic Development, Public Buildings, Hazardous Materials and Pipeline Transportation; Senator George V. Voinovich, Chairman, and Senator Max S. Baucus, Ranking Minority Member, Senate Subcommittee on Transportation and Infrastructure; Senator Paul S. Sarbanes; Senator Barbara A. Mikulski; Representative Steny Hoyer; the Honorable David J. Barram, Administrator of General Services; and the Honorable Jane E. Henney, Commissioner, FDA. Copies will be made available to others upon request. If you have any questions about this report, please call me or Ron King on (202) 512-8387. A key contributor to this assignment was Shirley Bates. Bernard L. Ungar, Director, Government Business Operations Issues.
Pursuant to a congressional request, GAO provided information on the construction of the Food and Drug Administration (FDA) facility for its Center for Food Safety and Applied Nutrition (CFSAN) in College Park, Maryland, focusing on: (1) the General Services Administration's (GSA) authority to construct a new facility for FDA in College Park; (2) whether the requirements for building on a floodplain had been met; and (3) the planned placement of computers in the basement of the new building, specifically whether: (a) steps had been taken or will be taken to mitigate the risk of damage from water entering the basement of the building, and (b) CFSAN staff were involved in the decision to place the computer operations in the basement. GAO noted that: (1) GSA's authority to construct the FDA facility in College Park, MD, is derived from the FDA Revitalization Act and subsequent appropriations acts; (2) the design team for the project has satisfactorily met the minimum requirements, set by the state of Maryland, to construct a building with a basement on a floodplain; (3) the basement was necessary because of a local building height restriction due to the proximity of the site to the College Park Airport; (4) although basements are not normally allowed in buildings on a floodplain in Maryland, the state granted a variance, in part because a taller or wider building was prohibited; (5) the new CFSAN facility has been designed with several systems to mitigate the risk of damage from water entering the building; (6) with the steps taken by the design team to protect the building from an external flood, FDA officials believe that the potential for internal water damage is a greater probability than is damage from a flood condition; (7) to protect the data stored on the computers, CFSAN officials plan to develop a mitigation plan for the new facility that they say will be appropriate to the nature of the systems installed, the data stored, and risk factors at the time the building is occupied; (8) the decision to locate the main computer room in the basement of the building was reached by consensus of the project team - the design team consultants and representatives from GSA and FDA; and (9) the FDA representatives included CFSAN telecommunications personnel and staff from FDA's Office of Information Resources Management.
North Korea is an isolated society with a centrally planned economy and a centrally controlled political system. The governing regime assumed power after World War II. Successive generations of a single family have ruled North Korea since its founding. According to the CIA World Factbook, under dictator Kim Jong Un, the grandson of regime founder Kim Il Sung, the regime currently controls all aspects of political life, including the legislative, judicial, and military structures. According to a Library of Congress country study, the North Korean leadership rewards members of the primary political party (the Korean Workers’ Party) and the military establishment with housing, food, education, and access to goods. Much of the population, however, lives in poverty, with limited education, travel restrictions, a poor health care system, no open religious institutions or spiritual teaching, and few basic human rights. North Korea exports commodities such as minerals, metallurgical products, textiles, and agricultural and fishery products. According to the CIA World Factbook, the North Korean economy is one of the world’s least open economies. The CIA World Factbook reported that as of 2012, its main export partners were China and South Korea. China is North Korea’s closest ally and accounts for almost two-thirds of its trade. North Korea has engaged in a number of acts that have threatened the security of the United States and other UN member states. Since 2006, North Korea has conducted a number of missile launches and detonated three nuclear explosive devices; torpedoed a South Korean naval vessel, the Cheonan, killing 46 crew members; and launched a disruptive cyberattack against a U.S. company, Sony Pictures Entertainment. In response to these actions, the United States and the UN imposed sanctions specific to North Korea from 2006 through 2015 (see fig. 1). The United States has imposed sanctions on North Korea and North Korean persons under EOs and a number of laws and regulations. EOs are issued by the President and generally direct the executive branch to either carry out actions or clarify and further existing laws passed by Congress. Administrations have invoked authority provided by the International Emergency Economic Powers Act, as well as other authorities, to issue EOs specific to North Korea. The UN Security Council issued five UNSCRs imposing sanctions specific to North Korea during this time period. (See fig. 1.) U.S. EOs specific to North Korea and the Iran, North Korea, and Syria Nonproliferation Act (INKSNA) authorize the United States to impose sanctions targeting activities that include weapons of mass destruction proliferation, trade in arms and related materiel, and transferring luxury goods. Sanctions that can be imposed pursuant to the EOs and INKSNA include blocking property and banning U.S. government procurement. UNSCRs target similar activities, and under the UN Charter, all 193 UN member states are required to implement sanctions imposed by the UNSCRs, such as travel bans, on North Korean and other persons involved in these activities. U.S. EOs specific to North Korea and INKSNA authorize the United States to impose sanctions targeting activities that include involvement in North Korean WMD and conventional arms proliferation and transferring luxury goods to North Korea. The most recent EO targets a person’s status as opposed to a person’s conduct. 
The EO targets a person’s status by authorizing the imposition of sanctions on persons determined, for example, to be agencies, instrumentalities, or controlled entities of the government of North Korea or the Workers’ Party of Korea. Table 1 provides examples of the activities and statuses targeted by EOs and INKSNA. In addition, EO 13466 prohibits activities such as the registration of a vessel in North Korea by a U.S. person, and EO 13570 generally prohibits a U.S. person from importing goods, services, or technology from North Korea. Sanctions that can be imposed pursuant to the EOs and law listed above include blocking property and interests in property in the United States, and banning U.S. government procurement and assistance. The EOs listed in table 1 create a framework within which the executive branch can decide when to impose sanctions against specific persons within the categories established by the EOs, according to Treasury and State officials. Treasury officials informed us that the process of determining whether to impose sanctions on one or more persons is (1) the result of a process wholly under the executive branch, and (2) driven by policy directives that prioritize issues of concern for the agencies. Treasury officials also noted that while Treasury does not consider itself to have discretion on whether or not to implement an EO, there is discretion at the interagency level regarding what sanctions programs should be focused on for individual designations, and how resources should be allocated among all relevant programs. INKSNA requires the President to provide reports every 6 months to two congressional committees that identify every foreign person with respect to whom there is credible information indicating that the person, on or after the dates specified in the act, has transferred to, or acquired from, North Korea, Syria, or Iran certain items listed by multilateral export control regimes, or certain nonlisted items that could materially contribute to weapons of mass destruction systems or cruise or ballistic missile systems. It does not require the President to sanction those persons, although it does require him or her to notify the congressional committees if he or she opts not to impose sanctions, including a written justification that supports the President’s decision not to exercise this authority. The President has delegated INKSNA authorities to the Secretary of State. State refers to section 73 of the Arms Export Control Act and section 11B of the Export Administration Act collectively as the Missile Sanctions laws. See 22 U.S.C. § 2797b and 50 U.S.C. App. § 2410b. Treasury has also taken action against a Macao bank (Banco Delta Asia SARL), citing the bank’s facilitation of financial transactions conducted by North Korean–related accounts that related to money laundering and illicit activities, including trade in counterfeit U.S. currency, counterfeit cigarettes, and narcotics, as grounds for its action. Five UNSCRs target North Korean–related activities that include WMD proliferation, cash transfers, and trade in luxury goods to North Korea (see table 2). Under the UN Charter, all 193 UN member states are required to implement sanctions in the UNSCRs that include imposing an arms embargo, prohibiting travel, and freezing assets. State officials told us that UN sanctions can amplify U.S. development of bilateral sanctions specific to North Korea, and that the United States has imposed sanctions beyond those required by UNSCRs.
According to State officials, the United States has implemented the sanctions within the UNSCRs, pursuant to authorities including the United Nations Participation Act of 1945. U.S. officials informed GAO that obtaining information on North Korean persons has hindered the U.S. interagency process for imposing sanctions, and that a recent EO has provided them with greater flexibility to sanction persons based on their status as government or party officials rather than evidence of specific conduct. EO 13687 allows State and Treasury to sanction persons because they are officials of the North Korean government or of the Workers’ Party of Korea, instead of based on specific conduct. State and Treasury impose sanctions following an interagency process that involves reviewing intelligence and other information to develop evidence needed to meet standards set by U.S. laws and EOs, vetting possible actions within the U.S. government, determining whether and when to sanction, and announcing sanctions decisions. Since 2006, the United States has imposed sanctions on 86 North Korean persons, including 13 North Korean government officials and entities sanctioned under EO 13687. Commerce is the U.S. government agency that controls exports by issuing licenses for shipping goods that are not prohibited to North Korea. Agency officials cited obtaining sufficient information about North Korean persons as their greatest challenge in making sanctions determinations. Most North Korea–specific sanctions authorities require a determination that a person engaged in a specific activity. Officials said that for sanctions to be effective, financial institutions need a minimum set of identifying information so that they can ensure they are blocking the right person. However, officials said that gathering information on the activities of North Korean persons and personal identifying information can be difficult because of the nature of North Korean society, whose citizens are tightly controlled by the government. Without sufficient information, the United States could mistakenly designate and therefore block the assets of the wrong person, particularly one with a common surname. State officials also cited obtaining sufficient information as a challenge to North Korean sanctions implementation, especially if the sanctions authority requires information indicating that the foreign person knowingly engaged in sanctionable activities. Officials in both agencies also said that they face challenges in obtaining information that can be made public in the Federal Register. Sony Cyberattacks: On November 24, 2014, Sony Pictures Entertainment experienced a cyberattack that disabled its information technology, destroyed data, and released internal e-mails. Sony also received e-mails threatening terrorist attacks on theaters scheduled to show a film, The Interview, which depicted the assassination of Kim Jong Un. The Federal Bureau of Investigation and the Director of National Intelligence attributed these cyberattacks to the North Korean government. State and Treasury officials informed us that EO 13687, issued on January 2, 2015, gives them greater flexibility to impose sanctions despite the lack of complete information about persons’ activities. Treasury officials noted that sanctions under EO 13687 are status-based rather than conduct-based, which means that the EO allows agencies to sanction persons, for example, based on their status as North Korean government officials, rather than on their engagement in specific activities.
EO 13687 allows Treasury to designate persons based solely on their status as officials, agencies, or controlled entities of the North Korean government, and to designate other persons acting on their behalf or providing them with material support. According to Treasury, EO 13687 represents a significant broadening of Treasury’s authority to increase financial pressure on the North Korean government and to further isolate North Korea from the international financial system. The White House issued the EO in response to North Korean cyberattacks on Sony Pictures Entertainment in November and December 2014. Treasury officials also noted that although the new authority allows them to target any North Korean government official, they continue to target activities prohibited under current sanctions, such as WMD proliferation. Treasury and State officials informed us that they have established processes to determine when and if the United States should impose sanctions related to North Korea. The processes involve reviewing evidence to identify sanctions targets, ensuring that they have adequate evidence to sanction, and imposing and publicizing the sanctions. (See fig. 2.) For North Korea–specific sanctions that fall under Treasury’s jurisdiction, Treasury officials said they investigate and collaborate with other U.S. government agencies to identify specific targets. The Office of Foreign Assets Control investigates the target’s activities and communicates with Treasury and other agency officials about the potential target. Where appropriate, Treasury will notify foreign authorities of the activities of the targeted person and seek commitment to stop the activity. State’s Bureau of International Security and Nonproliferation’s Office of Counterproliferation Initiatives leads an interagency process to evaluate whether a person’s activities are potentially sanctionable under EO 13382, which targets proliferation of weapons of mass destruction. The Office of Missile, Biological and Chemical Nonproliferation, also under the Bureau of International Security and Nonproliferation, leads the process for INKSNA, EO 12938, and the Missile Sanctions laws. The process begins with four State-led interagency working groups responsible for coordinating nonproliferation efforts involving (1) chemical and biological weapons, (2) missile technology, (3) nuclear technology, and (4) advanced conventional weapons. Each working group is chaired by a State official and consists of representatives from several U.S. government departments and agencies such as the Departments of Defense, Commerce, Homeland Security, Treasury, and Energy; the Federal Bureau of Investigation; and various intelligence community agencies. State officials said that the working groups regularly evaluate reports concerning proliferation-related activities and determine an appropriate response to impede activities of concern. As part of this review process, these groups identify transactions that may be sanctionable under various nonproliferation sanction authorities, including those related to North Korea. According to State and other working group officials, the interagency review process relies on criteria defined in the laws and EOs when assessing a transaction for the potential application of those sanctions. State officials also said the groups do not pursue sanctions for a target if they determine available information does not provide a basis for applying sanctions or is not legally sufficient. 
Officials in each agency said that they follow an evidence-based process to gain inter- and intra-agency consensus on imposing sanctions. At Treasury, Office of Foreign Assets Control officials said that they create an evidentiary record that contains the information they have gathered on a targeted person, to present sufficient evidence that the person has engaged in sanctionable activity. The record contains identifying information such as date of birth, place of birth, or passport information; if the targeted person is a company, the identifying information might be an address or telephone number. After the Office of Foreign Assets Control has approved this document, it is further reviewed for legal sufficiency by the Department of Justice, the Department of State, and other relevant agencies.

At State, the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation draft a statement of facts that provides a summary of the intelligence available on a targeted transaction. Concurrently, State drafts a policy memo that explains the legal justification for the case. State circulates these documents internally and obtains advice from appropriate agencies and, in the case of actions targeted under EO 13382, consults with Treasury's Office of Foreign Assets Control. Officials from the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation also said they circulate a decision memorandum to relevant stakeholders for approval.

Officials at State and Treasury also told us that their process includes steps for making and announcing final sanctions determinations. At Treasury, the Office of Foreign Assets Control makes the final determination. Officials then publicize the sanctions in the Federal Register. At State, once the stakeholders have cleared the memorandum, the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation forward it to the Secretary of State or his or her designee for a final sanctions determination. They then prepare a report on imposed sanctions for publication in the Federal Register.

When State or Treasury makes a determination that results in blocked assets, Treasury places the sanctioned person on the Specially Designated Nationals and Blocked Persons (SDN) list, indicating that the person's assets are blocked. Pursuant to regulation, U.S. persons, including banks, are required to block any assets of such persons that are in their possession or that come within their possession. As a consequence of the blocking, U.S. persons are generally prohibited from engaging in activities with the property or interests in property of persons on the SDN list and from doing business with individuals and entities on the list. (A simplified illustration of such list screening appears at the end of this passage.) Treasury officials noted that persons' status on this list does not expire, but persons may apply to be taken off the list. To date, no North Korean person has asked for his or her name to be removed.

Since 2006, the United States has imposed sanctions on 86 North Korean persons under five EOs, INKSNA, and the Missile Sanctions laws (see table 3). The most frequently used EO during this time period was EO 13382, which, as noted above, is not specific to North Korea. Treasury imposed the most recent sanctions on North Korean persons in January 2015, in response to North Korea's cyberattacks on Sony Pictures, placing 10 North Korean individuals on the SDN list and updating information about 3 persons on the list.
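Because U.S. persons must block the property of, and generally may not do business with, persons on the SDN list, banks screen payment and account parties against the list. The short Python sketch below illustrates the basic screening idea only; the entry and names are hypothetical, and actual screening systems rely on OFAC's published list plus far more sophisticated matching on aliases and identifiers to avoid blocking the wrong person, the risk officials noted above for common surnames.

```python
# Minimal sketch of SDN-style list screening (hypothetical data, not OFAC's
# actual list format). Real systems use fuzzy matching on names, aliases,
# dates of birth, and passport numbers to tell designated persons apart
# from others with the same or similar names.

def normalize(name: str) -> str:
    """Uppercase and collapse whitespace so formatting differences don't defeat a match."""
    return " ".join(name.upper().split())

# One hypothetical entry: primary name, known aliases, and identifiers.
SDN_ENTRIES = [
    {
        "name": "DOE, EXAMPLE",
        "aliases": ["EXAMPLE DOE", "E. DOE"],
        "ids": {"dob": "1960-01-01", "passport": "123456789"},
    },
]

def screen(party: str) -> list:
    """Return every entry whose primary name or an alias exactly matches the party."""
    target = normalize(party)
    return [
        e for e in SDN_ENTRIES
        if any(normalize(n) == target for n in [e["name"]] + e["aliases"])
    ]

if __name__ == "__main__":
    for party in ("example doe", "Jane Roe"):
        # A hit means the asset or payment must be blocked, not simply rejected.
        print(party, "->", "potential match; hold for review" if screen(party) else "clear")
```

The identifying information Treasury assembles in its evidentiary record (date of birth, passport number, or a company's address) exists precisely so that screeners can distinguish a designated person from others with the same name.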
State and Treasury have used EO 13382 most frequently—43 times in 10 years—to impose sanctions on North Korean persons that they found had engaged in activities related to WMD proliferation. For example, in March 2013, Treasury used EO 13382 to designate for sanctions North Korea's primary foreign exchange bank, which had facilitated millions of dollars in transactions that benefited North Korean arms dealing; the chairman of the North Korean committee that oversees the production of North Korea's ballistic missiles; and three North Korean government officials who were connected with North Korea's nuclear and ballistic weapons production. According to the Federal Register notice, the United States imposed sanctions on these persons because State determined that they "engaged, or attempted to engage, in activities or transactions that have materially contributed to, or pose a risk of materially contributing to, the proliferation of WMD or their means of delivery (including missiles capable of delivering such weapons), including any efforts to manufacture, acquire, possess, develop, transport, transfer or use such items, by any person or foreign country of proliferation concern."

Commerce's Bureau of Industry and Security requires exporters who wish to ship items to North Korea to obtain a license for dual-use items that are subject to the Export Administration Regulations. Dual-use items are goods and technology that are designed for commercial use but could have military applications, such as computers and telecommunications equipment. In general, the Bureau of Industry and Security reviews applications for items requiring a license for export or reexport to North Korea and approves or denies applications on a case-by-case basis. According to the Bureau of Industry and Security, it will deny a license for luxury goods or any item that could contribute to North Korea's nuclear-related, ballistic missile–related, or other WMD-related programs. Commerce officials informed us that they receive relatively few requests for licenses to export items to North Korea and that in most of these cases Commerce issues a license because most of the applications are for humanitarian purposes. In 2014, the Bureau of Industry and Security approved licenses for items such as telecommunications equipment and medical devices, as well as water well–drilling equipment and volcanic seismic measuring instruments.

Commerce does not require a license to export some items, such as food and medicine, to North Korea. Commerce officials informed us that, under the Export Administration Regulations, the Bureau of Industry and Security, in consultation with the Departments of Defense and State, will generally approve applications to export or reexport humanitarian items, such as blankets, basic footwear, and other items meeting subsistence needs that are intended for the benefit of the North Korean people. For example, it will approve items in support of UN humanitarian efforts, and agricultural commodities or medical devices that the Bureau of Industry and Security determines are not luxury goods.

While UN sanctions have a broader reach than U.S. sanctions because all UN member states are obligated to implement and enforce them, the UN does not know the extent to which members are actually implementing its sanctions.
The UN process for imposing sanctions on North Korea or related persons relies on a Security Council committee and a UN panel of experts that investigates suspected violations of North Korea sanctions and recommends actions to the UN. The panel has found North Korean persons using illicit techniques to evade sanctions and trade in arms and related materiel, and the UN has designated 32 North Korean or related entities for sanctions since 2006, including a North Korean company found to be shipping armaments from Cuba to North Korea. However, while the UN calls upon member states to submit reports describing the steps or measures they have taken to implement effectively specified sanctions provisions, fewer than half have done so. According to UN and U.S. officials, many member states lack the technical capacity to develop the reports and implement sanctions. Member state delegates to the UN Security Council and U.S. officials agree that the lack of reports from all member states is an impediment to UN sanctions implementation.

Member state delegates to the UN Security Council informed us that the UN has established a process to determine when and if to impose sanctions on persons that have violated the provisions of UNSCRs. The process involves the Security Council committee established pursuant to Security Council Resolution 1718, which oversees UN sanctions on North Korea; the Panel of Experts, which reviews information on violations of North Korea sanctions sent by member states and conducts investigations based on requests from the committee; and member states, whose role is to implement sanctions on North Korea as required by various UN Security Council resolutions. (See fig. 3.)

The UN established the committee in 2006. It consists of 15 members, including the 5 permanent members of the UN Security Council and 10 nonpermanent members. The committee makes all decisions by consensus and is mandated to seek information from member states regarding their actions to implement the measures imposed by UNSCR 1718. It is also mandated to examine and take action on information regarding alleged sanctions violations, consider and decide upon requests for exemptions, determine additional items to be added to the list of sanctioned goods, designate individuals and entities for sanctions, promulgate guidelines to facilitate the implementation of sanctions measures, and report at least every 90 days to the UN Security Council on its work overseeing the sanctions measures set out in UNSCR 1718.

The Panel of Experts was established in 2009 as a technical body within the committee. Pursuant to UNSCR 1874, the panel is tasked with, among other things, gathering, examining, and analyzing information regarding incidents of noncompliance with UN Security Council sanctions on North Korea. The panel was originally created for a 1-year period, but the Security Council extended the panel's mandate in subsequent resolutions. The panel acts under the committee's direction to implement its mandate to gather, examine, and analyze information from member states, relevant UN bodies, and other interested parties regarding North Korea sanctions implementation. The panel does not have enforcement authority and relies on the cooperation of member states to provide information that helps it with its investigations. The panel consists of eight subject matter experts from UN member states, including representatives from the council's 5 permanent members.
The Secretary-General appoints panel members, who currently are from China, France, Japan, Russia, South Africa, South Korea, the United Kingdom, and the United States. According to the UN, these subject matter experts specialize in technical areas such as WMD arms control and nonproliferation policy, customs and export controls, finance, missile technology, maritime transport, and nuclear issues. According to a representative of the committee, panel members are not intended to represent their countries but to be independent in order to provide objective assessments. According to UN guidance, the panel reviews public information, conducts investigative work on incidents or events, consults foreign governments, and seeks information beyond what member states provide. Representatives of the U.S. Mission to the United Nations (USUN) informed us that the United States and other countries provide the panel with information to help facilitate investigations. The UN Security Council encourages UN member states to respond promptly and thoroughly to the panel's requests for information and to invite panel members to visit and investigate alleged violations of the sanctions regime, including inspection of items that might have been seized by national authorities.

Following investigations of suspected sanctions violations, the panel submits investigative reports (incident reports) to the committee detailing its findings and recommendations on how to proceed, according to UN guidance. The panel treats its incident reports as confidential and provides access only to committee and Security Council members. According to a representative of the committee, the committee considers the violations and recommendations and makes sanctions designations based on the consensus of committee members; if the committee does not reach consensus, it can refer the case to the UN Security Council, pending member agreement. Ultimately, the UN Security Council determines whether or not recommended designations meet the criteria for sanctions, according to a representative of the committee. If the decision is affirmative, it takes action by making sanctions designations, mostly through new resolutions. This process has resulted in 32 designations since 2006. All but one of these designations were made through new resolutions, according to a USUN official; the exception, the Ocean Maritime Management Company, was designated for sanctions through the committee process in July 2014.

The panel is generally required, with each extension of its mandate, to provide the committee with an interim and a final report, including findings and recommendations. The panel's final reports have identified North Korea's use of evasive techniques to export weapons. The panel's 2014 final report described North Korea's attempt to illicitly transport arms and related materiel from Cuba to North Korea concealed underneath thousands of bags of sugar onboard the North Korean vessel Chong Chon Gang. North Korea's use of evasive techniques in this case was blocked by actions taken by Panama, a UN member state. Panamanian authorities stopped and examined the Chong Chon Gang as it passed through Panama's jurisdiction. After uncovering items on the vessel that it believed to be arms and related materiel, Panama alerted the committee to the possible UN sanctions violation. According to representatives of the committee, Panama cooperated with the panel as it conducted its investigation.
The panel concluded that the shipment was in violation of UN sanctions and that it constituted the largest amount of arms and related materiel interdicted to North Korea since the adoption of UNSCR 1718. The committee placed the shipping company that operated the Chong Chon Gang on its sanctioned entities list.

The panel's investigations have also uncovered evidence of North Korea's efforts to evade sanctions by routing financial transactions in support of North Korea's procurement of sanctioned goods through intermediaries, including those in China, Malaysia, Singapore, and Thailand. For instance, in its investigation of the Chong Chon Gang case, the panel found that the vessel operator, North Korea's Ocean Maritime Management Company, Limited, used foreign intermediaries in Hong Kong, Thailand, and Singapore to conduct financial transactions on its behalf. The panel also identified that in most cases the investigated transactions were made in United States dollars from foreign-based banks and transferred through correspondent bank accounts in the United States. The panel's 2015 final report indicated that North Korea has successfully bypassed banking organizations' due diligence processes by initiating transactions through other entities acting on its behalf. The panel expressed concern in its report regarding the ability of banks in countries with less effective banking regulations or compliance institutions to detect and prevent illicit transfers involving North Korea.

The panel's reports also reveal the essential role played by member states in implementing UN sanctions and show that some member states have not been as well informed as others in working with the panel regarding sanctions implementation. For example, the panel discovered that the Ugandan government had contracted with North Korea to provide police force training. Ugandan government officials asserted that they did not realize that UN sanctions prohibited this type of activity, according to a USUN official.

The UN recognized the essential role that member states play when it called upon member states to submit reports to the committee, within 45 or 90 days of the UN's adoption of North Korea sanctions measures or upon request by the committee, on the measures or steps they have taken to implement effectively the provisions of specified Security Council resolutions. UNSCRs 1718, 1874, and 2094, adopted in 2006, 2009, and 2013, respectively, call upon member states to report on the concrete measures they have taken to effectively implement the specified provisions of the resolutions. For instance, a member state might report on how its national export control regulations address newly adopted UN sanctions on North Korea. The United States has complied with these UN reporting provisions; U.S. implementation reports can be viewed on the committee's website, at http://www.un.org/sc/committees/1718/mstatesreports.shtml. Other member states that have submitted one or more reports include those with major international transit points (such as the United Arab Emirates) or those that have reportedly been used by North Korea as a foreign intermediary (such as Thailand). The panel expressed concern in its 2015 final report that 8 years after the adoption of UNSCR 1718 in 2006, a consistently high proportion of member states in some regions had not reported at all on the status of their implementation.
It has also reported that some member states have submitted reports that lacked detailed information or were late, impeding the panel's ability to examine and analyze information about national implementation. The panel has also reported that member states should improve their reporting of incidents of noncompliance with sanctions resolutions and of inspections of North Korean cargo. Appendix III provides information on the status of member state implementation report submissions.

U.S. officials and representatives of the committee agree that the lack of detailed reports from all member states is an impediment to the UN's effective implementation of its sanctions. Through reviewing these reports, the committee uncovers gaps in member state sanctions implementation, which helps the committee identify targets for outreach. The panel notes that the lack of detailed information in implementation reports impedes its ability to examine and analyze information regarding member state implementation and its challenges. It also states that member state underreporting increases North Korea's opportunities to continue its prohibited activities. The panel will not have the information it needs to completely understand North Korea's evasive techniques if it does not have the full cooperation of member states.

U.S. officials and representatives of the committee told us that many member states lack the technical capacity to enforce sanctions and prepare reports. For instance, representatives of the committee told us that some member states may have weak customs and border patrol systems or export control regulatory structures because of the high resource requirements of these programs. In addition, representatives of the committee stated that some member states may lack awareness of the full scope of North Korea sanctions or may not understand how to implement the sanctions. Moreover, some countries may not make the sanctions a high priority because they believe they are not directly affected by North Korea. Similarly, member states that are geographically distant from North Korea or lack a diplomatic or trade relationship with it may not see the need to implement the sanctions, according to representatives of the committee.

The UN has taken some steps to address this impediment. The committee and the panel provide limited assistance to member states, upon request, in preparing and submitting reports. For example, the committee has developed and issued a checklist template that helps member states indicate the measures, procedures, legislation, and regulations or policies they have adopted to address various UNSCR measures relevant to their national implementation reports. A committee member indicated that the committee developed a list of 25 to 30 member states where outreach would most likely have an impact on reporting outcomes. The panel reported in its 2015 final report that it had sent 95 reminder letters to member states that had not submitted implementation reports, emphasizing the importance of submitting reports and noting that the panel is available to provide assistance. Despite the steps the UN has taken to help member states adhere to reporting provisions, the panel's 2015 report continues to identify the lack of member state reports as an impediment.
The panel stated that it is incumbent on member states to implement the measures in the UN Security Council resolutions more robustly in order to counter North Korea’s continued violations, and that while the resolutions provide member states with tools to curb the prohibited activities of North Korea, they are effective only when implemented. State Department officials informed us that the United States has offered technical assistance to some member states for preventing proliferation and implementing sanctions. However, they were unable to determine the extent to which the United States has provided specific assistance aimed at ensuring that member states provide the UN with the implementation reports it needs to assess member state implementation of UN sanctions on North Korea. North Korea’s actions pose threats to the security of the United States and other UN members. Both the United States and the UN face impediments to implementing the sanctions they have imposed in response to these actions. While the United States has recently taken steps to provide more flexibility to impose sanctions, and thereby possibly impose more sanctions on North Korean persons, the United Nations is seeking to address the challenge posed by many UN member states not providing the UN with implementation information. According to U.S. officials, many member states require additional technical assistance to develop the implementation reports needed by the panel. The lack of implementation reports from member states impedes the panel’s ability to examine and analyze information about member state implementation of North Korea sanctions. GAO recommends the Secretary of State work with the UN Security Council to ensure that member states receive technical assistance to help prepare and submit reports on their implementation of UN sanctions on North Korea. We provided a draft of this report to the Departments of State, Treasury, and Commerce for comment. In its written comments, reproduced in Appendix IV, State concurred with our recommendation. Treasury and Commerce declined to provide written comments. State, Treasury, and Commerce provided technical comments, which were incorporated into the draft as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State, Treasury, and Commerce, the U.S. Ambassador to the United Nations, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The United States and the United Nations (UN) Security Council have imposed a wide range of sanctions against North Korea and Iran as part of their broader efforts to prevent the proliferation of weapons of mass destruction. Table 4 compares the major activities targeted by U.S. and UN sanctions on those countries. Officials from the Department of State, the Department of the Treasury, and other sources identified the following factors that may influence the types of sanctions imposed by the United States and the UN on these countries. Different political systems. 
North Korea is an isolated society that is under the exclusive rule of a dictator who controls all aspects of the North Korean political system, including the legislative and judicial processes. Though Iran operates under a theocratic political system, with a religious leader serving as its chief of state, Iranian citizens participate in popular elections for president and for members of its legislative branch.

Different economic systems. North Korea has a centrally planned economy generally isolated from the rest of the world. It exports most of its basic commodities to China, its closest ally. Iran, as a major exporter of oil and petrochemical products, has several major trade partners, including China, India, Turkey, South Korea, and Japan.

Different social environments. North Korea's dictatorship tightly controls the activities of its citizens by restricting travel, prohibiting access to the Internet, and controlling all forms of media, communication, and political expression. In contrast, Iranian citizens travel abroad relatively freely, communicate with one another and the world through the Internet and social media, and can hold political protests and demonstrations.

This report (1) identifies the activities that are targeted by U.S. and United Nations (UN) sanctions specific to North Korea, (2) describes how the United States implements its sanctions specific to North Korea and examines the challenges it faces in doing so, and (3) describes how the UN implements its sanctions specific to North Korea and examines the challenges it faces in doing so. In appendix I, we compare U.S. and UN North Korea–specific sanctions with those specific to Iran.

To address our first objective, we reviewed U.S. executive orders and laws and UN Security Council resolutions issued from 2006 to 2015 with sanctions related to North Korea. We also interviewed officials from the Department of State (State), the Department of the Treasury (Treasury), and the UN to confirm the universe of North Korea–specific sanctions and to determine any other executive orders, laws, or resolutions not specific to North Korea that they have used to impose sanctions on North Korea during this time period. We then analyzed the executive orders, laws, and resolutions to identify the activities targeted by the sanctions.

To address our second objective, we interviewed State and Treasury officials to determine the process that each agency follows to impose sanctions on North Korea and related persons. We also spoke with State, Treasury, and Commerce officials to identify the challenges that U.S. agencies face in implementing sanctions related to North Korea. We interviewed Department of Commerce (Commerce) officials to learn how the U.S. government controls exports to North Korea. We analyzed documents and information from State and Treasury to determine the number of North Korean entities that have been sanctioned since 2006.

To address our third objective, we reviewed UN documents and interviewed UN officials to determine the process that the UN uses to impose sanctions on North Korea and related entities. We reviewed UN Security Council resolutions relevant to North Korea, 1718 Committee guidelines and reports, and Panel of Experts guidelines and reports. We interviewed relevant officials at the State Department and traveled to New York to visit UN headquarters and interview officials from the U.S. Mission to the United Nations and members of the UN 1718 Committee.
We interviewed two former members of the Panel of Experts to obtain their views on the UN process for making North Korea sanctions determinations. We also reviewed the 1718 Committee's sanctions list to determine the number of designations the UN has made of North Korean or related entities and the reasons for the designations. For examples of how the Panel of Experts has investigated cases of sanctions violations and worked with member states through the investigation process, particularly in the Chong Chon Gang case, we reviewed the panel's final reports summarizing its investigation findings and interviewed members of the 1718 Committee involved in conducting the investigation. To determine the extent to which member states are submitting reports on their implementation of UN sanctions on North Korea, we examined the 1718 Committee's record of member state implementation reports and interviewed 1718 Committee members. To identify the challenges the UN faces related to member state reporting and the efforts the UN has taken to help member states meet the reporting provisions of the UN Security Council resolutions (UNSCR), we interviewed U.S. and UN officials and reviewed 1718 Committee and Panel of Experts reports and documents. To examine the efforts the UN has taken to address the lack of member state reporting, we interviewed members of the UN 1718 Committee and reviewed documents outlining UN outreach efforts.

To compare U.S. and UN sanctions specific to North Korea and Iran (see app. I), we reviewed U.S. executive orders and laws and UN Security Council resolutions with sanctions specific to North Korea and Iran. We analyzed these documents to identify the activities targeted by the sanctions. On the basis of a comprehensive literature review, we developed a list of targeted activities frequently identified in relation to North Korea and Iran sanctions and grouped these activities into high-level categories. To ensure data reliability in categorizing the targeted activities into high-level categories, we conducted a double-blind exercise whereby each member of our team reviewed the activities identified within the U.S. executive orders and laws and the UN resolutions for each country and assigned each activity to a high-level category, such as financial transactions with targeted persons. We then compared the results, discussed any differences, reconciled our responses to reach consensus, and developed a matrix comparing the targeted activities of North Korea sanctions with those of Iran sanctions. We interviewed State and Treasury officials to discuss the differences in activities targeted by North Korea and Iran sanctions.

To develop appendix III, on United Nations member state implementation report submissions, we examined the UN 1718 Committee's website record of member state implementation reports, which allowed us to determine the number of member states that have or have not reported.

In addition to the contact named above, Pierre Toureille (Assistant Director), Leah DeWolf, Christina Bruff, Mason Thorpe Calhoun, Tina Cheng, Karen Deans, Justin Fisher, Toni Gillich, Michael Hoffman, and Grace Lui made key contributions to this report.
North Korea is a closely controlled society, and its regime has taken actions that threaten the United States and other United Nations member states. North Korean tests of nuclear weapons and ballistic missiles have prompted the United States and the UN to impose sanctions on North Korea. GAO was asked to review U.S. and UN sanctions on North Korea. This report (1) identifies the activities that are targeted by U.S. and UN sanctions specific to North Korea, (2) describes how the United States implements its sanctions specific to North Korea and examines the challenges it faces in doing so, and (3) describes how the UN implements its sanctions specific to North Korea and examines the challenges it faces in doing so. To answer these questions, GAO analyzed documents from, and interviewed officials of, the Departments of State, Treasury, and Commerce and the UN.

U.S. executive orders (EO) and the Iran, North Korea, and Syria Nonproliferation Act target activities for the imposition of sanctions that include North Korean (Democratic People's Republic of Korea) proliferation of weapons of mass destruction and transfers of luxury goods. The EOs and the act allow the United States to respond by imposing sanctions, such as blocking the assets of persons involved in these activities. United Nations (UN) Security Council resolutions target similar North Korean activities, and under the UN Charter, all 193 UN member states are required to implement sanctions on persons involved in them.

U.S. officials informed GAO that difficulty obtaining information on North Korean persons has hindered the U.S. interagency process for imposing sanctions, and that EO 13687, announced in January 2015, provided them with greater flexibility to sanction persons based on their status as government officials rather than on evidence of specific conduct. State and Treasury impose sanctions following an interagency process that involves reviewing intelligence and other information to develop evidence needed to meet standards set by U.S. laws and EOs, vetting possible actions within the U.S. government, determining whether to sanction, and announcing sanctions decisions. Since 2006, the United States has imposed sanctions on 86 North Korean persons, including 13 North Korean government persons under EO 13687.

Although UN sanctions have a broader reach than U.S. sanctions, the UN lacks reports from many member states describing the steps or measures they have taken to implement specified sanctions provisions. The UN process for imposing sanctions relies on a UN Security Council committee and a UN panel of experts that investigates suspected sanctions violations and recommends actions to the UN. The Panel of Experts' investigations have resulted in 32 designations of North Korean or related entities for sanctions since 2006, including a company found in 2013 to be shipping armaments from Cuba. While the UN calls upon all member states to submit reports detailing their implementation of specified sanctions provisions, fewer than half have done so, because of a range of factors including a lack of technical capacity. The committee uses the reports to uncover gaps in sanctions implementation and to identify member states that require additional outreach. The United States, as a member state, has submitted all of these reports. UN and U.S. officials agree that the lack of reports from all member states is an impediment to the UN's implementation of its sanctions.
GAO recommends that the Secretary of State work with the UN Security Council to ensure that member states receive technical assistance to help them prepare and submit reports on their implementation of UN sanctions on North Korea. The Department of State concurred with this recommendation.
Thousands of market participants are involved in trading stocks, options, government bonds, and other financial products in the United States. These participants include exchanges at which orders to buy and sell are executed, broker-dealers who present those orders on behalf of their customers, clearing organizations that ensure that ownership is transferred, and banks that process payments for securities transactions. Although many organizations are active in the financial markets, some organizations, such as the major exchanges, clearing firms, and large broker-dealers, are more important for the overall market's ability to function because they offer unique products or perform vital services. The participants in these markets are overseen by various federal securities and banking regulators whose regulatory missions vary. Financial markets also rely heavily on information technology systems and extensive and sophisticated communications networks. As a result, physical and electronic security measures and business continuity planning are critical to maintaining and restoring operations in the event of a disaster or attack.

Customer orders for stocks and options, including those from individual investors and from institutions such as mutual funds, are usually executed at one of the many exchanges located around the United States. Currently, stocks are traded on at least eight exchanges, including the New York Stock Exchange (NYSE), the American Stock Exchange, and the NASDAQ. Securities options are traded at five exchanges, including the Chicago Board Options Exchange and the Pacific Stock Exchange. Trading on the stock exchanges usually begins when customers' orders are routed to the exchange floor, either by telephone or through electronic systems, to specialist brokers. These brokers facilitate trading in specific stocks by matching orders to buy and sell. For stocks traded on NASDAQ, customers' orders are routed for execution to the various brokers who act as market makers by posting price quotes at which they are willing to buy or sell particular securities on that market's electronic quotation system. Some stocks traded on NASDAQ may be quoted by just a single broker making a market for that security, but others have hundreds of brokers acting as market makers, buying and selling shares from their own inventories. Orders for options are often executed on the floor of an exchange in an open-outcry pit, in which the representatives of sometimes hundreds of brokers buy and sell options contracts on behalf of their customers.

The orders executed on the various markets usually come from broker-dealers. Individual and institutional investors open accounts with these firms and, for a per-transaction commission or an annual fee, the broker-dealer buys and sells stocks, bonds, options, and other securities on the customers' behalf. Employees of these firms may provide specific investment advice or develop investment plans for investors. Although some firms only offer brokerage services and route customer orders to other firms or exchanges for execution, some also act as dealers and fill customer orders to buy or sell shares from their own inventory. In addition to the exchanges, customers' orders can also be executed on electronic communications networks (ECN), which match customers' buy and sell orders with those submitted by other customers.
The various ECNs specialize in providing different services to their customers, such as rapid executions or anonymous trading for large orders.

After a securities trade is executed, the ownership of the security must be transferred and payment must be exchanged between the buyer and the seller. This process is known as clearance and settlement. Figure 1 illustrates the clearance and settlement process and the various participants, including broker-dealers, the clearing organization for stocks (the National Securities Clearing Corporation, or NSCC), and the Depository Trust Company (which maintains records of ownership for the bulk of the securities traded in the United States). The Options Clearing Corporation plays a similar role in clearing and settling securities options transactions. After options trades are executed, the broker-dealers on either side of the trade compare trade details with each other and the clearing organization, and payments are exchanged one business day after the trade (T+1).

Banks also participate in U.S. securities markets in various ways. Some banks act as clearing banks by maintaining accounts for broker-dealers and accepting and making payments for these firms. Some banks also act as custodians of securities by maintaining custody of securities owned by other financial institutions or individuals.

The market for the U.S. government securities issued by the Department of the Treasury (Treasury) is one of the largest markets in the world. These securities include Treasury bills, notes, and bonds of varying maturities. Trading in government securities does not take place on organized exchanges. Instead, these securities are traded in an "over-the-counter" market, with trades carried out by telephone calls between buying and selling dealers. To facilitate this trading, a small number of specialized firms, known as inter-dealer brokers (IDB), act as intermediaries and arrange trades in Treasury securities between other broker-dealers. The use of the IDBs allows other broker-dealers to maintain anonymity in their trading activity, which reduces the likelihood that they will obtain disadvantageous prices when buying or selling large amounts of securities. Trades between the IDBs and other broker-dealers are submitted for clearance and settled at the Government Securities Clearing Corporation (GSCC). After trade details are compared on the night of the trade date, GSCC provides settlement instructions to the broker-dealers and their clearing banks. Settlement with these banks and the clearing organization's bank typically occurs one business day after the trade (T+1), with ownership of securities bought and sold transferred either on the books of clearing banks or on the books of the Federal Reserve through its Fedwire Securities Transfer System (a simple illustration of the T+1 date arithmetic appears at the end of this passage). Two banks, JPMorgan Chase and the Bank of New York, provide clearing and settlement services for many major broker-dealers in the government securities market.

Many of the same participants in the government securities markets are also active in the markets for money market instruments. These are short-term instruments that include federal funds, foreign exchange transactions, and commercial paper. Commercial paper issuances are debt obligations issued by banks, corporations, and other borrowers to obtain financing for 1 to 270 days.
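Because several of the settlement flows described above occur one business day after the trade (T+1), the underlying date arithmetic is worth making concrete. The following Python sketch is a simplified illustration only: it skips weekends but omits the market holiday calendar that an actual settlement system would also apply.

```python
# Simplified T+1 settlement-date arithmetic: settlement falls one business
# day after the trade date. Weekends are skipped; market holidays are not
# modeled here, though a real settlement calendar would skip them too.
from datetime import date, timedelta

def settlement_date(trade_date: date, lag_days: int = 1) -> date:
    """Advance lag_days business days past trade_date (weekend-aware only)."""
    d = trade_date
    remaining = lag_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 through Friday=4
            remaining -= 1
    return d

if __name__ == "__main__":
    # A Friday trade settles the following Monday under T+1.
    print(settlement_date(date(2002, 10, 4)))  # prints 2002-10-07
```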
Another type of money market instrument widely used for short-term financing is the repurchase agreement, or repo, in which a party seeking financing sells securities, typically government securities, to another party while simultaneously agreeing to buy them back at a future date, such as overnight or some other set term. The seller obtains the use of the funds exchanged for the securities, and the buyer earns a return on its funds because the securities are repurchased at a higher price than originally sold. For instance, a party that sells securities for $10,000,000 and agrees to repurchase them the next day for $10,001,000 has effectively borrowed overnight at an annualized rate of roughly 3.6 percent. Active participants in the repo market include the Federal Reserve, which uses repos in the conduct of monetary policy, and large holders of government securities, such as foreign central banks or pension funds, which use repos to obtain additional investment income. Broker-dealers are active users of repos for financing their daily operations. To facilitate this market, the IDBs often match buyers and sellers of repos, and the funds involved are exchanged between the government securities clearing organization and the clearing banks of market participants. According to data reported by the Federal Reserve, repo transactions valued at over $1 trillion occur daily in the United States.

Payments for corporate and government securities transactions, as well as for business and consumer transactions, are transferred by payment system processors. One of these processors is the Federal Reserve, which owns and operates the Fedwire Funds Transfer System. Fedwire connects 9,500 depository institutions and electronically transfers large-dollar payments associated with financial market and other commercial activities in the United States. Fedwire is generally the system used to transfer payments for securities between the banks used by the clearing organization and market participants. Another large-dollar transfer system is the Clearing House Inter-bank Payments System (CHIPS), a system for payment transfers, particularly U.S. dollar payments relating to foreign exchange and other transactions between banks in the United States and in other countries.

Although thousands of entities are active in the U.S. securities markets, certain key participants are critical to the ability of the markets to function. Although multiple markets exist for trading stocks or stock options, some are more important than others as a result of the products they offer or the functions they perform. For example, an exchange that attracts the greatest trading volume may act as a price setter for the securities it offers, and the prices for trades that occur on that exchange are then used as the basis for trades in other markets that offer those same securities. On June 8, 2001, when a software malfunction halted trading on NYSE, the regional exchanges also suspended trading although their own systems were not affected. Other market participants are critical to overall market functioning because they consolidate and distribute price quotations or information on executed trades. Markets also cannot function without the activities performed by the clearing organizations, and in some cases only one clearing organization exists for particular products. In contrast, disruptions at other participants may have less severe impacts on the ability of the markets to function. For example, many of the options traded on the Chicago Board Options Exchange are also traded on other U.S. options markets.
Thus, if this exchange were not operational, investors would still be able to trade these options on the other markets, although certain proprietary products, such as options on selected indexes, might be unavailable temporarily. Other participants may be critical to the overall functioning of the markets only in the aggregate. Investors can choose to use any one of thousands of broker-dealers registered in the United States. If one of these firms is unable to operate, its customers may be inconvenienced or unable to trade, but the impact on the markets as a whole may be just a lower level of liquidity or reduced price competitiveness. However, a small number of large broker-dealers account for sizable portions of the daily trading volume on many exchanges, and if several of these large firms were unable to operate, the markets might not have sufficient trading volume to function in an orderly or fair way.

Several federal organizations oversee the various securities market participants. The Securities and Exchange Commission (SEC) regulates the stock and options exchanges and the clearing organizations for those products. In addition, SEC regulates the broker-dealers that trade on these markets and other participants, such as mutual funds, which are active investors. The exchanges also have responsibilities as self-regulatory organizations (SRO) for ensuring that their participants comply with the securities laws and the exchanges' own rules. SEC or one of the depository institution regulators oversees participants in the government securities market, but Treasury also plays a role: Treasury issues rules pertaining to that market, while SEC or the bank regulators are responsible for conducting examinations to ensure that these rules are followed. Several federal organizations have regulatory responsibilities over banks and other depository institutions, including those active in the securities markets. The Federal Reserve oversees bank holding companies and state-chartered banks that are members of the Federal Reserve System. The Office of the Comptroller of the Currency (OCC) examines nationally chartered banks.

Securities and banking regulators have different regulatory missions and focus on different aspects of the operations of the entities they oversee. Because banks accept customer deposits and use those funds to lend to borrowers, banking regulators focus on the financial soundness of these institutions to reduce the likelihood that customers will lose their deposits. Poor economic conditions or bank mismanagement have periodically led to extensive bank failures and customer losses in the United States. As a result, banking and the other depository institution regulators issue guidance and conduct examinations covering a wide range of financial and operational issues pertaining to these institutions, such as what information security steps the institutions have taken to minimize unauthorized access to their systems and what business continuity capabilities they have. In contrast, securities regulators have a different mission and focus on other aspects of the operations of the entities they oversee. Securities regulation in the United States arose with the goal of protecting investors from abusive practices and ensuring that they were treated fairly.
To achieve this, SEC and the exchanges, which act as SROs to oversee their broker-dealer members, focus primarily on monitoring securities market participants to ensure that the securities laws are not being violated, for example, by restricting insider trading or requiring companies issuing securities to completely and accurately disclose their financial condition. As a result, few securities regulations specifically address exchange and broker-dealer operational issues, and securities regulators have largely left the conduct of such operations to the business decisions of these organizations.

Information technology and telecommunications are vital to the securities markets and the banking system. Exchanges and markets rely on information systems to match orders to buy and sell securities for millions of trades. They also use such systems to instantaneously report trade details to market participants in the United States and around the world. Information systems also compile and compare trading activity and determine all participants' settlement obligations. The information exchanged by these information systems is transmitted over various types of telecommunications technology, including fiber optic cable. Broker-dealers also make extensive use of information technology and communications systems. These firms connect not only to the networks of the exchanges and clearing organizations but may also be connected to the thousands of information systems or communications networks operated by their customers, other broker-dealers, banks, and market data vendors. Despite widespread use of information technology to transmit data, securities market participants are also heavily dependent on voice communications. Broker-dealers still use telephones to receive, place, and confirm orders. Voice or data lines transmit the information for the system that provides instructions for personnel on exchange floors. Fedwire and CHIPS also rely heavily on information technology and communications networks to process payments. Fedwire's larger bank customers have permanent network connections to computers at each of Fedwire's data centers, but smaller banks connect via dial-up modem. CHIPS uses fiber-optic networks and mainframe computers to transfer funds among its 54 member banks.

Because financial market participants' operations could be disrupted by damage to their facilities, systems, or networks, they often invest in physical and information security protection and develop business continuity capabilities to ensure they can recover from such damage. To reduce the risk that facilities and personnel would be harmed by individuals or groups attempting unauthorized entry, sabotage, or other criminal acts, market participants invest in physical security measures such as guards or video monitoring systems. Market participants also invest in information security measures such as firewalls, which reduce the risk of damage from threats such as hackers or computer viruses. Finally, participants invest in business continuity capabilities, such as backup locations, that can further reduce the risk that damage to primary facilities will disrupt an organization's ability to continue operating.
We also obtained documents and interviewed staff from over 30 exchanges, clearing organizations, broker-dealers, banks, and payment system processors, including organizations located in the vicinity of the attacks and elsewhere. We toured damaged facilities and discussed the attacks’ impact on telecommunications and power infrastructure with three telecommunications providers (Verizon, AT&T, and WorldCom) and Con Edison, a power provider. Finally, we discussed the actions taken to stabilize the markets and facilitate their reopening with financial market regulators. To determine how financial market organizations were attempting to reduce the risk that their operations could be disrupted, we selected 15 major financial market organizations that included many of the most active participants, including 7 stock and options exchanges, 3 clearing and securities processing organizations, 3 ECNs, and 2 payment system processors. For purposes of our analysis, we also categorized these organizations into two groups: seven whose ability to operate is critical to the overall functioning of the financial markets and eight for whom disruptions in their operations would have a less severe impact on the overall markets. We made these categorizations by determining whether viable immediate substitutes existed for the products or services the organizations offer or whether the functions they perform were critical to the overall markets' ability to function. To maintain the organizations’ security and the confidentiality of proprietary information, we agreed with these organizations that we would not discuss how they were affected by the attacks or how they were addressing their risks through physical and information security and business continuity efforts in a way that could identify them. However, to the extent that information about these organizations is already publicly known, we sometimes name them in the report. To determine what steps these 15 organizations were taking to reduce the risks to their operations from physical attacks, we conducted on-site “walkthroughs” of these organizations’ primary facilities, reviewed their security policies and procedures, and met with key officials responsible for physical security to discuss these policies and procedures. We compared these policies and procedures to 52 standards developed by the Department of Justice for federal buildings. Based on these standards, we evaluated these organizations’ physical security efforts across several key operational elements, including measures taken to secure perimeters, entryways, and interior areas and whether organizations had conducted various security planning activities. To determine what steps these 15 organizations were taking to reduce the risks to their operations from electronic attacks, we reviewed the security policies of the organizations we visited and reviewed documentation of their system and network architectures and configurations. We also compared their information security measures to those recommended for federal organizations in the Federal Information System Controls Audit Manual (FISCAM). Using these standards, we attempted to determine through discussions and document reviews how these organizations had addressed various key operational elements for information security, including how they controlled access to their systems and detected intrusions, what responses they made when such intrusions occurred, and what assessments of their systems’ vulnerabilities they had performed. 
To determine what steps these 15 organizations had taken to ensure they could resume operations after an attack or other disaster, we discussed their business continuity plans (BCP) with staff and toured their primary facilities and the backup facilities they maintained. In addition, we reviewed their BCPs and assessed them against practices recommended for federal and private-sector organizations, including FISCAM, bank regulatory guidance, and the practices recommended by the Business Continuity Institute. Comparing these standards with the weaknesses revealed in some financial market participants' recovery efforts after the September 2001 attacks, we determined how these organizations' BCPs addressed several key operational elements. Among the operational elements we considered were the existence and capabilities of backup facilities, whether the organizations had procedures to ensure the availability of critical personnel and telecommunications, and whether they had completely tested their plans. In evaluating these organizations' backup facilities, we attempted to determine whether the facilities would allow the organizations to recover from damage to their primary sites or from damage or inaccessibility resulting from a wide-scale disaster. We also met with staff of several major banks and securities firms to discuss their efforts to improve BCPs. We also reviewed the results of a survey by the NASD—which oversees broker-dealer members of NASDAQ—that reported on the business continuity capabilities of 120 of its largest members and a random selection of 150 of the approximately 4,000 remaining members. To assess how the financial regulators were addressing physical security, electronic security, and business continuity planning at the financial institutions they oversee, we met with staff from SEC, the Federal Reserve, OCC, and representatives of the Federal Financial Institutions Examination Council. In addition, we met with NYSE and NASD staff responsible for overseeing their members' compliance with the securities laws. At SEC, we also collected data on the examinations SEC had conducted of exchanges, clearing organizations, and ECNs since 1995 and reviewed the examiners' work program and examination reports for the 10 examinations completed between July 2000 and August 2002. In addition, we reviewed selected SEC and NYSE examinations of broker-dealers. To determine how the financial markets were being addressed as part of the United States' critical infrastructure protection efforts, we reviewed previously completed GAO work and met with staff from Treasury and representatives of the Financial and Banking Information Infrastructure Committee (FBIIC), which was undertaking efforts to ensure that critical assets in the financial sector are protected. We also discussed initiatives to improve responses to future crises and the resiliency of the financial sector and its critical telecommunications services with representatives of industry trade groups, including the Bond Market Association and the Securities Industry Association, as well as regulators, federal telecommunications officials, telecommunications providers, and financial market participants. The results of this work are presented in appendix II. We conducted our work in various U.S. cities from November 2001 to October 2002 in accordance with generally accepted government auditing standards.
The terrorist attacks on September 11, 2001, resulted in significant loss of life and extensive property and other physical damage, including damage to the telecommunications and power infrastructure serving lower Manhattan. Because many financial market participants were concentrated in the area surrounding the World Trade Center, U.S. financial markets were severely disrupted. Several key broker-dealers experienced extensive damage, and the stock and options markets were closed for the longest period since the 1930s. The markets for government securities and money market instruments were also severely disrupted because several key participants in these markets were directly affected by the attacks. However, financial market participants, infrastructure providers, and regulators made tremendous efforts to successfully reopen these markets within days. Regulators also took various actions to facilitate the reopening of the markets, including granting temporary relief from regulatory reporting and other requirements and providing funds and issuing securities to ensure that financial institutions could fund their operations. The impact on the banking and payments systems was less severe because the primary operations of most banks and payment systems processors were located outside the area affected by the attacks or because these organizations had fully operational backup facilities in other locations. Although many factors affected the ability of the markets to resume operations, the attacks also revealed limitations in many participants' BCPs for addressing such a widespread disaster. These limitations included backup facilities that were not sufficiently geographically dispersed or comprehensive enough to support all critical operations, unanticipated losses of telecommunications service, and difficulties in locating staff and transporting them to new facilities. On September 11, 2001, two commercial jet airplanes were hijacked by terrorists and flown into the twin towers of the World Trade Center. Within hours, the two towers completely collapsed, resulting in the loss of four other buildings that were part of the World Trade Center complex. As shown in figure 2, the attacks damaged numerous structures in lower Manhattan. The attacks caused extensive property damage. According to estimates by the Securities Industry Association, the total cost of the property damage ranged from $24 billion to $28 billion. According to one estimate, the damage to structures beyond the immediate World Trade Center area extended across 16 acres. The six World Trade Center buildings that were lost accounted for over 13 million square feet of office space, valued at $5.2 billion to $6.7 billion. One of these buildings was 7 World Trade Center, a 46-story office building directly to the west of the two towers. It sustained damage as a result of the attacks, burned for several hours, and collapsed around 5:00 p.m. on September 11, 2001. An additional nine buildings containing about 15 million square feet of office space were substantially damaged and were expected to require extensive and lengthy repairs before they could be reoccupied. Sixteen buildings with about 10 million square feet of office space sustained relatively minor damage and were likely to be completely reoccupied. Finally, another 400 buildings sustained damage primarily to facades and windows. A study by an insurance industry group estimated that the total claims for property, life, and other insurance would exceed $40 billion.
In comparison, Hurricane Andrew in 1992 caused an estimated $15.5 billion in similar insurance claims. The loss of life following the attacks on the World Trade Center was also devastating, with the official death toll reaching 2,795 as of November 2002. Because of the concentration of financial market participants in the vicinity of the World Trade Center, a large percentage of those killed were financial firm employees. Excluding the 366 members of the police and fire departments and the persons on the airplanes, the financial industry's losses represented over 74 percent of the total civilian casualties in the World Trade Center attacks. Four firms accounted for about a third of the civilian casualties, and 658 of the victims were employees of one firm—Cantor Fitzgerald, a key participant in the government securities markets. The loss of life also exacted a heavy psychological toll on staff who worked in the area and who both witnessed the tragedy and lost friends or family. Representatives of several organizations we met with told us that one of the difficulties in the aftermath of the attacks was addressing the psychological impact of the event on staff. As a result, individuals attempting to restore operations often had to do so under emotionally traumatic conditions. The dust and debris from the attacks and the subsequent collapse of the various World Trade Center structures covered an extensive area of lower Manhattan, up to a mile beyond the center of the attacks, as shown in figure 3. Figures 4 and 5 include photographs that illustrate the damage to buildings from the towers' collapse and from the dust and debris that blanketed the surrounding area. This dust and debris created serious environmental hazards that caused additional damage to other facilities and hampered firms' ability to restore operations in the area. For example, firms with major data processing centers could not operate computer equipment until dust levels had been substantially reduced because of the sensitivity of this equipment to dust contamination. In addition, dust and other hazardous materials made working conditions in the area difficult and dangerous. According to staff of one of the infrastructure providers with whom we met, the entire area near the World Trade Center was covered with a toxic dust that contained asbestos and other hazardous materials. Restrictions on physical access to lower Manhattan, put into place after the attacks, also complicated efforts to restore operations. To facilitate rescue and recovery efforts and maintain order, the mayor ordered an evacuation of lower Manhattan, and the New York City Office of Emergency Management restricted all pedestrian and vehicle access to most of this area from September 11 through September 13, 2001. During this time, access to the area was granted only to persons with the appropriate credentials. Federal and local law enforcement agencies also restricted access because of the potential for additional attacks and to facilitate investigations at the World Trade Center site. Figure 6 shows the areas with access restrictions in the days following the attacks. Some access restrictions were lifted beginning September 14, 2001; however, substantial restrictions remained in place through September 18. After September 19, most of the remaining restrictions served to cordon off the area being excavated and to provide access for heavy machinery and emergency vehicles.
The September 11 terrorist attacks extensively damaged the telecommunications infrastructure serving lower Manhattan, disrupting voice and data communications services throughout the area. (We discuss the impact of the attacks on telecommunications infrastructure and telecommunications providers' recovery efforts in more detail in appendix I of this report.) Most of this damage occurred when 7 World Trade Center, itself heavily damaged by the collapse of the twin towers, collapsed into a major telecommunications center at 140 West Street operated by Verizon, the major telecommunications provider for Manhattan. The collateral damage inflicted on that Verizon central office significantly disrupted local telecommunications services to approximately 34,000 businesses and residences in the surrounding area, including the financial district. Damage to the facility was compounded when water from broken mains and fire hoses flooded cable vaults located in the basement of the building and shorted out the remaining cables that had not been directly cut by the damage and debris. As shown in figure 7, the damage to this key facility was extensive. Because of the damage to Verizon facilities and equipment, significant numbers of customers lost telecommunications services for extended periods. When Verizon's 140 West Street central office was damaged, about 182,000 voice circuits, more than 1.6 million data circuits, almost 112,000 private branch exchange (PBX) trunks, and more than 11,000 lines serving Internet service providers were lost. As shown in figure 8, this central office served a large part of lower Manhattan. The attacks also damaged other Verizon facilities and affected customers in areas beyond those served directly by the West Street central office. Three other Verizon switches in the World Trade Center towers and in 7 World Trade Center were also destroyed in the attacks. Additional services were disrupted because 140 West Street also served as a transfer station on the Verizon network for about 2.7 million circuits carrying data traffic that did not originate or terminate in that serving area but that nevertheless passed through that physical location. For example, communications services provided out of the Verizon Broad Street central office that passed through West Street were also disrupted until new cabling could be put in place to physically carry those circuits around the damaged facility. As a result, a total of about 4.4 million Verizon data circuits had to be restored. Other telecommunications carriers that served customers in the affected area also experienced damage and service disruptions. For example, 30 telecommunications providers had equipment at 140 West Street that linked their networks to Verizon's, and some firms lost even more equipment than Verizon did. AT&T, for instance, lost a key transmission facility that served its customers in lower Manhattan and had been located in one of the World Trade Center towers. The attacks also caused major power outages in lower Manhattan. Con Edison, the local power provider, lost three power substations and more than 33 miles of cabling; total damage to the power infrastructure was estimated at $410 million. As a result, more than 13,000 Con Edison business customers lost power, which required them to either relocate operations or use alternative power sources such as portable generators. To restore telecommunications and power, service providers had to overcome considerable challenges.
Access restrictions made this work more difficult—staff from WorldCom told us that obtaining complete clearance from the various local, state, and federal officials, including the National Guard, took about 2 days. In some cases, environmental and other factors also prevented restoration efforts from beginning. According to Verizon staff, efforts to assess the damage and begin repairs at 140 West Street initially were delayed by concerns over the structural integrity of the damaged facility and other nearby buildings; several times staff had to halt assessment and repair efforts because government officials ordered evacuations of the building. In some cases, infrastructure providers employed innovative solutions to restore telecommunications and power quickly. For example, these providers laid both telecommunications and power cables that are normally underground directly on the streets and covered them with temporary plastic barriers. Con Edison also had tanks of liquid nitrogen placed on street corners so that its repair crews could freeze cables, making them easier to cut during repairs. To work around the debris that blocked access to 140 West Street, Verizon staff ran cables over the ground and around damaged cabling to quickly restore services. Because of damage to the reinforced vault that previously housed the cables at Verizon's facility, a new cable vault was constructed on the first floor, and cables were run up the side of the building to the fifth and eighth floors, as shown in figure 9. Although the facilities of the stock and options exchanges and clearing organizations in lower Manhattan were largely undamaged by the attacks, many market participants were affected by the loss of telecommunications and the lack of access to lower Manhattan. As a result, many firms, including some of the broker-dealers responsible for significant portions of overall securities market trading activity, were forced to relocate operations to backup facilities and alternative locations. To resume operations, these new facilities had to be prepared for trading and provided with sufficient telecommunications capacity. Some firms had to have telecommunications service restored even though they believed their communications services were redundant. Regulators and market participants delayed the opening of the stock and options markets until September 17, when the key broker-dealers responsible for large amounts of market liquidity were able to operate and telecommunications had been tested. Although several securities exchanges and market support organizations were located in the vicinity of the attacks, most did not experience direct damage. The NYSE, Depository Trust and Clearing Corporation, Securities Industry Automation Corporation (SIAC), International Securities Exchange, and the Island ECN all had important facilities located in close proximity to the World Trade Center, but none of these organizations' facilities were damaged. The American Stock Exchange (Amex) was the only securities exchange that experienced incapacitating damage. Amex was several hundred feet from the World Trade Center towers and sustained mostly broken windows and damage to some offices. However, its drainage and ventilation systems were clogged by dust and debris, and the building lost power, telephones, and access to water and steam. The loss of steam and water, coupled with the inadequate drainage and ventilation, meant that Amex computer systems could not run because the building lacked air conditioning.
As a result, the Amex building was not cleared for reoccupation until October 1, 2001, after inspectors had certified the building as structurally sound and power and water had been fully restored. Although the remaining exchanges were not damaged, U.S. stock and options exchanges nationwide closed the day of the attacks and did not reopen until September 17, 2001. Regulators and market participants acknowledged, however, that if the major exchanges or clearing organizations had sustained damage, trading in the markets would likely have taken longer to resume. Although most exchanges and market support organizations were not damaged by the attacks, several key firms with substantial operations in the area sustained significant facilities damage. As a result of this damage and the inability to access the area in the days following the attacks, many financial institutions had to relocate their operations, in some cases using locations not envisioned by their BCPs. They then faced the challenge of recreating their key operations and obtaining sufficient telecommunications services at these new locations. For example, one large broker-dealer whose headquarters had been located across from the World Trade Center moved operations to midtown Manhattan, taking over an entire hotel. To resume operations, the firm had to obtain computers and establish telecommunications lines in the rooms that were converted to work spaces. Another large broker-dealer whose facilities were damaged by the attacks attempted to reestablish hundreds of direct lines to its major customers after relocating operations to the facilities of a recently purchased broker-dealer subsidiary in New Jersey. The simultaneous relocation of so many firms meant that they also had to establish connections to the new operating locations of other organizations. Although Verizon managers were unable to estimate how much of the company's restoration work in the days following the attacks specifically addressed such needs, they told us that considerable capacity was added in the New Jersey area to accommodate the many firms, including financial firms, that relocated operations there. Restoring operations often required innovative approaches. According to representatives of the exchanges and other financial institutions we spoke with, financial firms that are normally highly competitive instead exhibited a high level of cooperation throughout the crisis. In some cases, firms offered competitors facilities and office space. For example, traders who normally traded stocks on the Amex floor obtained space on the trading floor of NYSE, and Amex options traders were provided space at the Philadelphia Stock Exchange. In some cases, the exchanges and utilities used innovative approaches to restore lost connectivity to their customers. For example, technicians at the Island ECN created virtual private network connections for users whose services were disrupted, and Island made some of its trading applications available to its customers through the Internet. In another example, SIAC, which processes trades for NYSE and Amex, worked closely with its customers to reestablish their connectivity, reconfiguring customers' working circuits that had been used for testing or for clearing and settlement activities to instead transmit data to SIAC's trading systems.
The Bond Market Association, the industry association representing participants in the government and other debt markets, and the Securities Industry Association (SIA), which represents participants in the stock markets, played critical roles in reopening the markets. Both associations helped arrange daily conference calls with market participants and regulators to address the steps necessary to reopen the markets; at times, hundreds of financial industry officials participated in these calls. The associations also made recommendations to regulators to provide some relief to their members so that the members could focus on restoring their operations. For example, the Bond Market Association recommended to its members that they extend the settlement date for government securities trades from the day following the trade date (T+1) to five days after the trade date (T+5) to help alleviate some of the difficulties occurring in the government securities markets. Through a series of conference calls with major banks and market support organizations, SIA was instrumental in helping to develop an industrywide consensus on how to resolve operational issues arising from the damage and destruction in lower Manhattan and how to mitigate operational risk resulting from the destruction of physical (that is, paper) securities, which some firms had maintained for customers. SEC also took actions to facilitate the successful reopening of the markets. To allow market participants to focus primarily on resuming operations, SEC issued rules providing temporary relief from certain regulatory requirements. For example, SEC extended deadlines for disclosure and reporting requirements, postponed the implementation date for new reporting requirements, and temporarily waived some capital regulation requirements. SEC implemented other relief measures aimed at stabilizing the reopened markets. For example, SEC relaxed rules that restrict corporations from repurchasing their own shares of publicly traded stock and simplified registration requirements for the airline and insurance industries so that they could more easily raise capital. Partially because of the difficulties many firms experienced in restoring operations and obtaining adequate telecommunications service, the reopening of the markets was delayed. Although thousands of broker-dealers may participate in the securities markets, staff at NYSE and NASDAQ told us that a small number of firms account for the majority of the trading volume on their markets. Many of those firms had critical operations in the area affected by the attacks. For example, 7 of the top 10 broker-dealers ranked by capital had substantial operations in the World Trade Center or the World Financial Center, across from the World Trade Center. In the immediate aftermath of the attacks, these and other firms were attempting to restore operations either at their existing locations or at new locations. In addition, staff of financial market participants and the financial regulators told us that they did not want to return to the affected area too soon, to avoid interfering with the rescue and recovery efforts. For example, the SEC Chairman told us that he did not want to send 10,000 to 15,000 workers into lower Manhattan while the recovery efforts were ongoing and living victims were still being uncovered. Because of the considerable efforts required for broker-dealers to restore operations, insufficient liquidity existed to open the markets during the week of the attacks.
According to regulators and exchange staff, firms able to trade by Friday, September 14, accounted for only about 60 percent of the market's normal order flow. As a result, securities regulators, market officials, and other key participants concluded that, until more firms were able to operate normally, insufficient liquidity would exist in the markets. Opening the markets with some firms but not others was also viewed as unfair to many of the customers of the affected firms. Although institutional clients often have relationships with multiple broker-dealers, smaller customers and individual investors usually do not; thus, they might not have been able to participate in the markets under these circumstances. In addition, connectivity between market participants and the exchanges had not been tested. Because so many critical telecommunications connections had been damaged in the attacks and then either repaired or replaced, it was unclear how well the markets would operate when trading resumed. Staff from the exchanges and market participants told us that the ability to conduct connectivity testing before the markets reopened was important. Many firms experienced technical difficulties in getting their new connections to work consistently as telecommunications providers attempted to restore service. According to officials at one exchange, restoring connections to its members was difficult because existing or newly restored lines that were initially operational would erratically lose their connectivity throughout the week following September 11. Representatives of the exchanges and financial regulators with whom we met told us that opening the markets but then having to shut them down again because of technical difficulties would have greatly reduced investor confidence. Because of the need to ensure sufficient liquidity and a stable operating environment, market participants and regulators decided to delay the resumption of stock and options trading until Monday, September 17. This delay allowed firms to complete their restoration efforts and use the weekend to test connectivity with the markets and the clearing organizations. As a result of these efforts, the stock and options markets reopened on September 17 and traded record volumes without significant operational difficulties. The attacks also severely disrupted the markets for government securities and money market instruments, primarily because of their impact on the broker-dealers that trade in these markets and on one of the key banks that perform clearing functions for these products. According to regulatory officials, eight of the nine IDBs, which provide brokerage services to other dealers in government securities, had operations that were severely disrupted by the attacks. The most notable was Cantor Fitzgerald Securities, whose U.S. operations had been located on several of the highest floors of one of the World Trade Center towers. Because much of the trading in the government securities market occurs early in the day, the attacks and subsequent destruction of the towers created massive difficulties for this market. When these IDBs' facilities were destroyed, the results of that day's trading, including information on which firms had purchased securities and which had sold, were largely lost. These trades had to be reconstructed from the records of the dealers who had traded with the IDBs that day.
In addition, with the loss of their facilities, most of the primary IDBs were not able to communicate with the Government Securities Clearing Corporation (GSCC), which further complicated the clearing and settlement of these trades. Staff from financial market participants told us that reconciling some of these transactions took weeks and, in some cases, months. Two banks—the Bank of New York (BONY) and JP Morgan Chase—were the primary clearing banks for government securities. Clearing banks are essentially responsible for transferring funds and securities for their dealer and other customers that purchase or sell government securities. GSCC, the clearing organization for these instruments, instructs its dealer members and the clearing banks as to the securities and associated payments to be transferred to settle its members' net trade obligations. As a result of the attacks, BONY and its customers experienced telecommunications and other problems that contributed to the disruption in the government securities market, because BONY was the clearing bank for many major market participants and maintained some of GSCC's settlement accounts. BONY had to evacuate four facilities, including its primary telecommunications data center, and over 8,300 staff because these facilities were located near the World Trade Center. At several of these facilities, BONY conducted processing activities as part of clearing and settling government securities transactions on behalf of its customers and GSCC. The communication lines between BONY and the Fedwire systems for payment and securities transfers, as well as those between BONY and its clients, were critical to BONY's government securities operations. Over these lines, BONY transmitted instructions to transfer funds and securities from its Federal Reserve accounts to those of other banks for transactions in government securities and other instruments. BONY normally accessed its Federal Reserve accounts from one of the lower Manhattan facilities that had to be abandoned. In the days following the attacks, BONY had difficulties reestablishing its Fedwire connections and processing transactions. In addition, many BONY customers also had to relocate and had their own difficulties establishing connections to the BONY backup site. As a result of these internal processing problems and its inability to communicate with its customers, BONY had problems determining what amounts should be transferred on behalf of the clients for whom it performed clearing services. For example, by September 12, 2001, over $31 billion had been transferred to BONY's Federal Reserve account for GSCC, but because BONY could not access this account, it could not transfer funds to which its clients were entitled. BONY was not able to establish connectivity with GSCC and begin receiving and transmitting instructions for payment transfers until September 14, 2001. The problems at the IDBs and BONY affected the ability of many government securities and money market participants to settle their trades. Before a trade can be cleared and settled, the counterparties to the trade and the clearing banks must compare trade details by exchanging messages to ensure that each is in agreement on the price and amount of securities traded. To complete settlement, messages then must be exchanged between the parties to ensure that the funds and ownership of securities are correctly transferred. This comparison step is illustrated in the sketch below.
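The sketch is a minimal illustration only: the record layout, field names, and sample trades are ours and are not drawn from GSCC's or any clearing bank's actual systems. It simply shows each side's report of a trade being checked for agreement on security, price, and quantity.

    # Illustrative only: a hypothetical trade-comparison step, not an actual
    # clearing system. Each counterparty submits its record of a trade, and the
    # records must agree on the security, price, and quantity to match.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TradeRecord:
        trade_id: str    # shared identifier for the trade (hypothetical)
        security: str    # identifier of the security traded (hypothetical)
        price: float     # price this side says was agreed
        quantity: int    # face amount this side says was agreed

    def compare(buy_side: TradeRecord, sell_side: TradeRecord) -> list[str]:
        """Return the mismatched fields; an empty list means the trade matches."""
        return [field for field in ("security", "price", "quantity")
                if getattr(buy_side, field) != getattr(sell_side, field)]

    # The two sides report the same trade, but the seller recorded a different price.
    buyer = TradeRecord("T-001", "UST-NOTE-10Y", 99.50, 10_000_000)
    seller = TradeRecord("T-001", "UST-NOTE-10Y", 99.55, 10_000_000)

    mismatches = compare(buyer, seller)
    if mismatches:
        print(f"Trade {buyer.trade_id} does not match on: {', '.join(mismatches)}")
    else:
        print(f"Trade {buyer.trade_id} matched; proceed to settlement.")

In the days after the attacks, one side of many such comparisons simply no longer existed because the IDBs' records had been destroyed, which is why trades had to be reconstructed from surviving dealer records before they could be matched and settled.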
If trade information is not correct and funds and securities are not properly transferred, the trade is considered a "fail." As shown in figure 10, failed transactions increased dramatically, rising from around $500 million per day to over $450 billion on September 12, 2001. The level of fails also stayed high for many days following the attacks, averaging about $100 billion daily through September 28. The problems in the government securities markets also created liquidity problems for firms participating in and relying on these markets to fund their operations. Many firms, including many large broker-dealers, fund their operations using repurchase agreements, or repos, in which one party sells government securities to another party and agrees to repurchase those securities on a future date at a fixed price. Because repos are used to finance firms' daily operations, many of these transactions are executed before 9:00 a.m. As a result, by the time the attacks occurred on September 11, over $500 billion in repos had been transacted. With so many IDB records destroyed, many of these transactions could not be cleared and settled, and many of them failed. As a result, some firms that relied on this market as a funding source experienced major funding shortfalls. Although trading in government securities officially resumed within 2 days of the attacks, overall trading activity was low for several days. For example, as shown in figure 11, trading volumes went from around $500 billion on September 10 to as low as $9 billion on September 12, 2001. Similarly, repo activity fell from almost $900 billion on September 10 to $145 billion on September 13. The attacks also disrupted the markets for commercial paper, the short-term securities issued by financial and other firms to raise funds. According to clearing organization officials, the majority of the commercial paper redemptions (transactions in which the investors that originally purchased the commercial paper have their principal returned) scheduled for September 11 and September 12 were not paid until September 13. Firms that relied on these securities to fund their operations had to obtain other sources of funding during this period. The Federal Reserve took several actions to mitigate potential damage to the financial system resulting from liquidity disruptions in these markets. Banking regulatory staff told us that the attacks largely created a funding liquidity problem rather than a solvency crisis for banks; thus, the challenge regulators faced was ensuring that banks had adequate funds to meet their financial obligations. The settlement problems also prevented broker-dealers and others from using the repo markets to fund their daily operations. Soon after the attacks, the Federal Reserve announced that it would remain open to help banks meet their liquidity needs. Over the next 4 days, the Federal Reserve provided about $323 billion to banks through various means to overcome the problems resulting from unsettled government securities trades and financial market dislocations. For example, from September 11 through September 14, the Federal Reserve loaned about $91 billion to banks through its discount window, in contrast to normal lending levels of about $100 million. It also conducted securities purchase transactions and other open market operations of about $189 billion to provide needed funds to illiquid institutions.
Had these actions not been taken, some firms unable to receive payments might not have had sufficient liquidity to meet their other financial obligations, which could have produced additional defaults and magnified the effects of September 11 into a systemic solvency crisis. Regulators also took action to address the failed trades resulting from the attacks. From September 11 through September 13, the Federal Reserve loaned $22 billion of securities from its portfolio to broker-dealers that needed securities to complete settlements of failed trades. According to Federal Reserve staff, the Federal Reserve subsequently reduced restrictions on its securities lending, which led to a sharp increase in borrowings at the end of September 2001. Treasury also played a role in easing the failed trades and preventing a potential financial crisis by conducting an unplanned, special issuance of 10-year notes to help address a shortage of notes of this duration in the government securities markets. Market participants typically use these securities as collateral for financing or to meet settlement obligations. To provide dollars needed by foreign institutions, the Federal Reserve also conducted currency swaps with the Bank of Canada, the European Central Bank, and the Bank of England. The swaps involved exchanging dollars for the foreign currencies of these jurisdictions, with agreements to re-exchange the amounts later. These temporary arrangements provided funds to settle dollar-denominated obligations of foreign banks whose U.S. operations were affected by the attacks. The Federal Reserve, the Federal Deposit Insurance Corporation, OCC, and the Office of Thrift Supervision issued a joint statement after the attacks advising the institutions they oversee that any temporary declines in capital would be evaluated in light of each institution's overall financial condition. The Federal Reserve also provided substantial amounts of currency so that banks would be able to meet customer needs. With a few exceptions, commercial banks were not as adversely affected by the attacks as broker-dealers. Although some banks had facilities and operations in lower Manhattan, they were not nearly as geographically concentrated as securities market participants. As discussed previously, BONY was one bank with significant operations in the World Trade Center area, but only a limited number of other large banks had operations that were affected. According to regulatory officials who oversee national banks, seven of their institutions had operations in the areas affected by the attacks. Most payment system operations continued with minimal disruption. The Federal Reserve Bank of New York (FRBNY) manages the Federal Reserve's Fedwire securities and payments transfer systems. Although FRBNY sustained damage to some telecommunications lines, Fedwire continued processing transactions without interruption because the facilities that actually process the transactions are not located in lower Manhattan. However, Federal Reserve officials noted that some banks experienced problems connecting to Fedwire because of the widespread damage to telecommunications systems. Over 30 banks lost connectivity to Fedwire because their data first went to the FRBNY facility in lower Manhattan before being transmitted to the Fedwire system's processing facility outside the area. However, most were able to reestablish connections through dial-up backup systems, and some began reporting transfer amounts manually using voice lines.
Federal Reserve officials noted that normal volumes for manually reported transactions were about $200 million to $400 million daily, but from September 11 through September 13, 2001, banks conducted about $151 billion in manually reported transactions. A major private-sector payments system, CHIPS, also continued to function without operational disruptions, although 19 of its members temporarily lost connectivity with CHIPS in the aftermath of the attacks and had to reconnect from backup facilities. Retail payments systems, including check clearing and automated clearing house transactions, generally continued to operate. However, the grounding of air transportation did complicate and delay some check clearing, since both the Federal Reserve and private providers rely on overnight air delivery to transport checks between the banks in which they are deposited and the banks on which they are drawn. Federal Reserve officials said they were able to arrange truck transportation between some check clearing offices until they gained approval for their chartered air transportation to resume several days later. According to Federal Reserve staff, transporting checks by ground slowed processing and could not connect all offices across the country. The staff said that the Federal Reserve continued to credit the value of deposits to banks even when it could not present checks and debit the accounts of paying banks. This additional liquidity—normally less than $1 billion—peaked at over $47 billion on September 13, 2001. The terrorist attacks revealed limits in market participants' business continuity capabilities at the time of the attacks. Based on our discussions with market participants, regulators, industry associations, and others, the BCPs of many organizations had been too limited in scope to address the type of disaster that occurred. Instead, BCPs had procedures to address disruptions affecting a single facility, such as power outages or fires at one building. For example, a 1999 SEC examination report of a large broker-dealer that we reviewed noted that in the event of an emergency, this firm's BCP called for staff to move just one-tenth of a mile to another facility. Because they had not planned for wide-scale events, many organizations had not invested in backup facilities that could accommodate key aspects of their operations; these organizations included several of the large broker-dealers with primary operations located near the World Trade Center that had to recreate their trading operations at new locations. Similarly, NYSE and several of the other exchanges did not have backup facilities at the time of the attacks from which they could conduct trading. The attacks also illustrated that some market participants' backup facilities were too close to their primary operations. For example, although BONY had several backup facilities for critical functions located several miles from the attacks, the bank also backed up some critical processes at facilities that were only blocks away. According to clearing organization and regulatory staff, one of the IDBs with facilities located in one of the destroyed towers of the World Trade Center had depended on backup facilities in the other tower. Additionally, firms' BCPs did not adequately take into account all the equipment and other resources needed to resume operations as completely and rapidly as possible.
For example, firms that occupied backup facilities or other temporary space found that they lacked sufficient space for all critical staff or did not have all the equipment needed to conduct their operations. Others found that their backup sites did not have the most current versions of the software and systems that they used, which caused some restoration problems. Some firms had contracted with third-party vendors for facilities and equipment to conduct operations during emergencies, but because so many firms were disrupted by the attacks, some of these facilities were overbooked, and firms had to find other locations in which to resume operations. Organizations also learned that their BCPs would have to better address human capital issues. For example, some firms had difficulties locating key staff in the confusion after the attacks. Others found that staff were not able to reach their backup locations as quickly as their plans had envisioned because public transit systems, bridges, and roads were closed. Still other firms had not planned for the effects of trauma and grief on their staff and had to provide access to counseling for those who were overwhelmed by the events. The attacks also revealed the need to improve some market participants' business continuity capabilities for telecommunications. According to broker-dealers and regulatory staff with whom we spoke, some firms learned after relocating their operations that their backup locations connected to the primary sites of the organizations critical to their operations but not to those organizations' backup sites. Some financial firms whose physical facilities were undamaged nonetheless learned that their supporting telecommunications services were not as diverse and redundant as they had expected. Diversity involves establishing different physical routes in and out of a building and using different equipment along those routes, so that a disaster or other interference affecting one route does not disable the others. Redundancy involves having extra capacity available, generally from more than one source, and also incorporates aspects of diversity. Therefore, users that rely on telecommunications services to support important applications try to ensure that those services use facilities that are diverse and redundant so that no single point in the communications path can cause all services to fail. Ensuring that carriers actually maintain physically redundant and diverse telecommunications services has been a longstanding concern within the financial industry. For example, the President's National Security Telecommunications Advisory Committee reported in December 1997 that "despite assurances about diverse networks from the carriers, a consistent concern among the financial services industry was the trustworthiness of their telecommunications diversity arrangements." This concern was validated following the September 11 attacks, when firms that thought they had achieved redundancy in their communications systems learned that their network services were still disrupted. According to regulators and financial market participants with whom we spoke, some firms that had made arrangements with multiple service providers to obtain redundant service discovered that the lines used by their providers were not diverse because they routed through the same Verizon switching facility. A simple illustration of checking for such shared routing appears below.
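One way to screen for such hidden single points of failure is to compare the physical facility path of each supposedly diverse circuit. The sketch below is illustrative only: the circuit names, routes, and facility labels are hypothetical, and a real diversity audit would depend on carriers disclosing accurate, current routing data.

    # Illustrative sketch: flag supposedly diverse circuits that share a
    # physical facility. All route data and facility names are hypothetical.
    from itertools import combinations

    ENDPOINTS = {"customer_site", "backup_site"}  # shared endpoints, ignored in the check

    # Each circuit is described by the ordered list of facilities it traverses.
    circuits = {
        "carrier_A_line": ["customer_site", "central_office_west", "long_haul_1", "backup_site"],
        "carrier_B_line": ["customer_site", "central_office_west", "long_haul_2", "backup_site"],
    }

    def shared_facilities(path_a, path_b):
        """Facilities two paths have in common, other than the shared endpoints."""
        return (set(path_a) & set(path_b)) - ENDPOINTS

    for (name_a, path_a), (name_b, path_b) in combinations(circuits.items(), 2):
        overlap = shared_facilities(path_a, path_b)
        if overlap:
            print(f"{name_a} and {name_b} are NOT diverse; both pass through: {sorted(overlap)}")
        else:
            print(f"{name_a} and {name_b} appear physically diverse.")

Even a check like this is only as good as the routing data behind it, a caveat borne out by firms' experience after the attacks.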
Other firms that had mapped out their communications lines to ensure that the lines flowed through physically diverse paths when those services were first acquired found that their service providers had rerouted some of the lines over time without their knowledge, eliminating that assurance of diversity in the process. The attacks demonstrated that the ability of U.S. financial markets to remain operational after disasters depends to a great extent on the preparedness not only of the exchanges and clearing organizations but also of the major broker-dealers and banks that participate in these markets. The various financial markets were severely affected, and the stock and options exchanges were closed in the days following the attacks for various reasons, including the need to conduct rescue operations. However, the markets also remained closed because of the time required for several major broker-dealers that normally provide the bulk of the liquidity for trading in the stock, options, and government securities markets to become operational. Although the attacks were of a nature and magnitude beyond that previously imagined, they revealed the need to address limitations in the business continuity capabilities of many organizations and to mitigate the concentration of critical operations in a limited geographic area. Many organizations will have to further assess how vulnerable their operations are to disruptions and determine what capabilities they need to increase the likelihood of resuming operations after such events. Since the attacks, exchanges, clearing organizations, ECNs, and payment system processors have implemented various physical and information security measures and business continuity capabilities to reduce the risk that their operations would be disrupted by attacks, but at the time of our review some organizations still had limitations in their preparedness that increased their risk of disruption. With threats to the financial markets potentially increasing, organizations must choose how best to use their resources to reduce risks, by investing in protection against physical and electronic attacks on facilities, personnel, and information systems and by developing capabilities for continuing operations. To reduce the risk of operations disruptions, the 15 financial market organizations—including the 7 critical ones—we reviewed in 2002 had taken many steps since the attacks to protect their physical facilities and information systems from attacks and had developed plans for recovering from disruptions. However, at the time we conducted our review, 9 of the 15 organizations, including 2 we considered critical to the functioning of the financial markets, had not taken steps to ensure that they would have the staff necessary to conduct their critical operations if the staff at their primary sites were incapacitated; 8 of these organizations also had physical vulnerabilities at their primary sites. Ten of the 15 organizations, including 4 of the critical organizations, also faced increased risk of being unable to operate after a wide-scale disruption because they either lacked backup facilities or had backup facilities near their primary sites. Finally, although many of the 15 organizations had attempted to reduce their risks by testing some of their risk reduction measures, only 3 were testing their physical security measures, only 8 had recently assessed the vulnerabilities of their key information systems, and only 7 had fully tested their BCPs.
Faced with varying and potentially increasing threats that could disrupt their operations, organizations must make choices about how best to use their resources both to protect their facilities and systems and to develop business continuity capabilities. September 11, 2001, illustrated that such attacks can have a large-scale impact on market participants. Law enforcement and other government officials are concerned that public and private sectors important to the U.S. economy, including the financial markets, may be increasingly targeted by hostile entities with growing abilities to conduct such attacks. For example, the leader of the al Qaeda organization was quoted as urging that attacks be carried out against the "pillars of the economy" of the United States. Press accounts of captured al Qaeda documents indicated that members of this organization may be increasing their awareness and knowledge of electronic security techniques and of how to compromise and damage information networks and systems, although the extent to which they could successfully conduct sophisticated attacks has been subject to debate. A recent report on U.S. foreign relations also notes that some foreign countries are accelerating their efforts to be able to attack U.S. civilian communications systems and networks used by institutions important to the U.S. economy, including those operated by stock exchanges. The physical threats that individual organizations could reasonably be expected to face vary by type and likelihood of occurrence. For example, events around the world demonstrate that individuals carrying explosive devices near or inside facilities are a recurring threat. More powerful explosive attacks by vehicle are less common but have been used to devastating effect in recent years. Other less likely, but potentially devastating, physical threats include attacks involving biological or chemical agents, such as the anthrax letter mailings that occurred in the United States in 2001 and the release of a nerve agent in the Tokyo subway in 1995. Faced with the potential for such attacks, organizations can choose to invest in a range of physical security measures to help manage their risks. The Department of Justice has developed standards that identify measures for protecting federal buildings from physical threats. To reduce the likelihood of incurring damage from individuals or explosives, organizations can secure perimeters by controlling vehicle movement around a facility, using video monitoring cameras, increasing lighting, and installing barriers. Organizations can also prevent unauthorized persons or dangerous devices from entering their facilities by screening people and objects, restricting lobby access, and allowing only employees or authorized visitors inside. Organizations can also take steps to prevent biological or chemical agents from contaminating facilities by opening and inspecting mail and deliveries off-site. To protect sensitive data, equipment, and personnel, organizations can secure facility interiors by using employee and visitor identification systems and restricting access to critical equipment and utilities such as power and telecommunications equipment. Organizations can also reduce the risk of operations disruptions by investing in measures to protect information systems.
Information system threats include hackers, who are individuals or groups attempting to gain unauthorized access to networks or systems to steal, alter, or destroy information. Another threat, known as a denial-of-service attack, involves flooding a system with messages that consume its resources and prevent authorized users from accessing it. Information systems can also be disrupted by computer viruses that damage data directly or degrade system performance by taking over system resources. Information security guidance used for reviews of federal organizations recommends that organizations develop policies and procedures that cover all major systems and facilities and that outline the duties of those responsible for security. To prevent unauthorized access to networks and information systems, organizations can identify and authenticate users with techniques such as passwords and can screen traffic with firewalls and other filtering devices. Organizations can also use monitoring systems to detect unauthorized attempts to gain access to networks and information systems and can develop response capabilities for electronic attacks or breaches. Investing in business continuity capabilities is another way that organizations can reduce the risk that their operations will be disrupted. According to guidance used by private organizations and financial regulators, developing a sound BCP requires organizations to determine which departments, business units, or functions are critical to operations. The organization should then prepare a BCP that identifies the capabilities that have to be in place, the resources required, and the procedures to be followed for the organization to resume operations. Such capabilities can include backup facilities equipped with the information technology hardware and software that the organization needs to conduct operations. Alternatively, organizations can replace physical locations or processes, such as trading floors, with electronic systems that perform the same core functions. Many organizations active in the financial markets are critically dependent on telecommunications services for transmitting the data or voice traffic necessary to operate. As a result, organizations would have to identify their critical telecommunications needs and take steps to ensure that the services needed to support critical operations will be available after a disaster. Finally, BCP guidance such as FISCAM, which provides standards for audits of federal information systems, also recommends that organizations have backup staff who can implement BCP procedures. To the extent that an organization's ability to resume operations depends on the availability of staff with specific expertise, the organization has to maintain staff capable of conducting its critical functions elsewhere. Given that most organizations have limited resources, effectively managing the risk of operations disruptions involves making trade-offs between investing in the protection of facilities, personnel, and systems and developing business continuity capabilities. For example, organizations must weigh the expected costs of operations disruptions against the expected costs of implementing security protections, developing backup facilities, or putting in place other business continuity capabilities that would allow them to resume operations after a disaster. Risk management guidance directs organizations to identify how costly various types of temporary or extended outages or disruptions would be to parts or all of their operations.
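The weighing that such guidance calls for can be illustrated with a small expected-loss calculation. The sketch below is ours, not taken from the guidance itself, and every probability and dollar figure in it is hypothetical; it simply multiplies the assumed yearly likelihood of each disruption by its assumed cost and compares the total with the assumed cost of a mitigation.

    # Hypothetical expected-loss comparison; all probabilities and dollar
    # figures are invented for illustration.
    scenarios = [
        # (description, assumed annual probability, assumed cost if it occurs)
        ("single-facility outage (fire, power loss)", 0.05, 100_000_000),
        ("wide-scale regional disruption",            0.01, 800_000_000),
    ]

    backup_site_annual_cost = 10_000_000  # assumed yearly cost of a staffed backup facility

    # Expected annual loss = sum over scenarios of (probability * cost).
    expected_loss = sum(prob * cost for _, prob, cost in scenarios)

    print(f"Expected annual loss without mitigation: ${expected_loss:,.0f}")
    print(f"Annual cost of a staffed backup site:    ${backup_site_annual_cost:,.0f}")
    if backup_site_annual_cost < expected_loss:
        print("Under these assumptions, the backup site costs less than the expected loss.")
    else:
        print("Under these assumptions, the mitigation costs more than the expected loss.")

In practice the comparison is far less tidy, chiefly because the costs of an outage are hard to pin down.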
Such outage costs stem not only from revenues actually lost during the outage but also from potential lost income caused by damage to the organization's reputation if it cannot resume operations. In addition to estimating the potential costs of disruptions, organizations are advised to identify the potential threats that could cause such disruptions and to estimate the likelihood of these events. By quantifying the costs of various disruptions and their probabilities of occurrence, an organization can better evaluate how much it should spend, and how to allocate those resources, between implementing particular protection measures and attaining various business continuity capabilities. For example, an organization whose primary site is located in a highly trafficked, public area may have limited ability to reduce all of its physical security risks. However, such an organization could reduce the risk of its operations being disrupted by having a backup facility staffed with personnel capable of supporting its critical operations or by cross-training other staff. The 15 exchanges, clearing organizations, ECNs, and payment system processors we reviewed in 2002 had invested in various physical and information protections and business continuity capabilities to reduce the risk that their operations would be disrupted. Each of the 15 organizations had implemented physical security measures to protect facilities and personnel. To establish or increase perimeter security, some organizations had erected physical barriers around their facilities, such as concrete barriers, large flowerpots, or boulders. To reduce the likelihood that its operations would be disrupted by vehicle-borne explosives, one organization had closed off streets adjacent to its building and had guards inspect all vehicles entering the perimeter. Some organizations were also using electronic surveillance to monitor their facilities, in some cases with 24-hour closed-circuit monitoring by armed guards. Others had guards patrolling both the interior and exterior of their facilities on a 24-hour basis. In addition, all of these organizations had taken measures to protect the security of their interiors. For example, the organizations required employee identification, electronic proximity cards, or visitor screening. All 15 organizations had taken measures to reduce the risk that electronic threats would disrupt their operations. The securities markets already use networks and information systems that reduce their vulnerability to external intrusion in several ways. First, the securities exchanges and clearing organizations have established private networks that transmit traffic only to and from their members' systems and that are therefore more secure than the Internet or public telephone networks. Second, traffic on the exchange and clearing organization networks uses proprietary message protocols or formats, which are less vulnerable to the insertion of malicious messages or computer viruses. Although these features render the securities market networks generally less vulnerable, they do not completely protect them, and the prominence of securities market participants' role in the U.S. economy means that their networks are more likely than those of some other sectors to be targeted for electronic attack. The 15 organizations we reviewed in 2002 had generally implemented the elements of a sound information security program, including policies and procedures and access controls.
Thirteen of the 15 organizations were also using intrusion detection systems, and the remaining 2 had plans to implement or were considering implementing such systems. All 15 of the organizations also had procedures that they would implement in the event of systems breaches, although the comprehensiveness of these incident response procedures varied. For example, 2 organizations' incident response plans involved shutting down any breached systems but lacked documented procedures for taking further actions, such as gathering evidence on the source of the breach. Developing business continuity capabilities is another way to reduce the risk of operations disruptions, and all 15 of the organizations we reviewed in 2002 had plans for continuing operations. These plans had a variety of contingency measures to facilitate the resumption of operations. For example, 11 organizations had backup facilities to which their staff could relocate if disruptions occurred at the primary facility. One of these organizations had three fully equipped and staffed facilities that could independently absorb all operations in an emergency or disruption. In some cases, organizations did not have backup facilities that could accommodate their operations but had taken steps to ensure that key business functions could be transferred to other organizations. For example, staff at one exchange that lacked a backup facility said that most of the products it traded were already traded on other exchanges, so trading of those products would continue if its primary site were not available. In addition, this exchange had held discussions with other exchanges about transferring trading of its proprietary products to them in an emergency. These organizations had all inventoried their critical telecommunications and had made arrangements to ensure that they would continue to have service if primary lines were damaged. Although all 15 organizations we reviewed had taken steps to address physical and electronic threats and had BCPs to respond to disruptive events, at the time of our review many had limitations in their preparedness that increased the risk of an operations disruption. Nine of the 15 organizations, including 2 critical organizations, were at greater risk of experiencing an operations disruption because their BCPs did not address how they would recover if a physical attack on their primary facility left a large percentage of their staff incapacitated. Although 5 of these 9 organizations had backup facilities, they did not maintain staff outside of their primary facility who could conduct all their critical operations. Eight of the 9 organizations also had physical security vulnerabilities at their primary sites that they either had not mitigated or could not mitigate. For example, these organizations were unable to control vehicular traffic around their facilities and thus were more exposed to damage than those that did have such controls. Most of the organizations we reviewed also faced increased risk that their operations would be disrupted by a wide-scale disaster. As of August 2002, all 7 of the critical organizations we reviewed had backup facilities, including 3 whose facilities were hundreds of miles from their primary facilities. For example, 1 organization had two data centers located about 500 miles apart, each capable of conducting the organization's full scope of operations in the event that one site failed.
The organization also had a third site that could take over the processing needed for daily operations on a next-day basis. However, the backup facilities of the other four organizations were located 2 to 5 miles from their primary sites. If a wide-scale disaster caused damage to, or made inaccessible, a region greater than these distances, these 4 organizations would be at greater risk of being unable to resume operations promptly. Many of the other 8 organizations also faced increased risk that their operations would be disrupted by wide-scale disasters. At the time we conducted our review, 2 of the 8 organizations had backup facilities that were hundreds of miles from their primary operations. The remaining 6 organizations faced increased risk of being disrupted by a wide-scale disaster because 4 lacked backup facilities and 2 had backup facilities that were located 4 to 10 miles from their primary operations facilities. Of the 4 organizations that lacked a backup facility, one had begun constructing a facility near its primary site. Four of the organizations that lacked regionally dispersed backup facilities told us that they had begun efforts to become capable of conducting their operations at locations many miles from their current primary and backup sites. For example, NYSE has announced that it is exploring the possibility of creating a second active trading floor some miles from its current location. In contrast to the backup trading location NYSE built in the months following the attacks, which would be activated only if its current primary facility became unusable, the exchange plans to move the trading of some securities currently traded at its primary site to this new facility and have both sites active each trading day. However, if the primary site were damaged, the new site would be equipped to conduct all trading. In December 2002, NYSE staff told us that they were still evaluating the creation of this second active trading floor. For the organizations that lacked backup facilities, cost was the primary obstacle to establishing such capabilities. For example, staff at one organization told us that creating a backup location for its operations would cost about $25 million, or as much as 25 percent of the organization's total annual revenue. Officials at the 3 organizations without backup sites noted that the products and services they provide to the markets are largely duplicated by other organizations, so their inability to operate would have minimal impact on the overall market's ability to function. Although cost can be a limiting factor, financial market organizations have some options for creating backup locations cost-effectively. At least one of the organizations we reviewed had created the capability of conducting its trading operations at a site that is currently used for administrative functions. By having a dual-use facility, the organization avoided the cost of creating a completely separate backup facility. This option also would seem well suited to broker-dealers, banks, and other financial institutions because they frequently maintain customer service call centers with large numbers of staff who could potentially be equipped with all or some of the systems and equipment needed for the firm's trading or clearing activities.
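The geographic dispersion question discussed above reduces, in its simplest form, to whether a backup site sits outside the radius a single wide-scale event could affect. A minimal sketch, with invented coordinates and an assumed 50-mile regional-event radius:

```python
# Hypothetical sketch: flag backup sites that sit within the same region
# as the primary site, using great-circle distance. Coordinates and the
# separation threshold are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # mean Earth radius of ~3,959 miles

SITES = [
    # (organization, primary site lat/lon, backup site lat/lon) -- all invented
    ("Org A", (40.71, -74.01), (41.88, -87.63)),  # roughly 700 miles apart
    ("Org B", (40.71, -74.01), (40.75, -74.00)),  # a few miles apart
]

REGIONAL_EVENT_RADIUS = 50  # assumed radius, in miles, of a wide-scale event

for name, (plat, plon), (blat, blon) in SITES:
    d = distance_miles(plat, plon, blat, blon)
    verdict = "dispersed" if d > REGIONAL_EVENT_RADIUS else "AT RISK in one event"
    print(f"{name}: backup {d:,.0f} miles from primary -- {verdict}")
```

Distance is only a proxy—separate power grids, telecommunications routes, and labor pools matter as much—but it is the easiest test to apply systematically.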
Organizations can also minimize operations risk by testing their physical and information security measures and business continuity plans, but we found that the 15 exchanges, clearing organizations, ECNs, and payment system processors were not fully testing all these areas. In the case of physical security, such assessments can include attempting to infiltrate a building or other key facility, such as a data processing center, or assessing the integrity of automated intrusion detection systems. In the case of information security, such assessments can involve attempts to access internal systems or data from outside the organization's network or the use of software programs that identify, probe, and test systems for known vulnerabilities. For both physical and information security, these assessments can be done by the organization's own staff, its internal auditors, or outside organizations, such as security or consulting firms. The extent to which the 15 exchanges, clearing organizations, ECNs, and payment system providers that we reviewed had tested their physical security measures varied. Only 3 of the 7 critical financial organizations routinely tested their physical security; the tests included efforts to gain unauthorized access to facilities or smuggle fake weapons into buildings. None of the remaining 8 organizations routinely tested the physical security of their facilities. To test their information security measures, all 7 of the critical organizations had assessed network and systems vulnerabilities. We considered an organization's assessment current if it had occurred within the 2 years prior to our visit, because system changes over time can create security weaknesses and advances in hacking tools can create new means of penetrating systems. According to the assessments provided to us by the 7 critical organizations, all had performed vulnerability assessments of the information security controls over some of their key trading or clearing systems within the last 2 years. However, these tests were usually done not in these organizations' operating environments but on test systems or during nontrading hours. Seven of the remaining 8 organizations we reviewed had not had vulnerability assessments of their key trading or clearing networks performed within the 2 years prior to our review. However, in the last 2 years, all 15 organizations had had some form of vulnerability assessment performed for their corporate or administrative systems, which they use to manage their organizations or operate their informational Web sites. Most of the 7 organizations critical to overall market functioning were conducting regular tests of their business continuity capabilities. Based on our review, 5 of the 7 critical organizations had conducted tests of all systems and procedures critical to business continuity. However, these tests were not usually done in these organizations' real-time environments. Staff at one organization told us that they had not recently conducted live trading from their backup site because of the risks, expense, and difficulty involved. Instead, some tested their capabilities by switching over to alternate facilities for operations simulations on nontrading days. One organization tested all components critical to its operations separately and over time, but it had not tested all aspects simultaneously. Of the 8 other financial market organizations we reviewed, only 2 had conducted regular BCP tests.
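A recency test of this kind is simple to apply systematically. The sketch below flags assessments older than the 2-year currency criterion we used; the organization names and dates are hypothetical:

```python
# Sketch of the 2-year currency criterion applied to assessment dates.
# Organization names and dates are hypothetical.
from datetime import date

LAST_ASSESSED = {
    "Exchange A":     date(2001, 3, 15),
    "Clearing Org B": date(1999, 11, 2),
    "ECN C":          date(2002, 6, 30),
}

REVIEW_DATE = date(2002, 9, 1)
MAX_AGE_DAYS = 2 * 365  # assessments older than this are not current

for org, last in sorted(LAST_ASSESSED.items()):
    age = (REVIEW_DATE - last).days
    status = "current" if age <= MAX_AGE_DAYS else "NOT current"
    print(f"{org}: last assessed {last.isoformat()} ({age} days ago) -- {status}")
```

Regular BCP testing lends itself to the same kind of tracking.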
One of the organizations that did test regularly had an extensive disaster recovery testing regimen that involved three different scenarios: simulating a disaster at the primary site and running its systems and network from the backup site; simulating a disaster at the backup site and running the systems and network from the primary site; and running its systems and network from the consoles at the backup site with no staff in the control room at the primary site. Organizations also discovered the benefits of conducting such tests. For example, through its testing one organization learned vital information about the capabilities of third-party applications, identified the need to configure certain in-house applications to work at the recovery site, installed needed peripheral equipment at the backup site, placed technical documentation on third-party application installation procedures at the backup site, and improved instructions on how to reach the backup site if normal transportation routes were unavailable. An official at this organization told us that with every test, the organization expected to learn something about the performance of its BCP and identify ways to improve it. The exchanges, clearing organizations, ECNs, and payment system providers that we reviewed had all taken various steps to reduce the risk that their operations would be disrupted by physical or electronic attacks. In general, the organizations we considered more critical to the overall ability of the markets to function had implemented the most comprehensive physical and information security measures and BCPs. However, limitations in some organizations' preparedness appeared to increase the risk that their operations could be disrupted, because these organizations had physical security vulnerabilities that were not mitigated with business continuity capabilities. The extent to which these organizations had reduced the risk posed by a wide-scale disruption also varied. Because the importance of these organizations' operations to the overall markets varies, regulators are faced with the challenge of determining the extent to which these organizations should take additional actions to address these limitations and reduce risks to the overall markets. Although banking and securities regulators have begun to take steps to prevent future disasters from causing widespread payment defaults, they have not taken important actions that would better ensure that trading in critical U.S. financial markets could resume smoothly and in a timely manner after a major disaster. The three regulators for major market participants—the Federal Reserve, OCC, and SEC—are working jointly with market participants to develop recovery goals and sound business continuity practices that will apply to a limited number of financial market organizations to ensure that these entities can clear and settle transactions and meet their financial obligations after future disasters. However, the regulators' recovery goals and sound practices do not extend to organizations' trading activities or to the stock exchanges. The regulators also had not developed complete strategies that identify where trading could be resumed or which organizations would have to be ready to conduct trading if a major exchange or multiple broker-dealers were unlikely to be operational for an extended period. Individually, these three regulators have overseen operations risks in the past.
SEC has a program—the Automation Review Policy (ARP)—for reviewing exchanges' and clearing organizations' efforts to reduce operations risks, but this program faces several limitations. Compliance with the program is voluntary, and some organizations have not always implemented important ARP recommendations. In addition, market participants raised concerns over the inexperience and insufficient technical expertise of SEC staff, and the resources committed to the program limit the frequency of examinations. Lacking specific requirements in the securities laws, SEC has not generally examined operations risk measures in place at broker-dealers. The Federal Reserve and OCC are tasked with overseeing the safety and soundness of banks' operations and had issued and were updating guidance that covered information system security and business continuity planning. They also reported annually examining information security and business continuity at the entities they oversee, but these reviews did not generally assess banks' measures against physical attacks. Treasury and the financial regulators have various initiatives under way to improve the financial markets' ability to respond to future crises (we discuss these in app. II) and assess how well the critical assets of the financial sector are being protected. As part of these initiatives, certain financial market regulators have begun to identify business continuity goals for the clearing and settling organizations for government and corporate securities. On August 30, 2002, the Federal Reserve, OCC, SEC, and the New York State Banking Department issued the Draft Interagency White Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System. The paper presents sound practices to better ensure that clearance and settlement organizations will be able to resume operations promptly after a wide-scale, regional disruption. The paper proposes that these organizations adopt certain practices, such as identifying the activities they perform that support these critical markets; developing plans to recover these activities on the same business day; and having out-of-region resources sufficient to recover these operations that are not dependent on the same labor pool or the same transportation, telecommunications, water, and power infrastructure. The regulators plan to apply the sound practices to a limited number of financial market organizations whose inability to perform certain critical functions could result in a systemic crisis that threatens the stability of the financial markets. If these organizations were unable to recover sufficiently and meet their financial obligations, other market participants could similarly default on their obligations and create liquidity or credit problems. According to the white paper, the sound practices apply to "core clearing and settlement organizations," which include market utilities that clear and settle transactions on behalf of market participants and the two clearing banks in the government securities market. In addition, the regulators expect firms that play significant roles in these critical financial markets to comply with sound practices that are somewhat less rigorous. The white paper indicates that probably 15 to 20 banks and 5 to 10 broker-dealers have volume or value of activity in these markets sufficient to present a systemic risk if they were unable to recover their clearing functions and settle all their transactions by the end of the business day.
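The sound practices reduce to a small set of yes-or-no tests that a firm could apply to each critical activity's recovery plan. A minimal sketch follows; the plan fields are invented, and treating "same business day" as an eight-hour recovery window is our assumption, not the white paper's:

```python
# Hypothetical sketch of the white paper's tests applied to one recovery
# plan. Field names, the sample plan, and the 8-hour reading of "same
# business day" are all assumptions made for illustration.

plan = {
    "activity": "government securities settlement",
    "recovery_hours": 6,             # expected time to resume the activity
    "primary_region": "northeast",
    "backup_region": "midwest",
    "shared_labor_pool": False,      # does the backup rely on the same staff?
    "shared_infrastructure": False,  # same transport/telecom/water/power?
}

checks = {
    "recovers same business day": plan["recovery_hours"] <= 8,
    "backup is out of region":    plan["backup_region"] != plan["primary_region"],
    "independent labor pool":     not plan["shared_labor_pool"],
    "independent infrastructure": not plan["shared_infrastructure"],
}

for test, passed in checks.items():
    print(f"{plan['activity']}: {test}: {'pass' if passed else 'FAIL'}")
```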
The regulators also sought comment on the appropriate scope and application of the white paper, including whether they should address the duration of disruption that should be planned for, the geographic concentration of backup sites, and the minimum distance between primary and backup facilities. After considering the comments they receive, the regulators intend to issue a final version of the white paper in 2003 presenting the practices to be adopted by clearance and settlement organizations for these markets. Based on our analysis of the comment letters that had been sent to the regulators as of December 2002, market participants and other commenters have raised concerns over the feasibility and cost of the practices advocated by the white paper. The organizations that have commented on the paper include banks, broker-dealers, industry associations, and information technology companies and consultants; many of these organizations complimented the regulators for focusing attention on a critical area. However, many commenters have urged the regulators to ensure that any practices issued balance the cost of implementing improved business continuity capabilities against the likelihood of various types of disruptions occurring. For example, a joint letter from seven broker-dealers and banks stated that requiring organizations to make costly changes to meet remote possibilities is not practical. Other commenters urged regulators not to mandate minimum distances between primary sites and backup locations for several reasons. For example, some commenters noted that beyond certain distances—which the regulators acknowledged could be between 60 and 100 kilometers—firms cannot simultaneously process data at both locations. Rather than specify a minimum distance, others stated that the practices should provide criteria that firms should consider in determining where to locate their backup facilities. One broker-dealer commented that it had chosen the locations of its two operating sites to minimize the likelihood that both would be affected by the same disaster or disruption. It noted that its two sites were served by separate water treatment plants and power grids and that different telecommunications facilities supported each site. A third commonly cited concern was that the regulators should implement the practices as guidelines, rather than rules. For example, one industry association stated, "Regulators should not impose prescriptive requirements, unless absolutely necessary, in order to enhance the firms' ability to remain competitive in the global market." Ensuring that organizations recover their clearing functions would help ensure that settlement failures do not create a broader financial crisis, but regulators have not begun a similar effort to develop recovery goals and business continuity practices to ensure that trading activities can resume promptly in various financial markets. Trading activities are important to the U.S. economy because they facilitate many important economic functions, including providing a means to productively invest savings and allowing businesses to fund operations. The securities markets also allow companies to raise capital for new ventures. Ensuring that trading activities resume in a smooth and timely manner would appear to be a regulatory goal for SEC, which is specifically charged with maintaining fair and orderly markets.
However, Treasury and SEC staff told us that the white paper practices would be applied to clearing functions because such activities are concentrated in single entities for some markets or in very few organizations for others, and thus pose a greater potential for disruption. In contrast, they did not include trading activities or organizations that conduct only trading functions, such as the securities exchanges, because these activities are performed by many organizations that could substitute for one another. For example, SEC staff said that if one of the exchanges were unable to operate, other exchanges or the ECNs could trade its products. Similarly, they said that individual broker-dealers are not critical to the markets because other firms can perform their roles. Although regulators have begun to determine which organizations are critical for accomplishing clearing functions, identifying the organizations that would have to be ready for trading in U.S. financial markets to resume within a given period of time is also important. If key market participants are not identified and do not adopt sound business continuity practices, the markets may not have sufficient liquidity for fair and orderly trading. For example, in the past when NYSE experienced operations disruptions, the regional exchanges have usually chosen to suspend trading until NYSE could resume. SEC staff had also previously told us that the regional exchanges may not have sufficient capacity to process the full volume usually traded on NYSE. If the primary exchanges were not operational, trading could be transferred to the ECNs, but regulators have not assessed whether such organizations have sufficient capacity to conduct such trading or whether other operational issues would hinder it. SEC has begun efforts to develop a strategy for resuming stock trading for some exchanges, but the plan is not yet complete and does not address all exchanges and all securities. To provide some assurance that stock trading could resume if either NYSE or NASDAQ were unable to operate after a disaster, SEC has asked these exchanges to take steps to ensure that their information systems can conduct transactions in the securities that the other organization normally trades. SEC staff told us that each organization will have to ensure that its systems can properly process the varying number of characters in the symbols that each uses to represent securities. However, as of December 2002, SEC had not identified the specific capabilities that the exchanges should implement. For example, NASDAQ staff said that various alternatives were being proposed for conducting this trading, each involving different system changes or processing capacity considerations. In addition, although each exchange trades thousands of securities, NYSE staff told us that they are proposing to accommodate only the top 250 securities; the remainder of NASDAQ's securities, which have smaller trading volumes, would have to be traded by the ECNs or other markets. NASDAQ staff said they planned to trade all NYSE securities if necessary. NYSE staff also said that their members have been asked to ensure that the systems used to route orders to NYSE are ready to accept NASDAQ securities by June 2003. Furthermore, although some testing is under way, neither exchange has completely tested its ability to trade the other's securities. Strategies for other exchanges and products also have not been developed.
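One concrete compatibility issue in any such strategy is the symbol-length problem SEC staff described above. At the time, NYSE symbols generally ran one to three characters and NASDAQ symbols four or five, so an order-entry field sized for one convention can reject the other market's symbols; the check below is a hypothetical illustration:

```python
# Hypothetical sketch of the symbol-length compatibility check described
# above. The per-venue length conventions reflect the era's general
# practice (1-3 characters for NYSE listings, 4-5 for NASDAQ).

SYMBOL_LENGTHS = {"NYSE": range(1, 4), "NASDAQ": range(4, 6)}

def fits_convention(venue: str, symbol: str) -> bool:
    """Does this symbol fit the length convention a system was built for?"""
    return len(symbol) in SYMBOL_LENGTHS[venue]

# A system built to NYSE's convention receiving NASDAQ-style symbols:
for symbol in ("IBM", "MSFT", "CSCO"):
    ok = fits_convention("NYSE", symbol)
    print(f"{symbol}: {'accepted' if ok else 'REJECTED -- symbol field too short'}")
```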
As noted in chapter 2 of this report, trading was not resumed in U.S. stock and options markets after the attacks until several key broker-dealers were able to recover their operations sufficiently. Resuming operations after disruptions can be challenging because large broker-dealers' trading operations can require thousands of staff and telecommunications lines. In some cases, organizations that may not appear critical to the markets in ordinary circumstances could become so if a disaster affects other participants more severely. For example, in the days following the attacks, one of the IDBs that previously had not been one of the most active firms was one of the few firms able to resume trading promptly. Lacking specific requirements under the securities laws, SEC uses a voluntary program to oversee exchange, clearing organization, and ECN information systems operations. U.S. securities laws, rules, and regulations primarily seek to ensure that investors are protected. For example, securities laws require that companies issuing securities disclose material financial information, and SRO rules require broker-dealers to determine the suitability of products before recommending them to their customers. The regulations generally do not contain specific requirements applicable to physical or information system security measures or business continuity capabilities. However, as part of its charge to ensure fair and orderly markets and to address information system and operational problems experienced by some markets during the 1980s, SEC created a voluntary program—ARP—that covered information technology issues at the exchanges, clearing organizations, and, eventually, ECNs. SEC's 1989 ARP statement called for the exchanges and clearing organizations to establish comprehensive planning and assessment programs to test system capacities, develop contingency protocols and backup facilities, periodically assess the vulnerability of their information systems to external or internal threats, and report the results to SEC. SEC issued an additional ARP statement in 1991 that called for exchanges and clearing organizations to obtain independent reviews—done by external organizations or internal auditors—of their general controls in several information system areas. SEC's ARP staff conducted examinations of exchanges, clearing organizations, and ECNs that addressed their information security and business continuity. The examinations are based on ARP policy statements that cover information system security, business continuity planning, and physical security at data and information systems centers but do not address how organizations should protect their entire operations from physical attacks. SEC's ARP program staff explained that they analyze the risks faced by each organization to determine which are the most important to review. As a result, the staff are not expected to review every issue specific to the information systems or operations of each exchange, clearing organization, and ECN during each examination. We found that SEC ARP staff were reviewing important operations risks at the organizations they examined. Based on our review of the 10 most recent ARP examinations completed between January 2001 and July 2002, 9 covered information system security policies and procedures, and 7 covered business continuity planning. Only one examination—done after the September 11, 2001, attacks—included descriptions of overall physical security improvements.
SEC ARP staff told us that telecommunications resiliency was a part of normal examinations, but none of the examination reports we reviewed specifically discussed these organizations' business continuity measures for ensuring that their telecommunications services would be available after disasters. However, ARP staff said that all of these operations risk issues would be addressed as part of future reviews. Although SEC's voluntary ARP program provides some assurance that securities markets are being operated soundly, some of the organizations subject to ARP have not taken action on some important recommendations. Since the program's inception, ARP staff recommendations have prompted numerous improvements in the operations of exchanges, clearing organizations, and ECNs. ARP staff also reviewed exchange and clearing organization readiness for the Year 2000 date change and decimal trading, and market participants implemented both industrywide initiatives successfully. However, because the ARP program was not implemented under SEC's rulemaking authority, compliance with the ARP guidance is voluntary. Although SEC staff said that they were satisfied with the cooperation they received from the organizations covered by the ARP program, in some cases organizations did not take actions to correct significant weaknesses that ARP staff identified. For example, as we reported in 2001, three organizations had not established backup facilities, which SEC ARP staff had raised as significant weaknesses. Our report noted, "Securities trading in the United States could be severely limited if a terrorist attack or a natural disaster damaged one of these exchange's trading floor." In addition, for years SEC's ARP staff raised concerns and made recommendations relating to inadequacies in NASDAQ's capacity planning efforts, and NASDAQ's weaknesses in this area delayed the entire industry's transition to decimal pricing for several months. NASDAQ staff told us they have implemented systems with sufficient capacity, and SEC staff said they are continuing to monitor the performance of these systems. We also reported that exchanges and clearing organizations sometimes failed to submit notifications to SEC regarding systems changes and outages, as expected under the ARP policy statement, and we again saw this issue cited in 2 of the 10 recent ARP examination reports we reviewed. ARP staff continue to find significant operational weaknesses at the organizations they oversee. In the 10 examinations we reviewed, SEC staff found weaknesses at all 9 organizations examined and made 74 recommendations for improvement. We compared these weaknesses to the operational elements we used in our analysis of financial market organizations (as discussed in ch. 3 of this report). Our analysis showed that the ARP staff made at least 22 recommendations to address significant weaknesses in the 9 organizations' physical or information system security or business continuity planning efforts—including 10 recommendations to address significant weaknesses at organizations critical to the functioning of the markets. For example, in an examination conducted in 2000, ARP staff found that one exchange did not have consistent information system security practices across the organization and lacked a centrally administered, consolidated information system security policy.
In addition, although SEC recommends that organizations subject to ARP have vulnerability assessments performed on their information systems, ARP staff found that this exchange had not assessed its information systems. In three other reviews, the ARP staff found that the organizations had not complied with ARP policy expectations to fully test their contingency plans. ARP staff noted other significant weaknesses, including inadequate BCPs or backup facilities. ARP staff said that they considered all the recommendations they made to be significant, including the 74 recommendations made in these 10 reports. These recommendations will remain open until the next time the ARP staff review the organization and can assess whether the recommendations have been acted upon. Because the ARP program was established through a policy statement and compliance is voluntary, SEC lacks specific rules that it can use to compel improved responsiveness to its recommendations from the exchanges and clearing organizations subject to ARP. SEC staff explained that they chose not to use a rule to implement ARP because rules can become obsolete and voluntary guidance provides them with flexibility. SEC staff also told us that an organization's failure to follow ARP expectations could represent a violation of the general requirement that exchanges maintain the ability to operate, and that they could therefore take action under that authority. However, they noted that the use of such authority is rare. In contrast, SEC has issued a rule requiring the most active ECNs to comply with all the ARP program's standards. In 1998, SEC issued a regulation that subjected alternative trading systems such as ECNs to increased regulatory scrutiny because of their increasing importance to U.S. securities markets. Included in this regulation was a rule that required ECNs whose trading volumes exceeded certain thresholds to comply with the same practices as those contained in the ARP policy statements. In its explanation of the regulation, SEC noted that its ARP guidelines are intended to ensure that short-term cost cutting by registered exchanges does not jeopardize the operation of the securities markets, and that it was therefore extending these requirements to the ECNs because of their potential to disrupt the securities markets. We previously recommended that SEC develop formal criteria for assessing exchange and clearing organization cooperation with the ARP program and perform an assessment to determine whether the voluntary status of the ARP program is appropriate. Although they were generally satisfied with the level of cooperation, SEC staff told us that they were reviewing the extent to which exchanges and clearing organizations complied with the ARP program and planned to submit the analysis to SEC commissioners in 2003. In addition to possibly changing the status of the program for the 22 exchanges and clearing organizations subject to ARP, SEC staff told us that they were considering the need to extend the ARP program to those broker-dealers for whom it would be appropriate to adopt the sound business continuity practices that will result from the joint regulatory white paper. Limited resources and challenges in retaining experienced ARP staff have affected SEC's ability to oversee an increasing number of organizations and more technically complex market operations. Along with the industrywide initiatives discussed earlier, the ARP staff's workload has expanded to cover 32 organizations with more complex technology and communications networks.
However, SEC has had problems retaining qualified staff, and market participants have raised concerns about the experience and expertise of ARP staff. As SEC has experienced considerable staff losses overall, the ARP program also has had high turnover. As of October 2002, ARP had 10 staff, but SEC staff told us that staffing levels had fluctuated and had been as low as 4 in some years. As a result, some ARP program staff had limited experience: 4 of the 10 current staff had less than 3.5 years' experience, including 3 with less than 2 years' experience. During our work on SEC resource issues in 2001, market participants and former SEC staff raised concerns that the level of resources and staff expertise SEC has committed to reviewing technology issues is inadequate to address complex market participant operations. For example, officials from several market participants we interviewed in 2001 told us that high turnover resulted in reviews of their organizations being done by inexperienced SEC staff who lacked in-depth knowledge. SEC staff told us that they continue to emphasize training to ensure that their staff have the proper expertise to conduct effective reviews. Resource limitations also affect the frequency of ARP reviews. At current staffing levels, SEC staff said that they are able to examine only about 7 of the 32 organizations they oversee as part of the ARP program each year—an average cycle of more than 4 years per organization. Although standards for federal organizations' information systems require security reviews to be performed at least once every 3 years, these standards recommend that reviews of high-risk systems or those undergoing significant systems modifications be done more frequently. Although our analysis of SEC ARP examination data found that SEC had conducted recent reviews of almost all the organizations we considered critical to the financial markets, long periods often elapsed between ARP examinations of these organizations. Between September 1999 and September 2002, SEC examined 6 of the 7 critical organizations under its purview. However, as shown in figure 12, the intervals between the most recent examinations exceeded 3 years for 5 of the 7 critical organizations, including an organization that was not reviewed at all during this period. Our analysis of ARP report data showed that the intervals between reviews of critical organizations averaged 39 months, with the shortest interval being 12 months and the longest 72 months. Since September 1999, the SEC ARP staff had reviewed 7 of the 8 less critical exchanges, clearing organizations, and ECNs that we visited during this review. However, SEC staff told us that the ARP program also may be tasked with reviewing the extent to which broker-dealers important to clearing and trading in U.S. securities markets are adhering to sound business continuity practices. Such an expansion of the ARP staff's workload would likely further reduce their ability to review frequently all the important organizations under SEC's authority. The potential increase in SEC's appropriations could provide the agency an opportunity to increase the level and quality of the resources it has committed to the ARP program. The Sarbanes-Oxley Act of 2002, which mandated various accounting reforms, also authorized increased appropriations for SEC for fiscal year 2003. Specifically, the act authorized $776 million for 2003, an increase of about 51 percent over the nearly $514 million SEC received for fiscal year 2002.
The act directs SEC to devote $103 million of the newly authorized amount to personnel and $108 million to information technology. If appropriated, these additional funds could allow SEC to increase the resources devoted to the ARP program. Increased staffing levels also could allow SEC to conduct more frequent examinations and better ensure that significant weaknesses are identified and addressed in a timely manner. The additional resources could also be used to increase the technical expertise of SEC's staff, further enhancing the agency's ability to review complex information technology issues. SEC and the securities market SROs generally have not examined broker-dealers' physical and information system security and business continuity efforts, but they planned to increase their focus on these issues in the future. SEC's Office of Compliance Inspections and Examinations (OCIE) examines broker-dealers, mutual funds, and other securities market participants. However, for the most part, OCIE examinations focus on broker-dealers' compliance with the securities laws and not on physical and electronic security and business continuity, which these laws do not generally address. After some broker-dealers that specialized in on-line trading experienced systems outages, OCIE staff told us, they began addressing information system capacity, security, and contingency capabilities at these firms. SEC predicated its reviews of these issues on the fact that these firms, as a condition of conducting a securities business, would need to have sufficient operational capacity to enter, execute, and settle orders and to deliver funds and securities promptly and accurately. In addition, the Gramm-Leach-Bliley Act (GLBA) required SEC to establish standards for the entities it oversees to safeguard the privacy and integrity of customer information and prevent unauthorized disclosure. As a result, in some reviews done since July 2001, OCIE staff discussed the controls and policies that firms have implemented to protect customer information from unauthorized access. However, SEC OCIE staff acknowledged that their expertise in these areas is limited. OCIE staff told us that few of the approximately 600 examiners they employ have information technology backgrounds. During the work we conducted for our report on SEC's staffing and workload, staff at several broker-dealers told us that the SEC staff who review their firms lacked adequate technology expertise. SROs also generally have not addressed these issues at broker-dealers. Under U.S. securities laws, exchanges acting as SROs have direct responsibility for overseeing their broker-dealer members. NYSE and NASD together oversee the majority of broker-dealers in the United States. According to officials at these two SROs, staff conduct examinations, as often as annually, to review adherence to capital requirements and other securities regulations. However, staff at both organizations acknowledged that, in the past, their oversight generally did not focus on how members conducted their operations from a physical security, information systems security, or business continuity perspective. Representatives of the SROs told us they plan to include aspects of these issues in future reviews. For example, they plan to examine their members' information system security to ensure compliance with GLBA customer information protection provisions.
NYSE and NASD plan to focus on business continuity issues in future reviews because, in August 2002, both submitted similar rules for SEC approval that will require all of their members to establish BCPs. The areas the plans are to address include the following: backup of books and records; procedures for resuming operations of critical systems; alternate means of communicating with members' staff; and regulatory reporting and communications with regulators. NYSE and NASD officials told us that once these rules were adopted, their staff would include these matters in the scope of their examinations after allowing sufficient time for firms to develop the required BCPs. As part of their mandate to oversee banks' safety and soundness, the banking regulators, including the Federal Reserve and OCC, issued guidance that directs depository institutions, or banks, to address potential operations risks with physical and information system security and business continuity measures. The guidance includes recommended steps that banks should take to reduce the risk of operations disruptions from physical or electronic attacks and to recover from such events with business continuity capabilities. For example, in 1996 these regulators jointly issued a handbook on information systems, which calls for banks to conduct an analysis of their risks and implement measures to reduce them. Banks were also to have access controls for their systems and programs. Regarding physical security, the banking regulators expect banks to ensure the safety of assets and to physically protect the data centers used for information systems processing. For example, the Federal Reserve's guidance directs banks to take security steps to protect cash and vaults and to ensure that bank facilities are protected from theft. The banking regulators' joint 1996 handbook discussed measures to secure data centers and information system assets. However, the bank regulators' guidance did not specifically address measures to protect facilities from terrorist or other physical attacks. Regarding business continuity, the joint handbook expects banks to have plans addressing all critical services and operations necessary to minimize disruptions in service and financial losses and to ensure timely resumption of operations after a disaster. Banks also were to identify the critical components of their telecommunications networks and assess whether they were subject to single points of failure—which could occur, for example, if all lines were routed through a single central switching office—and to identify alternate routes and implement redundancy. The Federal Reserve and OCC, in conjunction with the other depository institution regulators, are also developing expanded guidance on physical and electronic security and business continuity planning. They plan to issue separate handbooks on information system security and business continuity in early 2003. Bank regulatory staff provided us with a draft of the information system security guidance, which expects banks to have programs that include security policies, access controls, and intrusion monitoring; vulnerability assessments; and incident response capabilities. The draft guidance also covers physical security from an overall facility perspective and suggests that banks use appropriate controls to restrict or prevent unauthorized access and to prevent damage from environmental contaminants.
Banks will also be instructed to assess their exposure to risks of fire and water damage, explosives, or other threats arising from their location, building configuration, or neighboring entities. According to bank regulatory staff, they are also drafting a separate guidance handbook addressing business continuity issues. Bank regulators reported regularly examining how banks are addressing physical and information system security and business continuity issues. The Federal Reserve and OCC together oversee more than 3,100 institutions, including the largest U.S. banks, and are required to examine most institutions annually. At the end of fiscal year 2002, the Federal Reserve had over 1,200 examiners and OCC over 1,700. Among these staff, each agency had between 70 and 110 examiners who specialized in reviewing information systems issues. Using a risk-based approach, these regulators' examiners tailor their examinations to each institution's unique risk profile. As a result, some areas receive attention every year, while others are examined only periodically. Staff at the Federal Reserve and OCC told us that their examiners consider how their institutions are managing operations risks and review these areas when appropriate. For example, Federal Reserve staff told us that under their risk-based examination approach, information security is considered as part of each examination, particularly since regulations implementing section 501(b) of GLBA require that the regulators assess how financial institutions protect customer information. They said that the extent to which information security is reviewed at each institution can vary, with less detailed reviews generally done at institutions not heavily reliant on information technology. They also said that business recovery issues were addressed in most examinations. Both Federal Reserve and OCC staff told us that physical security was considered as part of information security in reviewing protections at data centers. Both regulators also expect banks' internal auditors to review physical security for vault and facilities protection. However, the focus of these reviews has not generally been on the extent to which banks are protected from terrorist or other physical attacks. In light of the September 2001 attacks, these regulators stated that banks' physical and information system security and business continuity policies and procedures would be scrutinized even more extensively in future examinations. Because we did not review bank examinations as part of our work, we were unable to independently determine how often and how extensively these two bank regulatory agencies reviewed information security and business continuity at the entities they oversee. Financial market regulators have begun to develop goals and a strategy for resuming operations, along with sound business continuity practices, for a limited number of organizations that conduct clearing functions. The business continuity practices that result from this effort will likely address several important areas, including geographic separation between primary and backup locations and the need to ensure that organizations have provisions for the separate staff and telecommunications services needed to conduct critical operations at backup locations. If successfully implemented, these sound practices should better ensure that clearing in critical U.S.
financial markets could resume and settlement would be completed after a disaster, potentially avoiding a harmful systemic crisis. However, trading on the markets for corporate securities, government securities, and money market instruments is also vitally important to the economy, and the United States deserves similar assurance that trading activities would also be able to resume when appropriate and without excessive delay. The U.S. economy has demonstrated that it can withstand short periods during which markets are not trading. After some events, keeping markets closed for a time could be appropriate to allow for disaster recovery and to reduce market overreaction. However, long delays in reopening the markets could be harmful to the economy. Without trading, investors cannot accurately value their securities and are unable to adjust their holdings. The attacks demonstrated that the ability of markets to recover could depend on the extent to which market participants have made sound investments in business continuity capabilities. Without identified strategies for recovery, determination of the sound practices needed to implement these strategies, and identification of the organizations that would conduct trading under these strategies, the risk is increased that markets may not be able to resume trading in a fair and orderly fashion without excessive delays. Goals and strategies for recovering trading activities could be based on likely disaster scenarios and identify the organizations that could be used to conduct trading in the event that other organizations were unable to recover within a reasonable time. These goals and strategies would provide market participants with information to make better decisions about how to improve their operations and provide regulators with sound criteria for ensuring that trading on U.S. markets could resume when appropriate. Strategies for resuming trading could involve identifying which markets would assume the trading activities of others or identifying other venues, such as ECNs, in which trading could occur. To be viable, these strategies would also have to identify whether any operational changes at these organizations would be necessary to allow this trading to occur. Although SEC has begun efforts to ensure that trading can be transferred between NYSE and NASDAQ, these efforts are not complete, and not all securities are covered. Because large-scale transfers of securities trading to organizations that do not normally conduct such activities risk operational difficulties, testing the various scenarios would likely reduce such problems and help ensure that the envisioned strategies are viable. Expanding the set of organizations required to implement sound business continuity practices beyond those important for clearing would better ensure that the organizations needed for the resumption of smooth and timely trading have developed the necessary business continuity capabilities. As discussed in chapter 3, the exchanges, clearing organizations, and ECNs we reviewed had taken many steps to reduce the risk that they would be disrupted by physical or electronic attacks and had mitigated risk through business continuity planning. However, some organizations still had limitations in their business continuity measures that increased the risk that their operations would be disrupted, including organizations that might need to trade if the major exchanges were unable to resume operations.
In addition, the attacks demonstrated that organizations that were not previously considered critical to the markets' functioning could greatly increase in importance following a disaster. Therefore, identifying all the organizations that could become important to resuming trading and ensuring that they implement sound business continuity practices would increase the likelihood that U.S. financial markets could recover from future disasters. Given that the importance of different organizations to the overall markets varies, any recovery goals and business continuity practices that are developed could similarly vary in their expectations for different market participants, with the ultimate goal of better ensuring that organizations take reasonable, prudent steps in advance of any future disaster. For example, broker-dealers could be expected to take steps to ensure that their customer records are backed up frequently and that these backup records are maintained at a considerable distance from the firms' primary sites. This would allow customers to transfer their accounts to other broker-dealers if the firm through which they usually conduct trading is not operational after a major disaster. Given the increased threats demonstrated by the September 11 attacks and the need to ensure that key financial market organizations are following sound practices, securities and banking regulators' oversight programs are important mechanisms for ensuring that U.S. financial markets are resilient. However, SEC's ARP program—which oversees the key clearing organizations and exchanges and may be used to oversee additional organizations' adherence to the white paper on sound practices—currently faces several limitations. Because it is a voluntary program, SEC lacks leverage to ensure that market participants implement important recommended improvements. An ARP program that draws its authority from an issued rule could provide SEC additional assurance that exchanges and clearing organizations adhere to important ARP recommendations and any new guidance developed jointly with other regulators. To preserve the flexibility that SEC staff see as a strength of the current ARP program, the rule would not have to mandate specific actions but could instead require that the exchanges and clearing organizations engage in activities consistent with the practices and tenets of the ARP policy statements. This would give SEC staff the ability to adjust their expectations for the organizations subject to ARP as technology and industry best practices evolve, while providing clear regulatory authority to require prudent actions when necessary. SEC already requires ECNs to comply with ARP guidance; extending the rule to the exchanges and clearing organizations would place them on similar legal footing. Additional staff, including staff with technology backgrounds, could better ensure the effectiveness of the ARP program's oversight. SEC could conduct more frequent examinations, as envisioned by federal information technology standards, and more effectively review the complex, large-scale technology operations in place at the exchanges, ECNs, and clearing organizations. If the ARP program must also begin reviewing the extent to which broker-dealers important to clearing and trading in U.S.
securities markets are adhering to sound business continuity practices, additional staff resources would likely be necessary to prevent further erosion in the ability of the SEC staff to oversee all the important organizations under its authority. The increased appropriations authorized in the Sarbanes-Oxley Act, if received, would present SEC with a clear opportunity to enhance its technological resources, including the ARP program, without affecting other important initiatives. So that trading in U.S. financial markets can resume after future disruptions in as timely a manner as appropriate, we recommend that the Chairman, SEC, work with industry to develop goals and strategies to resume trading in securities; determine the sound business continuity practices that organizations would need to implement to meet these goals; identify the organizations, including broker-dealers, that would likely need to operate for the markets to resume trading, and ensure that these entities implement sound business continuity practices that at a minimum allow investors to readily access their cash and securities; and test trading resumption strategies to better assure their success. In addition, to improve the effectiveness of SEC's ARP program and the preparedness of securities trading and clearing organizations for future disasters, we recommend that the Chairman, SEC, take the following actions: issue a rule requiring that the exchanges and clearing organizations engage in activities consistent with the operational practices and other tenets of the ARP program; and, if sufficient funding is available, expand the level of staffing and resources committed to the ARP program. We requested comments on a draft of this report from the heads, or their designees, of the Federal Reserve, OCC, Treasury, and SEC. The Federal Reserve and SEC provided written comments, which appear in appendixes III and IV, respectively. The Federal Reserve, OCC, and SEC also provided technical comments, which we incorporated as appropriate. SEC generally agreed with the report and the goals of its recommendations. The letter from SEC's Market Regulation Division Director noted that SEC has been working with market participants to strengthen their resiliency and that SEC staff agreed that the financial markets should be prepared to resume trading in a timely, fair, and orderly fashion following a catastrophe—the goal of our recommendations that SEC work with the industry to develop business continuity goals, strategies, and practices. SEC's letter expressed a concern that this recommendation expects SEC to ensure that broker-dealers implement business continuity practices that would allow trading activities to resume after a disaster. The SEC staff noted that broker-dealers are not required to conduct trading or provide liquidity to markets; rather, doing so would be a business decision on the part of these firms. However, SEC's letter noted that broker-dealers are required to be able to ensure that any completed trades are cleared and settled and that customers have access to the funds and securities in their accounts as soon as is physically possible. SEC's letter stated that the BCP expectations for these firms must reflect these considerations. We agree with SEC that the business continuity practices it develops with broker-dealers should reflect that the extent to which these firms' BCPs address trading activities is a business decision on the part of a firm's management.
In addition, SEC would need to take into account the business continuity capabilities implemented by broker-dealers that normally provide significant order flow and liquidity to the markets when it works with the exchanges and other market participants to develop goals and strategies for recovering from various disaster scenarios. To the extent that many of these major broker-dealers may be unable to conduct their normal volume of trading in the event of some potential disasters without extended delays, the intent of our recommendation is that SEC develop strategies that would allow U.S. securities markets to resume trading, when appropriate, through other broker-dealers, such as regional firms, that are less affected by the disaster. However, to ensure that such trading is orderly and fair to all investors, SEC will have to ensure that broker-dealers' business continuity measures at a minimum are adequate to allow prompt transfers of customer funds and securities to other firms so that the customers of firms unable to resume trading are not disadvantaged. Regarding our recommendations to ensure that SEC's ARP program has sufficient legal authority and resources to be an effective oversight mechanism over exchanges, clearing organizations, and ECNs, SEC's Market Regulation Division Director stated that SEC staff will continue to assess whether rulemaking is appropriate. In addition, the letter stated that, if the agency receives additional funding, the staff will consider recommending to the Chairman that ARP staffing and resources be increased. SEC's letter also commented that physical security beyond the protection of information technology resources was not envisioned as a component of ARP when the program was initiated. SEC staff indicated that they may need additional resources and expertise to broaden their examinations to include more on this issue. In his letter, the Federal Reserve's Staff Director for Management noted that the Federal Reserve is working to improve the resilience of the financial system by cooperating with banking and securities regulators to develop sound practices to reduce the systemic effects of wide-scale disruptions. The Federal Reserve is also working with the other banking regulators to expand the guidance for banks on information security and business continuity.
September 11 exposed the vulnerability of U.S. financial markets to wide-scale disasters. Because the markets are vital to the nation's economy, GAO assessed (1) the effects of the attacks on market participants' facilities and telecommunications and how prepared participants were for attacks at that time, (2) the physical and information security and business continuity plans market participants had in place after the attacks, and (3) regulatory efforts to improve preparedness and oversight of market participants' risk reduction efforts. The September 11 attacks severely disrupted U.S. financial markets, resulting in the longest closure of the stock markets since the 1930s and severe settlement difficulties in the government securities market. While exchange and clearing organization facilities were largely undamaged, critical broker-dealers and bank participants had facilities and telecommunications connections damaged or destroyed. These firms and infrastructure providers made heroic and sometimes ad hoc and innovative efforts to restore operations. However, the attacks revealed that many of these organizations' business continuity plans (BCP) had not been designed to address wide-scale events. GAO reviewed 15 organizations that perform trading or clearing and found that since the attacks, these organizations had improved their physical and information security measures and BCPs to reduce the risk of disruption from future attacks. However, many of the organizations still had limitations in their preparedness that increased their risk of being disrupted. For example, 9 organizations had not developed BCP procedures to ensure that staff capable of conducting their critical operations would be available if an attack incapacitated personnel at their primary sites. Ten organizations were also at greater risk of being disrupted by wide-scale events because 4 had no backup facilities and 6 had facilities located between 2 and 10 miles from their primary sites. The financial regulators have begun to jointly develop recovery goals and business continuity practices for organizations important for clearing; however, regulators have not developed strategies and practices for exchanges, key broker-dealers, and banks to ensure that trading can resume promptly after future disasters. Individually, SEC has reviewed exchange and clearing organization risk reduction efforts but has not generally reviewed broker-dealers' efforts. The bank regulators that oversee the major banks had guidance on information security and business continuity and reported examining banks' risk reduction measures annually.
The United States is home to more immigrants than any other country in the world. Census estimated that 41 million foreign-born individuals resided in the United States from 2010 through 2014, making up 13 percent of the population. According to the World Bank, the United States is also, by far, the largest source of remittances from foreign-born residents to their home countries, including Mexico, China, India, and the Philippines, among others (see fig. 1). Remittance funds can be used for basic consumption, housing, education, and small business formation and can promote financial development in cash-based economies. In a number of developing economies, remittances have become an important and stable source of funds that exceeds revenues from exports of goods and services and financial inflows from foreign direct investment. Remittances can be sent through formal transfer systems and informal methods. Formal systems typically include banks, credit unions, money transfer businesses such as wire services, and postal services. In the United States, providers of remittance transfer services (including bank and nonbank institutions) are subject to federal oversight and, depending on the state in which they operate, can be subject to supervision by states. According to CFPB, nonbank remittance transfer providers sent an estimated 150 million individual transfers from the United States in 2012. Informal remittance transfer methods include hand-carried cash, courier services, and agents known as hawalas. Individuals can transfer remittance funds in several ways, such as
1. cash payments to individuals and bank accounts;
2. prepaid debit or credit cards; and
3. online and mobile device transfers.
Global remittance estimates are published by some international organizations on an annual basis. IMF collects data on components of remittances submitted by its member countries, including the United States, as part of its annual publication of balance of payments statistics. IMF's Balance of Payments and International Investment Position Manual provides a framework for identifying individual remittance flows that benefit households. According to IMF, this framework can be applied by all countries and should lead to some level of comparability among them. The World Bank uses IMF statistics to produce an annual Migration and Remittances Factbook and monthly and annual remittances data on its website. Other international organizations, such as IDB through its Multilateral Investment Fund, also produce annual reports on remittance estimates. In the United States, BEA is responsible for compiling the official U.S. estimates. Other nations may delegate the official estimation of remittances to central banks or specific government agencies. In response to requests from policymakers, remittance data compilers, and other data users, IMF and the World Bank published a guide for compilers and users of remittances data. The purpose of the guide is to promote lasting improvements in remittances data, which it seeks to accomplish by summarizing the definitions and concepts related to the balance of payments framework and by providing practical compilation guidance. Two items in the guide that substantially relate to remittances are "personal transfers" and "compensation of employees," both of which countries are required to report to IMF. Personal transfers are a measure of all transfers in cash or in kind made or received by resident households to or from nonresident individuals and households.
Compensation of employees is a measure of the income of short-term workers in an economy where they are not resident and of the income of resident workers who are employed by a nonresident entity. The guide also defines additional measures related to remittances, which countries are encouraged but not required to report. For example, personal remittances represent the sum of personal transfers, net compensation of employees, and capital transfers between households, according to the guide. Institutions use different methodologies to produce estimates of remittances. For example, BEA uses demographic and household survey data and a model that calculates the remittance rates by demographic group to create the official estimate of remittances from the United States. The World Bank has developed its own methodology to create remittance estimates. Its research group produces country-specific development indicators and international development statistics. The World Bank then complements these data with information from the IMF's Balance of Payments and International Investment Position Manual to create annual and semiannual remittance estimates. Since 2010, researchers at the World Bank have also used United Nations population data to develop a bilateral migration matrix, which provides a second set of country-specific bilateral remittance estimates—that is, estimates between sending and receiving countries. These estimates are based on the number of migrants in different destination countries and estimates of how changes in the income of migrants influence the remittances they send. IDB's Multilateral Investment Fund has a different methodology, using estimates reported by central banks to IMF as a baseline for individual country estimates. The Multilateral Investment Fund then works with the Center for Latin American Monetary Studies to help refine remittance estimates for selected countries in the Latin America and Caribbean region. Finally, some central banks use a combination of methods to estimate remittances. The central bank of Mexico, known as Banco de México, tracks remittance flows to Mexico with the help of regulatory reporting requirements on money transmitters. Since 2003, Mexico's methodology for estimating remittances has required firms that receive remittances to report, on a monthly basis, the amount of money received and the number of transactions conducted between the United States and Mexico. To track remittances through informal channels, such as couriers, that fall outside this regulatory framework, Banco de México conducts a survey at the U.S.-Mexico border of Mexicans entering the country. The central bank of the Philippines, known as Bangko Sentral ng Pilipinas, estimates remittances that are channeled through banks. The Philippine government also has established a formal program for registering and tracking overseas Filipino workers. This program provides data to the government on the type of employment these workers obtain as well as their salaries. The Bangko Sentral ng Pilipinas also uses the Survey of Overseas Filipinos to supplement data from the program. Using these two approaches, the Bangko Sentral ng Pilipinas is able to identify remittance funds sent by Filipinos overseas through friends and relatives and amounts brought in when these workers return home.
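The broader measure that the guide encourages countries to report is a simple sum, and a minimal numeric sketch follows. Every dollar figure below is a hypothetical placeholder of ours, not a reported statistic, and the variable names are ours rather than IMF's.

```python
# Personal remittances identity from the IMF/World Bank compilers' guide:
# personal remittances = personal transfers
#                        + net compensation of employees
#                        + capital transfers between households.
# All dollar amounts below are hypothetical, in billions.

personal_transfers = 38.0                    # required reporting item
net_compensation_of_employees = 7.5          # required reporting item
capital_transfers_between_households = 1.2   # encouraged reporting item

personal_remittances = (
    personal_transfers
    + net_compensation_of_employees
    + capital_transfers_between_households
)
print(f"personal remittances: ${personal_remittances:.1f} billion")  # $46.7 billion
```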
A proposed fine on immigrants unable to show proof of legal status who send money through remittance transfer providers covered under EFTA could raise money for border protection, but the potential amount of revenue to be generated is unknown. Net revenue from the fine—the total of all fines collected less CFPB's administrative and enforcement costs—would depend on several key factors, namely the dollar amount of remittances sent by those without legal immigration status, changes in remitter behavior because of the fine (including a potential reduction in remittances through regulated providers), and the cost of enforcement. For example, the ability to raise money depends on a significant number of individuals without legal status using regulated remittance transfer providers and paying the fine. However, a fine could result in a decrease in remittances in the regulated market and an increase in remittances through informal methods of money transfer. The revenue raised by the proposed fine would first be used to pay CFPB for enforcement costs. We did not identify any estimates of the administrative and enforcement costs associated with the fine. Our hypothetical scenario analysis illustrates the sensitivity of potential net revenue estimates to these factors. CFPB and other federal regulators would enforce the requirements of the proposed legislation, and CFPB identified some implementation challenges. Lastly, providers told us the fine could have consequences for them, and one provider said that smaller providers would likely be affected the most. A fine could potentially generate net revenue for border control, but the following selected factors would influence the actual amount: The dollar amount of remittances sent by those without legal immigration status. The revenue raised by the fine would depend on the dollar amount of remittances sent by those individuals in the United States without legal immigration status and, specifically, by those using regulated remittance transfer providers. According to three studies we identified during discussions with experts, estimates of unauthorized U.S. immigrants in 2012 ranged from 11.1 million to 11.4 million people. Of that number, only those who conduct transactions through providers that are subject to EFTA would actually pay a fine, should they continue to use such providers, and that number is unknown. The response to the fine by individuals in the United States without legal status, including a reduction in remittances through regulated providers. If individuals without legal status respond to the fine by making money transfers that may not be subject to EFTA requirements, by remitting less, or by leveraging connections with immigrants with legal status, the amount of revenue raised by the fine would be lower. Representatives from almost all of the organizations we spoke with, including providers, researchers, federal agencies, and community groups, stated that remitters without legal status may be deterred by the fine and the additional scrutiny around their immigration status. The amount of revenue generated by the fine would also depend on the extent to which those without legal immigration status continue to use regulated systems after the fine is imposed instead of switching to informal methods, such as hawalas. Two articles identified in our literature search noted that those without legal status may use methods that allow them to maintain a higher degree of anonymity.
For example, those without legal immigration status may have relatives or friends who are authorized to be in the United States send remittances for them, potentially lowering the amount of revenue raised by a fine. Conversely, if most remitters continue to remit the same amount, and continue to remit through regulated channels, the total amount remitted may remain stable, and more revenue will be raised. Research experts, officials from industry, community groups, and some federal agencies with which we spoke suggested that some remitters unable to provide proof of legal status may send the remittance and pay the fine, but the exact percentage is unknown. While the effect of the fine depends heavily on the remitting behavior of individuals without legal immigration status and their response to the fine, limited information exists on how many of these individuals remit or the extent to which they rely on regulated methods. In the absence of definitive studies on remitting behavior, the extent to which immigrants use regulated or informal methods for remitting and how they will respond to price increases is unknown. If the costs of the fine, including costs for providers to implement the requirements of the proposed legislation, are passed on to the remitter in the form of a price increase, remitters might reduce the amounts or the frequency with which they remit. Information on price sensitivity—how senders respond to an increase in price—is limited. According to a CFPB report and a remittance transfer provider with whom we spoke, remitters' response to higher prices may partly depend on knowledge of other available options, including access to information about fees charged by other providers. Behavioral changes could substantially limit the amount of revenue generated for border protection. Administrative and enforcement costs associated with the fine. Although the regulatory costs associated with the proposed legislation are unknown, CFPB officials told us that the agency would incur expenses associated with implementing the legislation and ensuring compliance. According to CFPB, these expenses would include the costs of developing rules, examining remittance transfer providers, and cooperating with other federal agencies on enforcement actions against noncompliant institutions. Other federal regulators also enforce EFTA for their regulated entities, and state regulators also may play a role in oversight of remittance transfer providers. As the revenue from the fine would be used first to reimburse CFPB for administrative and enforcement costs to carry out the proposed legislation, high costs to CFPB for these activities would mean less net revenue available for border protection. Uncertainty in these costs would contribute to uncertainty in how much revenue remains for border protection. Given the uncertainty related to these important factors, we constructed a scenario analysis to illustrate how the revenue generated for border protection could vary based on the values we assume for the following factors, given our starting assumptions about the total volume of remittances and the proportion sent through the formal sector (a simple calculation illustrating how these factors combine follows the list):
1. the dollar amount of remittances sent by immigrants without legal status,
2. the reduction in remittances through regulated providers in response to the request to show proof of legal status or pay a fine, and
3. the magnitude of administrative and enforcement costs to CFPB.
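The arithmetic behind such scenarios can be sketched as follows. Every parameter value below (the fine rate, the share of regulated remittances sent by those without legal status, and the cost levels) is an illustrative assumption of ours, not a figure from the proposed legislation or from our analysis.

```python
# Hypothetical net-revenue calculator: net revenue = fines collected
# minus CFPB administrative and enforcement costs. All parameter values
# are illustrative assumptions, not figures from the bill or this report.

def net_revenue_bn(total_bn, regulated_share, unauthorized_share,
                   reduction, fine_rate, costs_bn):
    """All monetary arguments and the return value are in $ billions."""
    fined_base = total_bn * regulated_share * unauthorized_share
    remaining = fined_base * (1.0 - reduction)  # behavioral response
    return fine_rate * remaining - costs_bn

# Starting assumptions from our scenario analysis: $50 billion total
# volume, 50 percent sent through regulated providers. The rest (an
# assumed 60 percent unauthorized share and a 2 percent fine) is ours.
for reduction in (0.00, 0.25, 0.75):
    for costs_bn in (0.05, 0.20):  # "low" vs. "high" costs (assumed)
        rev = net_revenue_bn(50.0, 0.50, 0.60, reduction, 0.02, costs_bn)
        print(f"reduction={reduction:.0%}, costs=${costs_bn:.2f}B "
              f"-> net revenue=${rev:.2f}B")
```

Consistent with the scenario results discussed below, large behavioral reductions combined with high costs can drive net revenue to near zero or below.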
The scenarios are hypothetical because the factors used to generate the results were selected solely to demonstrate the uncertainty in how much revenue would be collected. They are not supported by empirical research or evidence. The selected scenarios we illustrate are from a larger number that we analyzed to examine how sensitive net revenue from fines is to the factors. Varying the three factors shown in figure 2 illustrates the potentially wide variation in net revenue from fines. In our analysis, we begin by assuming that the total volume of remittances is $50 billion and that 50 percent of the total volume of remittances is sent through regulated providers. The scenario analysis varies the three factors above, thereby demonstrating the breadth of uncertainty in potential net revenue. As figure 2 demonstrates, when the factors vary, potential net revenue from fines can change significantly. For example, one scenario with no change in the amount of remittances and low administrative and enforcement costs could provide $0.41 billion in potential net revenue for border protection. In contrast, another scenario with a 75 percent reduction in remittances after the fine and high administrative and enforcement costs would generate potential net revenue of only $0.01 billion. In some cases, the cost incurred by CFPB could be more than the revenue from the fine. For example, a small dollar amount of remittances sent by immigrants without legal status, large reductions in remittances, and high administrative and enforcement costs could lead to negative net revenue. Obtaining reasonable estimates of net revenue would depend upon having accurate, reliable, and complete information on the amount immigrants without legal status remit and their response to a requirement for providers to request proof of legal status or assess a fine, as well as on administrative and enforcement costs. In the absence of such information, the potential net revenue a fine would generate is unknown. Officials from CFPB noted that in addition to creating uncertainty about administrative and enforcement costs, the proposed legislation, if passed, would require CFPB to address other issues, including issuing new rules to define what constitutes proof of legal status and to establish procedures for submitting fines, as well as coordinating with other regulators. As noted earlier, CFPB would be required to define by rule what constitutes acceptable documentation in states that do not require proof of legal status to obtain a state-issued driver's license or a federal passport. CFPB would also need to coordinate with other financial regulators. For example, the proposed legislation calls for remittance transfer providers to submit the fines to CFPB and for CFPB to then transfer to Treasury any funds remaining after the payment of CFPB's administrative and enforcement costs. However, as noted previously, other federal regulators have the authority to enforce EFTA for the entities they supervise, including enforcing the remittance provisions against those supervised entities that are remittance transfer providers under the act. The proposed legislation would provide CFPB with rulemaking authority but does not state how CFPB would coordinate with other agencies. CFPB staff told us that it might need to develop procedures with others on examination and enforcement efforts. CFPB would be required to issue rules establishing the form and manner in which fines would be submitted to CFPB.
CFPB staff told us that CFPB does not currently levy fines on consumers. Instead, CFPB levies monetary sanctions and brings other enforcement actions against consumer finance businesses and other persons in connection with violations of federal consumer financial law. But CFPB staff noted that collecting fines directly from institutions for noncompliance is different from a fine on remitters collected by remittance transfer providers that is then submitted to the agency. Finally, CFPB may have examination authority over nonbank remittance transfer providers that also may be overseen by state regulators. If the proposed legislation were to become law, CFPB might have to coordinate with state regulators. If remittances decrease because the number of transactions or amounts remitted decline, the fee revenue associated with remittance transactions that providers receive would decrease. Without any corresponding reduction in cost, the decrease in remittances might decrease profits for some providers, but by how much is uncertain. Prior experience with legislation passed in Oklahoma in 2009 may demonstrate effects similar to those that could result from the proposed legislation, though there are some key differences between the two. The Oklahoma law imposed a $5 fee on each wire transfer from a nondepository institution, plus 1 percent of any transaction amount in excess of $500. When making a transfer, all persons, regardless of immigration status, were required to pay the fee. Under the Oklahoma law, customers who paid the fee are entitled to an income tax credit equal to the amount paid when filing individual income taxes in Oklahoma with either a valid Social Security number or a valid taxpayer identification number. The tax credit in effect means that customers without a Social Security or taxpayer identification number are not eligible for the state income tax credit and therefore will have paid the remittance fee without being able to obtain a credit or refund. Statements of four remittance transfer providers with operations in Oklahoma suggest that the law has had mixed effects. According to two providers, revenues decreased once the law was in place. Two providers told us that transaction activity in the state had fallen. One other provider stated that its company had still not recovered from the decline in revenue. This provider told us that the decreased number of transactions was the result of remittances that moved to out-of-state providers or from regulated to informal channels. The other two providers we interviewed noticed decreases in remittances, although they noted they did not have a large presence in Oklahoma. Also, one official from a state audit association noted that fee revenues for the State of Oklahoma continued to increase after the first year of the imposition of the fee. Remittance transfer providers, industry associations, research experts, and some federal agencies we met with said that they expected to see revenues decrease in the regulated market if the proposed law (S.79) were passed, as it would send remittances to the informal market. New proof-of-legal-status requirements and fine collection could also increase remittance transfer providers' costs. Such potential costs were noted by almost all providers and representatives from industry associations we spoke to. Several providers noted that they might need to pay for new computer infrastructure and databases, staff training, and compliance.
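The Oklahoma fee schedule described above is simple enough to express directly; a minimal sketch follows (the function name and example amounts are ours):

```python
def oklahoma_wire_fee(amount):
    """Fee under the 2009 Oklahoma law: $5 per wire transfer from a
    nondepository institution, plus 1 percent of any amount over $500.
    Filers with a valid Social Security or taxpayer identification
    number could later claim the amount paid as a state income tax credit."""
    return 5.00 + 0.01 * max(0.0, amount - 500.0)

print(oklahoma_wire_fee(300.0))   # 5.0 (no amount above $500)
print(oklahoma_wire_fee(800.0))   # 8.0 (5.00 + 1% of the $300 excess)
```

Because the tax credit offsets the fee only for filers with a valid Social Security or taxpayer identification number, the fee is a pure cost for everyone else.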
One provider pointed out that merely adding a new variable listing customer information to an existing information system was a 9-month process involving testing and validation. Representatives of an industry association and one remittance transfer provider cited potential costs related to maintaining databases used to verify legal status. Remittance transfer providers could also face increased compliance costs related to new requirements. In some cases, providers told us that compliance costs could be significant. For example, some providers said that they had made significant investments to comply with the fee and exchange rate disclosures and other requirements implemented through amendments to Regulation E after the passage of the Dodd-Frank Act, such as developing procedures to electronically disclose the fees charged by the provider. Another provider said that it had spent more than $3 million on technology enhancements and customer service teams to satisfy the requirements of the rule. Still another provider noted that the company spent about 3 percent to 4 percent of its revenue on its legal compliance budget. One representative of a transfer provider whom we interviewed said that the company might be able to incorporate compliance requirements into its Bank Secrecy Act (BSA)/anti-money-laundering (AML) efforts. BSA/AML requirements for institutions that provide money transfer services include, among other things, collecting sender identification for each transfer of $3,000 or more. Banks are also required to implement a customer identification program, under which they establish procedures specifying what identifying information they will collect when customers open accounts. However, other providers noted that collecting identification is not the same as verifying legal status. For example, several providers accepted the Matrícula Consular de Alta Seguridad, which is an official identity card that Mexican consulates issue to nationals living outside Mexico. As previously discussed, under the Remittance Status Verification Act this card would not be an acceptable form of identification for proving legal status within the United States for purposes of the act. One provider we spoke with explained that not all states require proof of legal status before issuing a driver's license or other form of identification. Forms of identification that demonstrate legal status may vary from state to state. It could be difficult for money transfer clerks to know which form of identification to collect, particularly when remitters may hold identification from other states. Some providers and one trade association also noted that the proposed legislation would require additional staff training. For example, one provider said that the company operated through many retail outlets, such as grocery stores and gas stations, and it would not be practical to train all store clerks to determine the appropriate form of identification to show legal residency status. Another provider stated that it would be a significant challenge to train all agents—retail outlets that conduct transactions for the provider—on the documentation they would be responsible for collecting for proof of legal status. A trade association noted the difficulty and potential expense of training staff on how to properly check for proof of legal status; calculate, disclose, and collect the fine; and put the transaction in a database.
How much of the fine and added cost would be absorbed by the provider or retail outlet partly depends upon the competitiveness of the market. Remittance transfer providers stated that in competitive markets with a number of providers and a variety of methods for transmitting money, the demand for remittances is more sensitive to prices. For example, one provider indicated that it lost customers when its prices were only marginally higher than those charged by other providers. With the prospect of losing more customers and revenue, one provider with whom we spoke stated that it might choose to absorb some of the fine and added cost instead of passing it on. One provider we spoke with expected that the added costs would increase the costs passed on to consumers by 3 to 4 percentage points. If these costs were passed on in such a manner, all consumers, regardless of legal status, could experience an increase in the price of remittance transfers sent to a foreign country. In addition, certain providers might be disproportionately affected by the requirements of the proposed legislation. According to representatives from two providers and a research expert, smaller providers generally operate at lower profit margins compared with larger providers. Providers with lower margins would find it more difficult to absorb costs imposed due to the fine and may be more adversely affected by a reduction in revenues. BEA's estimate of remittances from the United States totaled approximately $40 billion in 2014, and its estimates of remittances generally increased from 2006 to 2014. BEA changed its remittance estimation methodology in 2012 to incorporate new data on reported remittances. However, BEA's methodology for estimating remittances is not consistent with government-wide policies and guidance on statistical practices or with BEA's own best practices and thus produces unreliable estimates. For example, BEA did not follow the guidelines from the National Research Council (NRC) of the National Academies stating that data releases from a statistical program should be accompanied by appropriate methods for analysis that take account of variability and other sources of error in the data. In addition, we identified several errors in BEA's analysis that led us to question the reliability of BEA's estimates, including censored data, measurement and coding errors, and an estimation methodology that is subject to biases. Further, BEA calibrated its new model to match the estimates from BEA's old model, whose accuracy we questioned in a March 2006 report on remittance estimates. On the basis of discussions with BEA officials, BEA's failure to follow best practices appears to stem from the agency not considering its remittance estimates to be "influential information" that is subject to a high degree of transparency. However, BEA's estimate is cited by national and international organizations and in some cases is incorporated into the estimates of these organizations, including the World Bank. BEA's estimate of remittances from the United States (which it reports as personal transfers) totaled approximately $40 billion in 2014. As figure 3 shows, BEA's estimates of remittances generally increased from 2006 to 2014. BEA's estimates of remittances from the United States are based on demographic and household survey data and a model that calculates the remittance rates by demographic group.
BEA assumes that the foreign-born population represents the relevant population of remittance senders in the United States, because this population is most likely to have a personal link to foreign residents. The estimates of personal transfers include all current transfers from resident to nonresident households, regardless of the means of transfer. BEA changed its model for estimating remittances in 2012 by using new demographic variables and data on reported remittances from the August 2008 migration supplement to the Current Population Survey (CPS) conducted by Census. For its revised model, BEA employed a multiplicative model—that is, a model whose results are the product of the combined effects produced by the individual variables. It used a nonstandard iterative technique to estimate the remittance rates. These rates show the proportion of income that is remitted. To obtain total remittances, the remittance rates for different demographic categories can be multiplied by the number of individuals in those categories and their incomes. In its new methodology, BEA combined the new remittance rates from its revised model with ACS data on foreign-born residents and their income to estimate total remittances sent annually from the United States (see fig. 4). The availability of nationally representative data on remittances in the CPS, with actual reported numbers on remittances, provided BEA with an opportunity to revise the model it created in 2005. BEA first tested its previous demographic variables against CPS data and found that its assumptions about family structure and time in the United States were weak indicators of how much people reported remitting in CPS data and that its previous country tiers did not match remitting behavior very well. Therefore, in 2012 BEA changed the model by removing U.S. citizens born abroad of American parents, assuming that this group's remittance behavior would be similar to that of the U.S.-born population, which was not included in its study; replacing the "children/no children" category with "married, spouse absent/other marital," because those in the latter category were more likely to send remittances to spouses abroad and were thus a better predictor of remittances; adding the category "living with roommates/other living arrangements," assuming that people shared housing to save money and therefore could send more remittances; combining immigrants who had been in the United States for 16 to 30 years with those who had been in the country for longer than 30 years into one category, "15 plus years," as BEA found in CPS data that these two categories had similar remittance rates; and reallocating countries within preexisting geographical tiers, as BEA found that its previous country allocations were not the best match for the CPS data. We found several issues with BEA's methodology that resulted in unreliable remittance estimates. BEA also did not follow its own best practices or Office of Management and Budget (OMB) or NRC guidance on documentation and methods for analysis that could have ensured the reliability of its methodology and limited the inaccuracy in its estimates. Despite OMB and agency guidance and best practices providing that BEA should document its procedures for developing its new model for estimating remittances, BEA did not prepare adequate, transparent documentation of its efforts to develop the new model. BEA also did not prepare adequate documentation of management review and approval of the new model.
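The multiplicative structure and the rate-times-income aggregation described above can be illustrated with a small sketch, including a final calibration scaling of the kind discussed later in this report. The base rate, the effect sizes, the groups, and the income totals are all invented for illustration; they are not BEA's actual categories, coefficients, or estimates.

```python
# Sketch of a multiplicative remittance-rate model: a group's rate is the
# product of a base rate and one effect per demographic variable, and the
# total is the sum of rate x aggregate income across groups. All numbers
# are hypothetical.

base_rate = 0.02
effects = {                      # multiplicative effects (assumed)
    "married, spouse absent": 1.8,
    "other marital": 0.9,
    "in U.S. 0-15 yrs": 1.4,
    "in U.S. 15+ yrs": 0.7,
}

# (marital category, tenure category, aggregate income in $ billions)
groups = [
    ("married, spouse absent", "in U.S. 0-15 yrs", 60.0),
    ("other marital", "in U.S. 0-15 yrs", 180.0),
    ("other marital", "in U.S. 15+ yrs", 420.0),
]

total = sum(base_rate * effects[m] * effects[t] * income
            for m, t, income in groups)
print(f"model total: ${total:.1f} billion")

# Calibration step (discussed later in this report): scale the rates by a
# single factor so the model reproduces a prior-model benchmark total.
benchmark_total = 38.0           # hypothetical prior estimate, $ billions
calibration_factor = benchmark_total / total
print(f"calibration factor: {calibration_factor:.2f}")
```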
OMB's Information Quality Act (IQA) guidelines, which are designed so that agencies will meet basic information quality standards, state that agencies should ensure that data and methods are documented and transparent enough to allow an independent reanalysis by a qualified member of the public. IQA guidelines also direct agencies to develop management procedures for reviewing and substantiating the quality of information before it is disseminated. According to BEA best practice guidance, all changes in either methodology or data sources should receive documented management approval. In its own internal guidelines, BEA notes that it strives for the highest level of transparency about data and methods for its estimates to support the development of high-quality data and facilitate efforts to reproduce the information. Additionally, BEA best practices guidelines are designed to ensure the accuracy of input data; provide high-quality, timely analyses that document how estimates are made; and provide estimates that satisfy both internal and external customer needs. One BEA best practice is to enhance both transparency and replicability by instructing BEA staff to document each step or change in the methodology and document the rationale behind each decision. Another BEA best practice states that written analyses of the estimate should include a discussion of changes and revisions as well as deviations from standard methods. However, based on our analysis, BEA did not follow these guidelines, as the following examples illustrate. Documentation showing how the final remittance estimate is calculated was not maintained. When asked to provide records of analysis that supported the calculations of the 2012 and 2013 remittance estimates (the most recent estimates available at the time of our review), BEA staff told us that the documents were created only when each year's estimate was produced and were not saved. Unable to produce its original documents, BEA recreated the documentation to fulfill our request. However, BEA staff told us that the file could be missing some information required to successfully run the computer program that calculates total remittance estimates; for example, certain variables had been renamed, some fields were missing, and the numbers had been multiplied by an arbitrary discount factor, whose use BEA explained to us only later as a step taken to avoid a break in the series. Changes and revisions were not sufficiently documented. When asked to provide documentation of the analyses completed to determine changes in the model, BEA provided a conference paper containing written descriptions of its regression analyses. BEA staff who completed these analyses told us that the regression files had not been saved in a way that would allow the staff to easily provide us with the files applicable to the model changes. The staff described saving them among many partially complete files and told us that it would be difficult to identify the files that led to the current version of the model. Unable to provide its original research, BEA attempted to recreate the steps that were used to create the model. Management review of the estimation methodology was insufficiently documented. BEA officials noted that staff adhered to internal guidance by obtaining both managerial and external reviews of the model's revision but provided little documentation of them.
BEA staff said the remittance model proposal was presented to the Modernization and Enhancement Steering Committee (MESC) for formal review. BEA provided minutes of the MESC meeting discussing the review of the model, but the minutes also indicated that BEA management was still considering changes. BEA staff could not provide documentation of additional management actions taken or of another MESC meeting held at a later date. BEA staff told us that the agency subjected the output of research that affected methodology changes to a full gamut of validity checks. However, the only documentation we received of a validity test was the MESC meeting minutes that contained a discussion of the model's assumptions. BEA staff told us that the personal transfers model had been subjected to additional scrutiny by BEA senior management resulting from the authors' conference presentations. However, BEA did not provide us with either documentation of the conference feedback or the results of senior management's additional scrutiny. BEA officials stated that the decision to publish a Survey of Current Business article about the model's revision constituted verification of management review. However, BEA could not provide any documentation of the approval process for publication to demonstrate what the management review entailed. The rationale for and appropriateness of its methodology for estimating remittances were not documented. According to NRC guidelines for federal statistical agencies, data releases from a statistical program should be accompanied by the assumptions used for data collection and by what is known about the quality and relevance of the data. The guidelines also call for appropriate methods for analysis that take account of variability and other sources of error and for the results of research on the methods and data. We found that BEA did not follow these guidelines, as the following examples illustrate. Data. We were unable to verify the accuracy of the data because we were not provided with documents detailing the steps and analyses BEA undertook to convert CPS data to the dataset BEA actually used to estimate the model. A BEA best practice states that an analysis of the estimate should include a discussion of questionable aspects of the source data, including outliers. However, BEA could not provide us with documents showing analyses performed to deal with various problematic aspects of the data and the treatment of outliers. In addition, BEA conducts an analysis to assign a portion of the household's income to each individual in the household. The income amount attributed to each individual is a critical component of the model and has a substantial effect on the result, yet BEA could not provide any documents showing sensitivity analyses of this critical assumption to see how the attribution of income affects its results. Further, BEA assigned incomes to some households that did not have any family income in CPS data but could not tell us how it calculated them. BEA also assigned all households within a given range of family income the same income. This approach introduces measurement error to the extent that households within a given range of family income do not, in fact, have the same income. BEA could not provide any documentation explaining these details or what implications the assigned incomes could have on its results. Estimation Technique. As described earlier, BEA used a nonstandard iterative technique to estimate its model.
BEA staff acknowledged that the method was unusual and may be hard to comprehend. When we requested additional information on management review of the model, BEA staff stated that they had had the model reviewed by an outside expert. However, officials later said the review was informal and no written opinion was provided. Model specification. BEA did not follow IMF's guide for compilers in two instances related to issues of model specification. First, the guide specifies that the variables used to explain and predict remittance rates may need to be converted to different forms to see if they generate a better model. BEA could provide no documentation showing it attempted to do this analysis. Second, the IMF guide states that statistical analyses are also needed to understand the relationship of different demographic variables to each other and to remittance rates in order to select the relevant variables. BEA could not provide us with any documentation showing it performed any tests on the relationship among different demographic variables. Goodness-of-fit. This term refers to how well a model represents the data. The IMF guide states that various statistics describing goodness-of-fit should be calculated to decide on the best model for determining the level of remittances. BEA presented the results of only one such test, namely the R-squared (R²), which is a measure of how well the proposed model fits the data; a low R² suggests that the model is a poor fit, while a high R² suggests a good fit. BEA's model has an R² of 6.75 percent, although BEA mentions an R² of 15.8 percent; BEA got this higher number when it ran the model again for us using a different dependent variable. Moreover, BEA does not report standard errors for its model's coefficients using its iterative method and does not document that this iterative method would produce correct standard errors for these coefficients. Statistical Policy Directive No. 1 states that, where appropriate, any known or potential data limitations or sources of error should be described to data users so they can evaluate the suitability of the data for a particular purpose. But BEA does not mention this aspect of the data in its publication even though it has significant influence on the results. BEA's model was not consistent with the data. BEA's model assumption that all individuals within a demographic category remit on average the same percentage of their income is inconsistent with its data, which show that 75 percent of households remit nothing at all. Moreover, BEA's model generates remittance rates for certain categories of households that have no individuals in them. For example, the model calculates that individuals in households with married persons with absent spouses who have roommates, who remit to low-tier countries, and who have spent 6 to 15 years in the United States remit 8.5 percent of their income. However, there are no such individuals in the data. BEA failed to point out these data deficiencies even though OMB's Statistical Policy Directive No. 4 asks agencies to clearly point out limitations of the data to users. Failure to account for censored data leads to biased results. Because the value of reported remittances is only partially known, BEA's remittance data are censored data. The remittance data are censored (at the bottom) because 75 percent of households remit $0 and all other households remit positive sums. The remittance data are censored (at the top) because, as noted earlier, the CPS assigns all households that remit over $10,000 a remittance value of $27,199.
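One standard remedy for this kind of two-sided censoring, which the next paragraph notes BEA did not adopt, is a censored-regression (Tobit-type) maximum-likelihood estimator. The sketch below runs on simulated data with censoring points mirroring those described above; it illustrates the technique and is not a reconstruction of BEA's model.

```python
# Tobit-type maximum likelihood for data censored below at $0 and above
# at a top-code value (here $27,199, mirroring the CPS top code).
# Simulated data; not BEA's data or model.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
n, LOW, HIGH = 5000, 0.0, 27199.0
x = rng.uniform(20000, 120000, n)              # stand-in income variable
latent = -8000 + 0.15 * x + rng.normal(0, 9000, n)
y = np.clip(latent, LOW, HIGH)                 # observed, censored remittances

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    ll = np.where(
        y <= LOW, stats.norm.logcdf((LOW - mu) / sigma),       # left-censored
        np.where(
            y >= HIGH, stats.norm.logsf((HIGH - mu) / sigma),  # right-censored
            stats.norm.logpdf((y - mu) / sigma) - log_sigma))  # uncensored
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.array([0.0, 0.1, 9.0]),
                        method="Nelder-Mead")
print(res.x)  # recovers roughly (-8000, 0.15, log 9000)
```

A fit that ignores the censoring treats the zero and top-coded values as exact, which biases the coefficient estimates; that is the problem described in the next paragraph.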
Estimating a model on censored data demands certain econometric techniques, which BEA has not adopted, in order to yield unbiased estimates. The NRC guidelines mentioned above specifically ask agencies to use appropriate methods for analysis that take account of variability and other sources of error. BEA's model is incorrectly specified in its documentation, and the actual model specification may lead to biases. BEA's documentation states that its model explains the total amount remitted by a household (in part) in terms of that household's income level. However, BEA's model assumes that the total amount remitted by a household depends only on the income of the foreign-born individuals in the household. To the extent that U.S.-born individuals in a household do remit, BEA's model overestimates the fraction of income remitted by foreign-born individuals in that household. BEA officials said that they excluded U.S.-born individuals from a household on the basis that they remitted very little. But we found that 407 households with only U.S.-born individuals reported remitting almost 13 percent of total remittances, suggesting that the remittance rates of foreign-born individuals may be overestimated and, thus, biased. Measurement errors in a critical explanatory variable bias the results. As explained earlier, BEA's assignment of household income to individuals within the household is critical to its analysis. Since this individual income variable is subject to measurement error, it biases the estimated effect of this variable on the remittance rates, contributing to the unreliability of the remittance estimates calculated by BEA. Several coding and other errors also contributed to inaccuracy. BEA staff said that they considered the estimation of the personal transfers model a relatively straightforward task. As such, they did not consider independent programming of code by a reviewer necessary. We found several errors and unexplained adjustments in BEA's code that might have been detected had a review been conducted. Calibration of this new model to match unreliable old estimates enhanced unreliability. BEA's model predicts total remittance amounts that are substantially lower than those BEA has historically published. BEA handles this difference by multiplying the remittance rates from its model by an arbitrary calibration factor so that the model's estimated total remittances equal those that BEA previously calculated for 2008. BEA calibrated the model because an analysis of CPS data determined that remittance rates may have been underestimated, as many immigrants were reluctant to report their precise remittances. Because BEA calibrated the new model to old estimates, BEA estimated the same remittance amount for the year 2008 that the old model had produced. For the years following 2008, the remittance estimates differed slightly from the previous estimates because of the different demographic characteristics used in the new model (see table 2). In our March 2006 report on BEA remittance estimates, we questioned the accuracy of BEA estimates based on the model developed in 2005 after finding that the remittance rates BEA used were primarily based on its own judgment. We found shortcomings in BEA's model, specifically with regard to the assumptions BEA made about the percentage of income remitted and the percentage of foreign-born persons who remit.
We were unable to link the parameters that BEA used to capture the remitting behavior of foreign-born persons directly to the sources that BEA cited. We found that BEA used its own judgment to determine the proportion of the adult foreign-born population that sent remittances and the proportion of income they remitted. We concluded that the accuracy of these estimates was affected both by the quality of the underlying data and by these assumptions. Therefore, calibration of the new model—which may itself be unreliable—to the old estimates further affects the reliability of the final estimates. BEA officials told us that the personal transfers estimate was not a principal economic indicator. Therefore, BEA considered information related to the development of the estimate to be influential (as defined by OMB's IQA guidelines) only in terms of the integrity of the estimate's dissemination. Nonetheless, BEA's Information Quality Guidelines state that at BEA the notion of data integrity goes beyond maintaining the security of its information. Integrity includes, among other things, transparency that is ensured by providing certain information, such as assumptions for missing source data and discussions of revisions. BEA officials also noted that personal remittances were a relatively small component of the U.S. current account. According to BEA officials, over the past 5 years personal transfers accounted for an average of 0.59 percent of gross current account transactions. Officials said that, as a result, resources devoted to improving the estimation of personal remittances had to be balanced against resources allocated to improving other estimates that could be more important to the balance of payments. However, a number of organizations use BEA's estimates. BEA reports its personal transfer estimates to IMF, which publishes country estimates in its Balance of Payments Statistics Yearbook. In addition, the World Bank uses BEA estimates submitted to IMF as part of its calculations on remittances. IDB's Multilateral Investment Fund also uses estimates published by IMF as a baseline for its calculations of individual country estimates. BEA officials also noted that OMB's guidelines give agencies discretion in determining the level of quality to which information will be held. However, while the guidelines do afford agencies some discretion, they make it clear that agencies should not disseminate substantive information that does not meet a basic level of quality. As discussed earlier, by failing to follow its best practices, BEA has not met this basic quality level. BEA officials did not explain why they did not follow their own best practices or maintain adequate documentation along the way. We have previously stated that appropriately documenting a significant event or internal control, in a manner that makes the documentation readily available for examination, is an example of a control activity that federal program management can undertake. This type of control activity allows management to achieve objectives and respond to risks in its internal control system. Such events would include supervisory review of methodological changes to BEA's estimation model. Moreover, BEA's best practices require documentation of its methodology and data and supervisory and management review and approval of any changes. But BEA has not provided sufficient and transparent documentation of its procedures for developing its new personal remittance estimation model.
The lack of documentation made our evaluation of BEA's model and estimates difficult, and it was not possible for us to obtain reasonable assurance that BEA met federal guidelines and its own internal standards. Because the documentation provided to us by BEA is lacking in both clarity and completeness, we cannot say that BEA has met the goal of IQA to ensure and maximize the quality, objectivity, utility, and integrity of its remittance statistics, which are public information disseminated by federal agencies. However, based on the information we were able to obtain, we were still able to determine that the model produces unreliable annual estimates. BEA's updated model for estimating remittances produces unreliable results due to underlying issues with the data, such as missing information and measurement problems. BEA did not satisfactorily explain why its methodology was appropriate, despite NRC's guidance to do so. Moreover, BEA calibrated the new estimates to align with those from its old model, the accuracy of which we had previously called into question. Additionally, BEA could not provide us with sufficient documentation of the steps it took to test the model and ensure it received management review and approval—key quality assurance procedures. Documentation of BEA's processes of analyzing, testing, and reviewing its model should not be simply an act of memorializing events. Documentation also provides evidence of an agency's adherence to procedures and policies that are part of its quality assurance framework. BEA's methodology for estimating remittances is not consistent with guidelines prescribed by BEA's best practices standards, the standards of IQA, OMB statistical directives, and NRC guidance. Had BEA subjected its model to these standards, it would have taken important steps toward obtaining reasonable assurance that it had produced reliable annual estimates of remittances. Although BEA officials discount the importance of remittances as a component of international transactions statistics, the inability of BEA's new model to produce more accurate remittance estimates is consequential, as BEA's estimate is the official remittance estimate of the United States and is cited by both national and international organizations and in some cases incorporated into the estimates of these organizations. We recommend that the Secretary of Commerce direct the BEA Director to take the following actions:
To improve the reliability of the annual official U.S. estimate of remittances, conduct additional analyses of BEA's estimates using estimation techniques appropriate for dealing with the shortcomings of the data. Analyses should also be conducted to understand the effect of various assumptions behind, and limitations of, the data on the estimates.
To improve the transparency and quality of BEA's international remittances estimate, follow established BEA best practices, OMB policies, and NRC guidance for documenting BEA's methods and analyses used to revise its model for estimating remittances and for producing its annual estimates.
We provided a draft of this report to the Secretaries of Commerce, Homeland Security, State, and the Treasury; the Chair of the Board of Governors of the Federal Reserve System; and the Director of the Consumer Financial Protection Bureau (CFPB). Commerce provided a letter, including written comments from the Bureau of Economic Analysis (BEA) on a draft of the report, which are reprinted in appendix II.
CFPB, Treasury, and State provided technical comments, which we incorporated as appropriate. In its comment letter, BEA stated that it intends to implement our two recommendations to the extent possible consistent with resource limitations as it continuously improves its remittance (personal transfer) estimate and other estimates. However, BEA stated that it did not agree with our report’s conclusions that its remittance estimates are unreliable or that its documentation of changes to its estimation model or annual estimates is inadequate. More specifically, BEA commented that it believes that its remittance estimates are valid and reasonable for the purpose for which they are prepared and that the documentation provided to GAO was fully adequate. We recognize BEA’s resource constraints. However, we maintain that our findings related to the reliability of BEA’s remittance estimates and documentation of the methodology to produce such estimates are valid and support the recommendations we made in the report. Regarding our conclusion that BEA’s remittance estimates are unreliable, in its comment letter BEA acknowledged the data limitations that GAO pointed out in the report but did not explain how these limitations may affect its estimates. The limitations described in BEA’s comment letter were not discussed in the documentation provided by BEA. Nor did BEA provide evidence showing that it conducted alternative analyses to conclude that these limitations did not affect the quality of its final estimates. For example, in its comment letter BEA mentions that the calculation of its income variable was problematic, but during our review BEA did not present us with analysis showing how sensitive its estimates were to various assumptions about income, including the assumption of taking the midpoint of the income range provided in its data. Even BEA’s choice of demographic variables included in its analysis depends on how it calculates individual income. BEA acknowledges that its data was censored—where the value of reported remittances for some households in its data set is only partially known—but during our review, it did not provide evidence that it conducted additional analyses using an alternative methodology to see how final estimates might be affected. BEA told us that these households were responsible for a substantial proportion of all remittances, and we found that they had considerable influence on BEA’s estimates. Though these and other data limitations described in this report could have a substantial impact on the estimates, in its comments BEA dismisses the limitations, stating that they would have only a marginal effect on the estimates. However, BEA does not present evidence of having tested the magnitude of the effects on the estimates. Moreover, calibrating the estimates resulting from BEA’s revised estimation model to its previous estimates, the accuracy of which was deemed uncertain in a previous GAO report, further undermines our confidence in these estimates. As a result of data limitations, BEA’s choice of methodology in light of those limitations, and other errors and corrections BEA made, we maintain that BEA’s revised estimation model produces unreliable remittance estimates. Regarding our conclusion that BEA did not follow the best practices, policies, and guidance to which it is subject for documenting its methods and analyses, BEA stated that the documentation provided to GAO was fully adequate. We disagree.
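To make concrete the kind of alternative analysis discussed above, the Python sketch below fits a censored-regression (Tobit-style) model by maximum likelihood and compares it with a naive least-squares fit that treats top-coded values as exact. The simulated data, variable names, censoring point, and coefficients are all hypothetical; this is an illustration of one technique for censored data, not BEA's model or a reconstruction of its data.

```python
# A minimal sketch, on simulated data, of estimating a remittance equation
# when reported amounts are right-censored (top-coded). Hypothetical values
# throughout; not BEA's model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
income = rng.uniform(10, 80, n)                       # hypothetical income, $ thousands
latent = 0.5 + 0.05 * income + rng.normal(0, 1.5, n)  # true (latent) remittances
cap = 4.0                                             # top-code: values above cap reported as cap
y = np.minimum(latent, cap)
X = np.column_stack([np.ones(n), income])

def negloglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    at_cap = y >= cap
    # Censored observations contribute P(latent >= cap); others the normal density.
    ll = np.where(
        at_cap,
        norm.logsf((cap - xb) / sigma),
        norm.logpdf((y - xb) / sigma) - np.log(sigma),
    )
    return -ll.sum()

ols_beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # naive fit ignoring censoring
tobit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
print(f"OLS income slope (attenuated): {ols_beta[1]:.3f}")
print(f"Tobit income slope:            {tobit.x[1]:.3f}")
```

Comparing the two slopes shows how treating top-coded values as exact can pull estimates toward zero; a parallel exercise varying the assumed income value within each reported bracket (lower bound, midpoint, upper bound) would show the sensitivity to the midpoint assumption discussed above.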
As discussed in this report, we identified several instances where BEA did not follow best practices, policies, and guidance. For example, we requested files that provided documentation of the analyses BEA conducted to determine changes to its estimation methodology. BEA provided written descriptions of its regression analysis in a conference paper. BEA staff told us that the agency’s analysis files had been saved among many partially complete files and that it would be difficult to identify the files that led to the current version of the model. BEA’s best practice standards require that all methodological changes and the rationale for the changes be clearly documented. As we describe in the report, without documentation BEA could not effectively convey and support the rationale and appropriateness of its methodology. We were unable to verify, among other things, the accuracy of much of BEA’s data or fully understand the selection of its methodology. As we stated in the report, documentation of analysis, testing, and evaluations of models should show evidence of adherence to procedures and policies that are part of an effective quality assurance framework. BEA did not provide documentation that reflected such a framework. For example, BEA officials described conducting managerial and external reviews of the model’s revision but provided only the minutes of one management review meeting, which indicated that the model had been discussed but was still under consideration. Though we requested documentation of final approval of the model by the management committee, BEA told us that it had nothing further to provide. BEA also described a review of its model revision that was done by an external econometrician for quality assurance purposes. When we asked for documentation of this review, however, BEA told us that it had been informal and that no written opinion had been provided. In addition, BEA stated that our ability to reproduce the agency’s estimates showed that its documentation was adequate. However, we did not attempt to reproduce BEA’s estimates. Rather, we ran the computer program that BEA provided on the data created by BEA to replicate a few intermediate steps in its methodology. By replicating these steps, we found inconsistencies between BEA’s description of the analysis and what was actually done, as well as other errors. Based on the documentation that BEA provided, we did not, and could not, reproduce the analysis that led to the final remittance estimates, or even recreate the dataset BEA used from its listed sources. BEA noted that it provided us with new summaries to help explain certain aspects of its methodology, but asserted that we conflated this additional effort with an inadequacy of internal control and initial documentation. However, we maintain that in some cases, BEA provided these summaries because it was unable to provide us with original documentation. For example, we asked for records of analysis that supported the calculations of 2012 and 2013 estimates. BEA told us that the documents were created only when each year’s estimate was produced and were not saved. BEA also was unable to provide original documentation of the analysis that led to the current version of the model and attempted to recreate its steps in new documentation. BEA also rejected our statement that it did not follow best practices because it did not consider remittances to be influential.
During our review, BEA staff told us that information about BEA’s remittance estimates was designated as influential only to prevent their disclosure before they were officially released. BEA also told us orally and in writing that, as personal remittances were a relatively small component of the U.S. current account, resources devoted to improving the estimate of remittances had to be balanced with resources allocated to improving other estimates that could be more important to the balance of payments. Finally, BEA stated that its remittance estimate was not designed to measure the potential impact of the WIRE Act (proposed Remittance Status Verification Act of 2015), and that it understood that we would use its estimates as a basis for understanding the magnitude of cross-border transfers. BEA’s comment inaccurately described the purpose and scope of our review. As we describe in this report, our review focused on two separate objectives, which were to (1) discuss the potential effects of assessing a fine on remitters unable to provide proof of legal U.S. immigration status, and (2) examine BEA’s remittance estimate and the extent to which its revised estimation methodology met government-wide policies and agency best practices. We used information on BEA’s remittance estimates solely to help us answer the report’s second objective.

We are sending copies of this report to interested congressional committees and the Secretaries of Commerce, Homeland Security, State, and Treasury, as well as to CFPB and the Federal Reserve Board. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

This report (1) discusses the potential effects of collecting information from and imposing a fine on remitters unable to provide proof of legal U.S. immigration status, and (2) examines the Bureau of Economic Analysis’ (BEA) remittance estimate and the extent to which the revised estimation methodology met government-wide policies and agency best practices. To discuss the potential effects of assessing a fine on remitters unable to provide proof of U.S. immigration status, we summarized estimates of the number of immigrants without legal status from federal agencies and research organizations, including the Department of Homeland Security (DHS), the Pew Research Center, and the Center for Migration Studies (CMS). We determined that DHS, Pew Research Center, and CMS are the primary sources for estimates of immigrants without legal status in the United States by asking experts from each organization to discuss all other similar estimates. Through interviews with immigration researchers, review of research articles, and comparison of the estimates, which ranged from 11.1 million to 11.4 million immigrants in the United States without legal status in 2012, we determined that the estimates were authoritative and sufficiently reliable for the purposes of this report. We used these sources to identify the size of the potentially affected group of immigrants without legal status. To acquire information on the effects of the proposed requirement to provide proof of legal status or pay a fine, we reviewed relevant academic and industry studies based on a literature search.
We reviewed and summarized the literature for factors that could be associated with the proposed legislation, including the number and remitting behavior of immigrants without legal status, changes in remittance flows in response to a price increase, the effect of requiring proof of legal status on remittances, and market competition between remittance providers. We determined the studies to be reliable for our purposes. To obtain perspectives on the potential effects of imposing a fine on remitters without proof of U.S. legal status, we interviewed researchers with expertise in remittances and immigration to the United States, financial institutions, remittance service providers, two industry trade associations, one state audit association, two community groups with knowledge of remitters’ concerns, and knowledgeable federal and international agencies. We judgmentally selected a cross-section of remittance transfer providers that included five nondepository remittance transfer providers and four depository institutions based on a number of factors, including the volume of remittances and diversity of countries serviced. We spoke with regulators, including the Consumer Financial Protection Bureau (CFPB) and the Financial Crimes Enforcement Network (FinCEN), to obtain their perspectives on compliance with requirements of the proposed Remittance Status Verification Act of 2015, should it become law. We also reviewed laws and regulations relevant to remittance transfer providers. We selected researchers with expertise in remittance transfers by contacting two recognized experts and asking for referrals. We interviewed the recommended experts and continued to ask for additional referrals until the names began to repeat experts we had already interviewed. To select community groups, we asked others we interviewed for recommended groups. To highlight the uncertainty associated with the effects of the fine, we constructed a scenario analysis of several factors that may affect net revenue from the fine, which is the amount of fine collected that remains available for border protection after payment of CFPB’s administrative and enforcement costs. We varied hypothetical amounts for the following three factors: the dollar amount of remittances sent by immigrants without legal status, the percentage reduction in remittances in response to the fine, and the cost of administration and enforcement. We selected these three factors after analyzing them among other potential factors and finding that these three provided wide variability in net revenue from the fine. Other factors we considered included the volume of total remittances, the percentage transmitted through formal methods, and the percentage of remittances sent by immigrants without legal status. Though we conducted a literature search for statistics for each factor in our analysis, the studies we found were not generalizable or sufficient for our purposes. The data were limited to remittance flows between specific countries, for example remittances sent between the United States and Mexico, or were not recent. Therefore, the dollar amounts or percentages given to each factor in our scenario analysis are hypothetical and selected only to show the potential variability in net revenue from the fine. To obtain information on BEA’s estimate of remittances (personal transfers) from the United States, we met several times with BEA officials responsible for developing the estimate.
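The arithmetic of this scenario analysis can be sketched in a few lines. The Python sketch below is a hypothetical illustration in the same spirit as the report's analysis: the fine rate, remittance volumes, behavioral responses, and cost figures are assumed values chosen only to show how net revenue varies across the grid of the three factors.

```python
# A minimal sketch of the scenario analysis: net revenue = fine collected on
# remaining remittances minus administrative and enforcement costs. All
# values, including the fine rate, are hypothetical.
from itertools import product

FINE_RATE = 0.07  # assumed fine as a share of each transfer

volumes = [10e9, 20e9, 30e9]     # $ remitted by senders subject to the fine
reductions = [0.10, 0.40, 0.70]  # share of those remittances deterred by the fine
costs = [50e6, 150e6, 300e6]     # CFPB administrative and enforcement costs

for volume, reduction, cost in product(volumes, reductions, costs):
    collected = volume * (1 - reduction) * FINE_RATE
    net = collected - cost  # amount remaining for border protection
    print(f"volume=${volume / 1e9:.0f}B, reduction={reduction:.0%}, "
          f"cost=${cost / 1e6:.0f}M -> net=${net / 1e6:,.0f}M")
```

Under these assumed numbers, net revenue ranges from negative values (high deterrence, high cost) to over a billion dollars, which is the kind of variability the scenario analysis was designed to highlight.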
BEA officials provided us with estimates of the total volume of remittances from the United States to the rest of the world from 2006 to 2014, which BEA had provided to the IMF for inclusion in balance of payments statistics. In this report, we further assess BEA’s estimation model and find that its results are unreliable. To understand BEA’s revised methodology for estimating remittances (personal transfers), we conducted multiple interviews with BEA staff responsible for developing the estimate. We obtained BEA documentation describing the agency’s approach to estimating remittances, including components of its model, related statistical program files, and its outputs. We reviewed BEA’s presentation and description of the model and checked for consistency with its statistical program files and other calculations. We provided BEA with numerous follow-up questions about the methodology, and BEA provided us with written responses and attended additional meetings to provide more clarity. We also obtained documentation on the Census Bureau’s (Census) American Community Survey and Current Population Survey data to understand how they were used in BEA’s remittance estimation methodology and interviewed Census officials familiar with the surveys. We also reviewed BEA’s best practices, Office of Management and Budget (OMB) statistical directives, and the National Research Council (NRC) of the National Academies of Sciences’ manual for statistical agencies to determine the extent to which BEA’s methodological changes conformed with guidance on statistical practices. To determine the extent to which BEA documented its changed methodology and its results and adhered to best practice standards, we met with BEA staff responsible for developing the estimate. BEA staff explained their documentation procedures to us. BEA staff also provided copies of BEA guidance on best practices regarding methodological changes. We also reviewed relevant law and regulations, as well as guidance from IMF, the Department of Commerce, OMB, and NRC. We reviewed documents provided by BEA for transparency and completeness. Additionally, we provided BEA with follow-up questions about the agency’s documentation processes and procedures, and BEA provided us with written responses. After receiving the responses, we again met with BEA staff to discuss these processes and procedures. To obtain a variety of views on remittance estimation, we met with officials from IMF, the World Bank, and the Inter-American Development Bank and its external consultant, as well as the Mexican and Philippine central banks. We selected these two countries because they were among the top 10 recipient countries of annual U.S. outflows and both countries use a formal methodology to track inflows and outflows on at least an annual basis. In meetings with these entities, we gained an understanding of the methodologies used to estimate remittances and challenges in remittance estimation. We conducted this performance audit from October 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Marshall Hamlett (Assistant Director); Julie Trinder-Clements (Analyst-in-Charge); Namita Bhatia-Sabharwal; Tarik Carter; Emily Chalmers; David Dornisch; Lawrance Evans, Jr.; Donald Hirasuna; Cheryl Jones; Madeline Messick; Patricia Moye; Jungjin Park; Oliver Richard; and Jena Sinkfield made key contributions to this report.
For many countries, remittances represent a large and stable source of foreign currency. Remittances have received increasing attention from policymakers as the volume of funds transferred has increased over the years. Despite the global significance of remittances, much remains unknown about the actual volume of remittances and the methods used to remit them. GAO was asked to study the potential effects of a fine on certain remitters and estimates of U.S. remittances. GAO examined (1) the potential effects of a fine on remitters unable to provide proof of legal immigration status, and (2) BEA's remittance estimate and the extent to which its revised estimation methodology met government-wide policies and best practices. GAO constructed a hypothetical scenario analysis to show the uncertainty associated with the effects of a fine. GAO interviewed, among others, officials from BEA, the International Monetary Fund, and the World Bank, as well as researchers. GAO also analyzed BEA's estimate of U.S. remittances and documentation of its methodologies. The Remittance Status Verification Act of 2015, S. 79, would require remittance transfer providers to request that all senders of remittances to recipients outside the United States provide proof of their legal status under U.S. immigration laws and would impose a fine on those unable to provide such proof. The funds collected would be submitted to the Consumer Financial Protection Bureau (CFPB) to pay for its administrative and enforcement costs in carrying out the act, and any remaining funds would be used to pay expenses related to border protection. The fine may raise money for border protection, but the exact amount is unknown and would depend on several factors, including the dollar amount of remittances sent by those without legal status, changes in remitter behavior due to the fine, such as using unregulated transfer methods, and CFPB's administrative and enforcement costs to carry out the act. The first two factors above affect the volume of remittances that would be subject to a fine. The third factor affects the amount of net revenue from the fine remaining for border protection. Finally, remittance transfer providers told GAO that the fine could have consequences for them, including potentially disproportionate costs for small providers. The Bureau of Economic Analysis (BEA) estimated that remittances from the United States were approximately $40 billion in 2014. However, BEA's methodology for estimating remittances is not consistent with government-wide policies and guidance on statistical practices or with BEA's own best practices and thus produces unreliable estimates. GAO identified several weaknesses in BEA's estimation methodology, illustrated by the following examples. BEA failed to use an appropriate methodology that addressed questionable aspects of the data, such as missing information and measurement problems. This is inconsistent with National Research Council of the National Academies of Sciences guidelines for federal statistical agencies and with government-wide policies. BEA also calibrated the output of the new model to match the estimate produced by BEA's previous model. BEA did this because, according to officials, the new model produced substantially lower results than BEA had previously estimated. In a 2006 report, GAO had questioned the reliability of BEA's previous model; as a result, BEA's actions raise further concerns about the reliability of the new model's results.
Moreover, BEA could not provide adequate, transparent documentation underlying its methodology or reviews of its methods and data. According to BEA officials, BEA did not adhere to its own best practices for changing its methodology because they did not consider the remittance estimate to be influential information. However, BEA's estimate is influential, as it is cited by national and international organizations and in some cases is incorporated into the estimates of these organizations, including the World Bank. GAO recommends that BEA conduct analyses to improve the reliability of its estimate and follow established policies for documenting its methods and analyses. BEA agreed to implement the recommendations but disagreed with the conclusions that its estimates are unreliable and that they are not adequately documented. GAO disagrees and maintains that BEA's revised estimation model produces unreliable estimates and that BEA could not provide adequate documentation of its methodology.
U.S. Attorneys serve as the nation’s principal litigators under the direction of the Attorney General. U.S. Attorneys conduct most of the trial work in which the United States is a party. Under 28 U.S.C. 547, U.S. Attorneys have three statutory responsibilities: (1) prosecute criminal cases brought by the federal government; (2) prosecute and defend civil cases in which the United States is a party; and (3) collect debts owed the federal government that are administratively uncollectible. EOUSA was established to provide a liaison between DOJ in Washington, D.C., and the 93 U.S. Attorneys. EOUSA provides each U.S. Attorney and the 94 U.S. Attorneys Offices general executive assistance and direction, policy development, administrative management and oversight, operational support, and coordination with other components of DOJ and other federal agencies. In fiscal year 2002, U.S. Attorneys’ and EOUSA’s budgets were about $1.5 billion and $64.6 million, respectively. OJP, the grant making arm of DOJ, provides grants to various organizations, including state and local governments, universities, and private foundations, that are intended to develop the nation’s capacity to prevent and control crime, administer justice, and assist crime victims. OJP’s Assistant Attorney General is responsible for overall management and oversight of OJP by setting policy and ensuring that OJP policies and programs reflect the priorities of the President, the Attorney General, and the Congress. The Assistant Attorney General promotes coordination among the various bureaus and offices within OJP. Staff of the bureaus and program offices develop OJP grant programs, accept and review applications, make grant awards, and manage and monitor grantees until the award is closed out. In fiscal year 2002, OJP’s budget was about $4.3 billion. According to OJP and EOUSA officials, U.S. Attorneys and their staff currently are involved in two DOJ programs involving OJP grants—PSN and Weed and Seed. As mentioned earlier, BJA is responsible for national administration and management of grants awarded under the PSN initiative. PSN, which was initiated in fiscal year 2001 by the President and the Attorney General, was designed to commit more than $900 million over a 3-year period to hire new federal and state prosecutors, support investigators, provide training, and develop and promote community outreach efforts, all with the goal of focusing community attention and energy on reducing gun violence. Under the program, U.S. Attorneys were to take the lead in mobilizing federal, state, and local officials in their districts by establishing PSN task forces to develop comprehensive gun violence reduction strategies or review and enhance existing strategies. PSN task forces are to implement these strategies, in part, through the use of various OJP grants awarded in each U.S. Attorney’s district. These OJP grants are the (1) Research Partner/Crime Analyst Grants to support the strategic planning and accountability portion of PSN, (2) Media Outreach and Community Engagement Grants to help task forces in their community outreach initiatives, (3) Project Sentry Grants to help task forces address local juvenile related gun crimes, and (4) Open Solicitation Grants to support comprehensive and innovative approaches to reduce gun violence in local communities. EOWS is responsible for providing national leadership as well as management and administration of the Weed and Seed Program, which in fiscal year 2002 had a budget of about $59 million.
Under the program, U.S. Attorneys are to serve as both the main contact to Weed and Seed sites for EOWS and as facilitator of the program’s community-based coordination efforts. Accordingly, U.S. Attorneys are to work with local stakeholders to develop and implement a community-based, multiagency strategy that proposes to “weed out” crime from targeted neighborhoods, then “seed” the site with a variety of programs and resources to prevent crime from recurring. In fiscal year 2002, there were about 229 Weed and Seed sites and the average grant awarded per site was about $200,000. Guidelines first established by the Attorney General in 1994 stated that U.S. Attorneys and their staff may be involved in their community’s crime prevention and control efforts—including efforts to secure DOJ grant funds and work with grantees—as long as they subscribe to legal and ethical considerations. DOJ components have recently issued related guidelines for U.S. Attorneys and their staff that, among other things, focus specifically on their dealings with grant applicants and grantees under the PSN and Weed and Seed Programs. According to EOUSA officials, DOJ issued program-specific guidelines in response to the numerous questions by U.S. Attorneys and their staff concerning their role in relation to PSN and Weed and Seed. U.S. Attorneys are encouraged to be involved in community-based activities that seek and secure DOJ grant funds as long as they and their staff subscribe to legal and ethical considerations commensurate with being a government employee, an attorney, and U.S. Attorney. According to guidelines established by the Attorney General in 1994 and revised in January 2001, U.S. Attorneys are encouraged to engage in community-based crime prevention and control activities and form coalitions with nonfederal, community-based organizations, private entities, and law enforcement because “promoting crime prevention initiatives enhances the presence of the Department of Justice in communities around the country and has proven effective in reducing crime.” The guidelines state that, when working with nonfederal entities in implementing crime prevention initiatives, U.S. Attorneys and their staff are to remain impartial in carrying out their official duties and be careful to avoid the appearance of partiality; consider conflicts of interest statutes when crime prevention activities involve persons or organizations with whom they have a personal, financial, or business relationship; and avoid participation in coalitions that include individuals and nonfederal organizations that may be victims, witnesses, subjects, or targets in matters pending in their districts. Thus, under the Attorney General’s guidelines, U.S. Attorneys may convene meetings with other potential coalition participants to discuss operating needs, program initiatives, event planning, and other related matters, but they are to avoid participating in budget decisions of a coalition, including decisions regarding the expenditure of funds that could create the appearance that the U.S. Attorney is managing an entity outside of DOJ. Also, according to the guidelines, U.S.
Attorneys may endorse specific coalition-based program initiatives as long as they refrain from endorsing specific organizations; give presentations about coalition initiatives at fund-raising events as long as the presentation addresses official DOJ issues and does not solicit contributions; and participate in public service announcements with other coalition members when the purpose of the announcement is to further DOJ’s mission and coalition initiatives. With regard to grants, the guidelines state that U.S. Attorneys may provide potential grant applicants with public information regarding sources of federal funding and respond to inquiries regarding the grant application process. Furthermore, they may draft a letter of recommendation to OJP supporting a grant application. According to the guidelines, this letter can identify the applicant’s accomplishments and may express the U.S. Attorney’s views on whether government program funds should or should not be granted to a particular applicant. However, U.S. Attorneys’ names are not to appear on grant applications unless required by law, and U.S. Attorneys are not to otherwise contact federal agencies on behalf of an applicant seeking federal grant monies. DOJ components involved in the PSN and Weed and Seed Programs have taken steps to provide specific guidance to U.S. Attorneys and their offices in carrying out their grant-related responsibilities. In May 2002, EOUSA told U.S. Attorneys and their staff that BJA had published Web-based guidelines for U.S. Attorneys Offices and PSN task forces to instruct them about their role in the process to solicit, review, and select grant proposals. According to the memorandum issued by EOUSA’s Director, the guidance was designed to provide step-by-step instructions on the grant process, including guidance about specific ethics issues. In December 2002, EOUSA told U.S. Attorneys and their staff about new PSN guidelines—again including guidance about ethics issues—designed to cover grants to be awarded in fiscal year 2003. During the same month, the EOUSA Director sent a memorandum to all U.S. Attorneys, their senior staff, and Law Enforcement Coordinating Committee (LECC) Coordinators about U.S. Attorneys Offices’ responsibilities in implementing the Weed and Seed Program, including how to deal with ethics concerns related to Weed and Seed grant activities. Appendix I provides greater detail on the guidelines DOJ components issued for U.S. Attorneys Offices on PSN and Weed and Seed during calendar year 2002. According to EOUSA officials, the decision to issue guidelines for each program resulted from DOJ’s overall effort to develop the PSN Program. EOUSA’s Deputy Legal Counsel in EOUSA’s Office of Legal Counsel said that the PSN guidance was not prompted by any particular incident; rather, it was developed in response to numerous questions about PSN-related ethics issues from U.S. Attorneys and their staff as the program was being developed. The Deputy Legal Counsel said that this exercise, combined with similar questions from U.S. Attorneys and their staff, subsequently prompted EOUSA to develop the December 2002 guidance for the Weed and Seed Program. EOUSA’s Deputy Legal Counsel also said that EOUSA has provided ethics training to U.S. Attorneys and their staff on their roles and responsibilities as they relate to grants offered and awarded under both programs. In January 2002, EOUSA provided a presentation to U.S.
Attorneys at the first national PSN conference, and in April 2002, EOUSA provided the same presentation for each district’s LECC Coordinators at a similar conference. The presentation included a discussion of what U.S. Attorneys and their staff can and cannot do when participating in the grant process. The Deputy Legal Counsel said that ethics training pertinent to the Weed and Seed Program was also provided to LECCs during October 2002. Also, in December 2002, EOUSA produced and disseminated a video that discussed the process U.S. Attorneys are to follow when working with PSN task forces during the grantee selection and application process. EOUSA’s Legal Counsel and Deputy Legal Counsel also indicated that they believe that training, available guidance on ethics issues, and staff awareness of standards of conduct and actual or apparent conflicts of interest are sufficient to ensure that ethical lapses will not occur. They said that they were unaware of any ethical lapses and said that if questions were raised, DOJ’s Office of Inspector General (OIG) would investigate them. OIG staff we contacted who were responsible for dealing with ethical issues at DOJ said they were aware of only one complaint involving a U.S. Attorney and the Weed and Seed Program and none regarding PSN. An effective internal control process is one that provides management with a reasonable level of assurance that agency operating, financial, and compliance objectives are being achieved on a systematic basis. EOUSA has an evaluation program to assess and oversee the overall operations of each U.S. Attorneys Office—including operations associated with the management of the PSN and Weed and Seed Programs—but the evaluations are not designed to assess compliance with the recently issued PSN and Weed and Seed guidelines. Similarly, federal regulations and procedures call for systematic financial disclosure reporting to, among other things, facilitate the review of possible conflicts of interest to guarantee the efficient and honest operation of the government. However, DOJ has not established a financial disclosure reporting mechanism for certain individuals—employees of U.S. Attorneys Offices that work with grantees and potential grantees and nonfederal appointees to PSN grant selection committees—to provide management assurance that these individuals are free from actual or apparent conflicts of interest. According to the Comptroller General’s Standards for Internal Control in the Federal Government, internal control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. They include, for example, steps to set the specific standards or criteria to be achieved by staff as well as steps that provide management the information to determine on a routine basis whether the standards are being met and to take corrective action when they are not. EOUSA has an evaluation program to assess and oversee the overall operations of each U.S. Attorneys Office that includes an assessment of the office’s involvement in and performance related to the Weed and Seed and PSN Programs. However, the evaluations are not designed to assess compliance with the Weed and Seed and PSN guidance related to ethical concerns that EOUSA recently issued. Under 28 C.F.R. Part 0.22, EOUSA is to evaluate the performance of the U.S. Attorneys Offices, make appropriate reports, and take corrective actions if necessary.
EOUSA’s Evaluation and Review Staff (EARS) is responsible for the evaluation program, which, according to EOUSA, is an internal review program designed, among other things, to examine management controls and prevent waste, loss, unauthorized use, or misappropriation in federal programs, as required under the Federal Managers’ Financial Integrity Act. EARS evaluations are conducted in each of the 94 U.S. Attorneys Offices every 3 years by teams of experienced Assistant U.S. Attorneys and administrative and financial litigation personnel from other U.S. Attorneys Offices. According to EOUSA’s Assistant Director for EARS, these assessments focus on personnel, management, and workload issues in individual U.S. Attorneys Offices and include, among other things, an assessment of the management and operations of the local Weed and Seed and PSN Programs. Our review of the EARS guidelines shows that, when evaluating the management of the PSN and Weed and Seed Programs in U.S. Attorneys Offices, review teams were to focus on task force or committee management issues rather than compliance with the recently published guidelines. For example, the template for the PSN part of the EARS review instructs EARS reviewers to examine, among other things, whether the PSN strategy had been implemented. If so, evaluators were instructed to provide information on a variety of matters, including the names of the PSN coordinators and the litigation units, sections, or branch offices where they serve; the nature of the partnerships that have been developed with federal, state, and local law enforcement and whether the partnerships are districtwide or tailored to meet the individual needs or problems facing branch offices; the community outreach activities associated with PSN; the number of specially allocated attorney and support staff positions allocated to the office and whether they have been filled; and examples of successes achieved under the program. For the Weed and Seed Program, the template instructs review teams to respond to the following five questions: Does the district have a funded Weed and Seed Program? If so, describe the site, its organization, committees, management, programs, and initiatives. Who in the U.S. Attorneys Offices supervises and works with the Weed and Seed Program? What is the U.S. Attorneys’ role in the Weed and Seed Program? What other U.S. Attorneys Offices staff, such as the LECC Coordinator or Assistant U.S. Attorneys, have a role in the Weed and Seed Program? Do you know of any problems or concerns with the Weed and Seed Program? EOUSA’s Assistant Director for EARS said that reviews for both programs were broad-based management reviews and were not designed to be audits of the programs. The Assistant Director also said that there are plans to revise the PSN part of EARS to include an evaluation of gun-crime data that is to be reported to the Attorney General twice yearly, but there are no similar plans to revise the Weed and Seed part of EARS. Regarding the recently issued PSN and Weed and Seed guidelines, the Assistant Director said that there are no plans to revise EARS to assess compliance with the guidelines and determine whether they are working as intended. Staff in U.S. Attorneys Offices can be delegated responsibility to lead or work with community organizations that receive Weed and Seed grant funds, but these staff are not required to file disclosure forms. These forms might reveal relationships that could be actual or potential conflicts of interest. According to 5 C.F.R.
2634.904, each officer or employee whose position is classified at GS-15 or below, or at a rate of pay that is less than 120 percent of the minimum rate of pay for GS-15, is required to file a confidential financial disclosure report if the agency concludes that the duties and responsibilities of the employee’s position require the employee to participate personally and substantially, through decision or exercise of judgment, in taking a government action regarding contracting or procurement; administering or monitoring grants, subsidies, licenses, or other federally conferred financial or operational benefits; regulating or auditing any nonfederal entity; or other activities in which the final decision or action will have a direct and substantial economic effect on the interests of any nonfederal entity. Filing may also be required to avoid involvement in a real or apparent conflict of interest and to carry out the purpose behind any statute, executive order, rule, or regulation applicable to or administered by that employee. According to 5 C.F.R. 2634.901, these reports are designed to (1) assist an agency in administering its ethics program and counseling its employees and (2) facilitate the review of possible conflicts of interest to guarantee the efficient and honest operation of the government. During our review, we examined the most recent summaries of EARS reports, dated between June 1997 and April 2000, for the 10 U.S. Attorneys Offices we visited. In some of these districts, U.S. Attorneys participated on the Weed and Seed steering committee, while in others, Assistant U.S. Attorneys or LECC Coordinators were delegated responsibility for working with Weed and Seed committees and, according to one report, “run” the Weed and Seed Program. None of the EARS reports addressed any involvement with the PSN Program because, when the reviews were completed, PSN had not been implemented. Our work in the 10 districts also showed that 9 of the districts had active Weed and Seed sites in place, and in some districts, new Weed and Seed sites were under consideration. Among the districts that had active Weed and Seed sites, some of the U.S. Attorneys told us that they actively worked with Weed and Seed committees, whereas others delegated responsibility to an Assistant U.S. Attorney or to LECC Coordinators. For example, in one district the LECC Coordinator represented the U.S. Attorney on the Weed and Seed committee, while in another district the LECC Coordinator helped manage the Weed and Seed site’s day-to-day operations. Given recent EOUSA, BJA, and EOWS efforts to publish PSN and Weed and Seed guidelines and train U.S. Attorneys and their staff about ethical concerns, we asked EOUSA officials whether U.S. Attorneys and their staff who deal with potential grant applicants and grantees were required to file financial disclosure statements. The officials provided information, published on DOJ internal Web pages, which showed that under current DOJ guidelines: U.S. Attorneys, Assistant U.S. Attorneys in supervisory positions, Senior Litigation Counsels, Special Government Employees, and Schedule C employees are required to file a Public Financial Disclosure Report within 30 days of assuming their covered position and annually thereafter. All line Assistant U.S. Attorneys and special Assistant U.S. Attorneys are required to file a Confidential Conflict of Interest Certification Form to certify that they have no conflict of interest in each matter they undertake.
Employees occupying positions in which they exercise significant judgment on matters that have an economic effect on the interests of a nonfederal entity are required to file a confidential financial disclosure report within 30 days of entering a covered position and every year by October 31; these include positions whose duties involve contracting, procurement, administering grants, regulating, or auditing a nonfederal entity, or other activities in which the final decision or action will have a direct and substantial economic effect on the interests of any nonfederal entity. EOUSA’s Deputy Legal Counsel also told us that LECC Coordinators and Assistant U.S. Attorneys who work with organizations involving grantees are not required to file confidential disclosure forms because they are not responsible for administering or monitoring grants. The Deputy Legal Counsel pointed out that employees in U.S. Attorneys Offices are not supposed to monitor grants. The Deputy Legal Counsel said that the Weed and Seed guidelines instruct employees not to act on behalf of EOWS; rather, they are to notify EOWS of any issues that may arise during the course of the grant relationship, and EOWS is to handle the matter under its own procedures. Nonetheless, the Deputy Legal Counsel acknowledged that U.S. Attorneys Office staff who work with grantees under the Weed and Seed Program might encounter situations that could be perceived as real or apparent conflicts of interest. Furthermore, the Deputy Legal Counsel and EOUSA’s Deputy Director said that, based on our inquiry, it might be worthwhile to consider a change to procedures so that LECC Coordinators would be required to file confidential disclosure statements. The Deputy Legal Counsel added that Assistant U.S. Attorneys are already required to file the confidential certification form for each matter they are involved with and said it was not clear whether involvement in a community Weed and Seed activity related to grants would constitute a matter covered by the certification form. In developing the PSN grant program, BJA modeled the PSN selection committee process after its peer review process, in which peer review committees are used to assess the merits of grant applications and make recommendations about worthy grant applications. However, whereas BJA has established a process to screen peer reviewers for actual or apparent conflicts of interest before they are appointed to peer review committees, it has not established a similar process for members of PSN selection committees. According to a BJA project manager, BJA uses a multistep process to screen potential peer reviewers for conflicts of interest in reviewing applications for grants. BJA hires a peer review contractor who is responsible for conducting a preliminary screening of potential peer reviewers for conflicts of interest based on guidelines established by BJA. Once past the preliminary screening, peer reviewers are asked to self-identify any conflicts of interest by signing a certification statement. EOUSA’s PSN coordinator told us that BJA has delegated its peer review authority to U.S. Attorneys and, as discussed earlier, BJA has issued guidance that includes the steps the U.S. Attorneys are to follow when appointing members of the selection committee—peer reviewers for PSN grants. BJA’s guidance states that the selection committee can include any or all of the other members of the PSN task force, except the U.S.
Attorney, a member of his or her staff, or any federal employee, as long as their participation does not represent an actual or apparent conflict of interest. The guidance further reminds the U.S. Attorneys that the Standards of Conduct and Conflict of Interest Rules that apply to them and their staff also apply to members of the selection committee. However, unlike the peer review process employed by BJA for other grant programs, U.S. Attorneys are not required to screen the selection committee members they appoint for actual or apparent conflicts of interest, nor are committee members asked to self-identify any actual or apparent conflicts of interest. Our discussions with BJA and EOUSA officials responsible for PSN indicated that the lack of a mechanism for identifying actual or apparent conflicts of interest among selection committee members was not a problem because they believe (1) appointees from these organizations would likely be covered by their own ethical guidance governing their capacity as a selection committee member and (2) the geographic area covered by individual PSN grants is so small that local jurisdictions would not select someone with a vested interest in who receives the grants to serve on the selection committee. BJA’s Director of the Programs Division told us that, when BJA developed the guidelines for PSN selection committees, BJA had not thought of including a requirement that selection committee members submit a signed self-disclosure conflict of interest statement. The Director of the Programs Division said that, based on our inquiry, it might be useful to include some type of requirement for conflict of interest reporting to add an additional level of assurance about the integrity of the PSN Program. Accordingly, in April 2003, the Director of BJA’s Programs Division said that BJA would issue a directive requiring PSN fiscal agents to collect a signed self-certified conflict of interest statement from PSN selection committee members. Fiscal agents would be required to maintain the statements on file, subject to review by BJA in its capacity as grant monitor. DOJ efforts to provide guidance to U.S. Attorneys Offices regarding their involvement in activities associated with grants awarded under the PSN and Weed and Seed Programs are notable. However, as U.S. Attorneys and their staff become more heavily involved in these grant programs, they could increasingly encounter actual or apparent conflicts of interest that could undermine the integrity of the programs both within districts and nationwide. Without a mechanism for monitoring U.S. Attorneys Offices’ compliance with available guidance, DOJ does not have reasonable assurance that its steps taken to date—such as the issuance of guidance, ethics training, and video presentations—are adequately understood and have reached all those who are covered by this guidance. DOJ components, such as EOUSA and BJA, are also not positioned to determine (1) whether the guidelines are correctly applied and are actually and systematically achieving the end result of preventing actual or apparent ethical conflicts or (2) whether guidelines related to grant activities could be clarified, strengthened, or improved. In addition, the absence of confidential financial disclosure reporting for U.S. Attorneys Office employees who work with grantees hinders the U.S.
Attorneys’ ability to (1) fully administer these programs in the context of ethics considerations and (2) identify possible conflicts of interest to guarantee the efficient and honest operation of the government. We recommend that the Attorney General instruct the Director of EOUSA and U.S. Attorneys to take steps to further mitigate the risk associated with U.S. Attorneys Offices’ involvement in the grant components of the PSN and Weed and Seed Programs. Specifically, we recommend that EOUSA and U.S. Attorneys (1) establish a mechanism to assess and oversee compliance with recently issued guidelines pertaining to the grant activities of U.S. Attorneys Offices and ensure that the guidelines are working as intended and (2) require that U.S. Attorneys’ staff who work with community organizations on grant-related matters file financial disclosure reports certifying that they are free from conflicts of interest. On May 13, 2003, we requested comments on a draft of this report from the Attorney General. On May 19, 2003, Department of Justice officials informed us that they had no comments on the report. Copies of this report will be made available to other interested parties. This report will also be available on the GAO Web site at http://www.gao.gov. If you have any questions, please contact my Assistant Director, John F. Mortin, or me at (202) 512-8777. You may also contact Mr. Mortin at [email protected], or me at [email protected]. Key contributors to this report were Daniel R. Garcia, Grace Coleman, and Maria Romero.

The following paragraphs summarize the guidelines the Department of Justice (DOJ) issued for U.S. Attorneys and their staff during calendar year 2002 regarding their role in working with grants and grantees awarded under the Project Safe Neighborhoods (PSN) and Weed and Seed Programs. During 2002, DOJ issued two sets of guidelines for U.S. Attorneys and PSN task forces in carrying out their responsibilities under PSN. Under the May 2002 PSN guidelines, each U.S. Attorneys Office was instructed to work with interested federal, state, and local officials to form a PSN task force, chaired or co-chaired by the U.S. Attorney, to develop a comprehensive strategic plan. As part of this process, the task force was to formulate its overall mission and goals, after which the U.S. Attorney was instructed to designate a selection committee to (1) review eligible grant proposals and (2) select a single grantee for Research Partner/Crime Analyst and Media Outreach and Community Engagement grants funded in fiscal year 2002. The guidelines stated that the selection committee was not to include members of the U.S. Attorneys’ staff, but could include other members of the task force as long as their participation did not represent an actual or apparent conflict of interest. In addition, the guidelines instructed the U.S. Attorney to certify to the selection committee, based on the recommendations of the task force, whether potential grantees are suitable candidates for federal funding and to convey the committee’s choice to the Bureau of Justice Assistance (BJA), along with a letter from the U.S. Attorney certifying that (1) the potential grant recipient is free from allegations of criminal misconduct and current investigation and (2) the applicant’s proposal supports the PSN task force activities, missions, and goals. In December 2002, the Executive Office for U.S.
Attorneys (EOUSA) announced that BJA had issued similar guidelines for reviewing and selecting applicants for grants funded in fiscal year 2003. As before, U.S. Attorneys and their staff were instructed to work with the PSN task force and, among other things, the U.S. Attorney was to designate a selection committee—not comprised of the U.S. Attorneys’ staff or federal employees—to choose a grantee. Unlike under the earlier guidelines, the selection committee was to (1) choose a single grantee to act as fiscal agent for the PSN strategy and (2) determine what portions of the PSN strategy should be funded and to whom after the grant proposal had been approved by BJA. BJA’s guidelines for both fiscal years also included hyperlinks to guidance EOUSA had issued for U.S. Attorneys and their staff earlier in the year. EOUSA’s guidelines were similar to the Attorney General’s guidelines, but they focused specifically on numerous ethics and legal issues that U.S. Attorneys and their staff need to consider in relation to their involvement with the PSN Program. For example, similar to the Attorney General’s guidelines discussed earlier, U.S. Attorneys are expected to express their views if there is any reason why a particular applicant is an inappropriate candidate for PSN funds, but they are prohibited from appearing before the Office of Justice Programs on behalf of an applicant seeking grant monies associated with PSN. In December 2002, EOUSA also issued guidance that outlined the roles and responsibilities of U.S. Attorneys and their staff regarding the Weed and Seed Program. Similar to the Attorney General’s and EOUSA’s PSN ethics guidelines, EOUSA’s Weed and Seed guidance covered topics ranging from working with nonprofit organizations to prohibitions against fundraising and listed what activities U.S. Attorneys and their staff can perform in support of the Weed and Seed Program. In regard to grants, the guidance stated that U.S. Attorneys and their staff may, among other things, serve as the chair or co-chair of the Weed and Seed Steering Committee; certify to the Executive Office for Weed and Seed (EOWS) via a “letter of intent” that a potential Weed and Seed site can receive “official recognition,” that is, that the site has developed a strategy sufficient to make it eligible to apply for a Weed and Seed grant; review Official Recognition applications and prepare a cover letter for submission to EOWS supporting the site and its strategy; review funding applications to ensure technical accuracy and consistency with the Weed and Seed strategy; sign a statement of support for the Weed and Seed strategy; and supervise the site, as chair or co-chair of the steering committee, throughout the life of the initiative. The Weed and Seed guidelines also instructed U.S. Attorneys that, among other things, they may not become advocates for individual grant applicants; communicate with or appear before any federal agency on behalf of a nonprofit organization; or draft grant proposals or applications. Furthermore, U.S. Attorneys were told that they are authorized to assist EOWS in monitoring the performance of the project under the grant to ensure federal grant dollars are not misused, but they are not to act on EOWS’ behalf. The guidelines stated that U.S. Attorneys are to inform EOWS of site implementation problems or irregularities to enable EOWS to take appropriate action.
Ninety-three U.S. Attorneys serve 94 judicial districts (the same U.S. Attorney serves the District of Guam and the District of the Northern Mariana Islands) under the direction of the Attorney General. Among other things, the Attorney General expects U.S. Attorneys to lead or be involved with the community in preventing and controlling crime, including efforts to secure Department of Justice (DOJ) grant funds and work with grantees. This report provides information about the guidance U.S. Attorneys are given in carrying out their responsibilities with regard to DOJ grants. It makes recommendations to assess compliance with guidance and to reduce the potential for conflicts of interest. U.S. Attorneys' grant activities are guided by legal and ethical considerations. General guidelines established by the Attorney General in 1994 and revised in 2001 outline how U.S. Attorneys and their staff can be involved in their community's crime prevention and control efforts, including DOJ grant activities. Last year, DOJ issued guidance in response to U.S. Attorneys' questions about their role in relation to two DOJ grant programs, Project Safe Neighborhoods and Weed and Seed. In addition, through its Executive Office for U.S. Attorneys (EOUSA), DOJ provided training on ethical considerations in dealing with grant applicants and grantees under both grant programs. Although EOUSA has an evaluation program to assess and oversee the overall operations of each U.S. Attorney's Office, the evaluations are not designed to assess whether U.S. Attorneys and their staffs are following the recently established guidelines. Without a mechanism to make this assessment, EOUSA does not have assurance that DOJ guidance is adequately understood, has reached all those who are covered by it, and is correctly applied. In addition, federal regulations and procedures call for systematic financial disclosure reporting to facilitate the review of possible conflicts of interest and ensure the efficient and honest operation of the government. However, while GAO did not identify any instances of conflicts of interest, certain individuals, namely staff in U.S. Attorneys Offices who work with grantees and nonfederal members of committees that are appointed by each U.S. Attorney to, among other things, assess the merits of grant proposals, are not required to disclose whether they are free from actual or apparent conflicts of interest. Based on GAO's work, DOJ officials stated that they would issue a directive to require members of these committees to sign a self-certified conflict of interest statement that is to be held on file subject to DOJ grant monitoring.
The process for developing and issuing RAIs begins with either pre-application activities or the submission of an application (see fig. 1), and is generally consistent across the NRC offices that use RAIs. Pre-application activities occur before NRC receives an application; these activities may include a meeting between the licensee and NRC staff, or communication between parties via phone or e-mail. NRC offices assign each licensee a project manager or license reviewer who is responsible for overseeing the licensing process and coordinating with review staff. Pre-application activities provide an opportunity for licensees to ask clarifying questions of NRC staff and for NRC staff to prepare for the review of an incoming application. Not all license applications or NRC offices include this step, as pre-application activities vary based on the complexity of the application. All licensing actions across NRC offices, however, include the submission of an application. After NRC receives an application, officials may conduct an acceptance review to ensure there is enough information contained in the application to perform a technical review. NRC considers submitted applications “tendered” until the acceptance review is complete. If the acceptance review finds that the application does not contain sufficient information, the application may remain tendered while the applicant submits supplemental information, or it may be denied.

The process of developing and issuing RAIs begins after either the submission of the application or the acceptance review and culminates in a licensing decision. To reach that decision, NRC reviewers conduct both a technical review and a regulatory review and must conclude that what the licensee is proposing provides a reasonable assurance of safety; that conclusion allows reviewers to complete a safety evaluation report. If NRC’s reviewers are able to arrive at a conclusion with the information that the licensee provided in the application, then there is no need for NRC to issue an RAI. However, if there are areas where the information the licensee has submitted appears incomplete, then NRC staff will address these areas by developing and preparing RAIs for management review. After management review, NRC issues the RAIs to licensees. Prior to issuance, NRC staff may also send draft RAIs to the licensee and reach out to the licensee via telephone to ensure that the information that NRC needs is understood and that the RAI language is clear. In such cases, NRC issues the formal RAIs after this outreach. The licensee is then expected to submit responses to the RAIs within a specified period of time, typically 30 and up to 60 days. The NRC review team may develop and issue follow-up questions to the original RAIs—also known as “additional rounds”—if the review staff requires more information than the licensee’s initial response contained. When the NRC review team has the information needed to ensure a fully informed, technically correct, and legally defensible decision, it will either approve or deny the license application. Each NRC office that issues RAIs has its own guidance, and the Office of Nuclear Reactor Regulation, the Office of New Reactors, and some divisions in the Office of Nuclear Material Safety and Safeguards have efforts underway to update guidance intended to improve oversight of RAIs.
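The sequence above can be summarized in a short sketch. The following Python fragment is illustrative only; it models the narrative rather than any actual NRC system, and all names and toy data in it are hypothetical:

    # A minimal sketch of the review flow described above; illustrative only.
    def find_gaps(app):
        # A "gap" is an area where the submitted information appears incomplete.
        return [q for q in app["open_questions"] if q not in app["answered"]]

    def review_application(app, max_rounds=3):
        if not app["sufficient_for_review"]:      # acceptance review
            return "tendered: supplement or deny"
        gaps = find_gaps(app)                     # technical and regulatory review
        rounds = 0
        while gaps and rounds < max_rounds:
            # Staff draft an RAI for each gap; management reviews before issuance.
            print(f"round {rounds + 1}: issuing {len(gaps)} RAI question(s)")
            app["answered"] |= set(gaps)          # licensee responds (typically 30-60 days)
            rounds += 1
            gaps = find_gaps(app)                 # re-review with the new information
        # Approve only with reasonable assurance of safety, i.e., no gaps remain.
        return "approve" if not gaps else "deny"

    app = {"sufficient_for_review": True,
           "open_questions": ["seismic margins", "pump qualification"],
           "answered": set()}
    print(review_application(app))                # one round of RAIs, then "approve"

In this toy run, a single round of RAIs resolves both gaps; a real review may take several rounds, end in denial, or substitute audits or public meetings for some RAIs.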
Guidance for developing and issuing RAIs is generally the same across individual offices that issue them but also reflects each office’s own specific responsibilities and procedures. NRC offices that issue RAIs each have their own guidance, and some offices have been updating their guidance over the past year in an effort to improve the RAI process. This updated guidance includes an increased focus on ensuring staff compliance with the process through managerial review. The Office of Nuclear Reactor Regulation’s guidance on RAIs is contained in an office instruction document for license amendment-review procedures called LIC-101. In April 2016, the Office of Nuclear Reactor Regulation issued an expectations memorandum to staff intended to provide additional guidance and clarity to expectations addressed in existing office guidance and practice. For example, the expectations memorandum requires staff to elevate additional questions on the same topic to divisional management, which discusses the need for an additional round of RAIs before the questions are submitted to a licensee. The memorandum also calls for the branch chief to review the draft safety evaluation report and confirm that the gaps in the draft report align with the RAIs. This is a change from the version of LIC-101 that officials had been using. NRC incorporated changes contained in the April 2016 expectations memorandum into a new edition—version five—of LIC-101 in January 2017. The Office of Nuclear Reactor Regulation’s management also issued a memorandum in August 2016 to all operating reactor licensees that stated, among other things, that staff will actively seek opportunities to conduct an on-site audit or a public meeting in order to reduce the number of rounds of RAIs.

In the Office of New Reactors, office instruction document NRO-REG-101 provides information to guide staff in the processing of RAIs. In 2008, the office also produced a detailed pamphlet on the RAI process—called a job aid—intended to help standardize office practices, ensure proper focus in the reviews, and enhance efficiency. In October 2016, management in the Office of New Reactors issued a memorandum to staff on the effective use of RAIs in new reactor licensing reviews. According to the memorandum, all RAIs in the Office of New Reactors will be reviewed up through division management, and the office director will review samples of RAIs in an effort to stay informed of high-priority issues identified in reviews. This memorandum accompanied an updated RAI job aid to replace the earlier version, as well as two other job aids focused on carrying out audits and confirmatory analysis, in which NRC staff conduct an independent assessment of a licensee’s calculation. The updated RAI job aid contains some modifications to the text of the 2008 version, including, for example, instruction to division management to audit draft RAIs to assure conformance with office expectations for quality.

RAI guidance for the Office of Nuclear Material Safety and Safeguards’ Division of Spent Fuel Management is contained in an instruction document referred to as SFM-3. The division issued a new instruction document in August 2016—referred to as Office Instruction 26—that is intended to provide management expectations and guidance to employees. The document calls for staff to follow the existing division guidance for RAIs and outlines new guidance that staff are required to follow as well.
This new guidance includes preparing, for supervisory review, a draft safety evaluation report showing the regulatory gaps that call for RAIs. The new guidance also calls for notifying management of additional rounds of RAIs and receiving management concurrence before issuance. The office’s Division of Material Safety, State, Tribal and Rulemaking Programs relies on guidance contained in Volume 20 of a multi-volume series of guidance documents on materials licenses called NUREG-1556. Volume 20 provides guidance on administrative licensing procedures and, according to officials, is currently being updated along with the other volumes in the series. Officials told us that Volume 20 is expected to be published as a draft report for comment in spring 2017 and as a final report later that year. According to officials, procedures to process RAIs in the Division of Decommissioning, Uranium Recovery, and Waste Programs were first issued in 2000. These procedures require that a draft safety evaluation report be used to support RAIs, call for RAIs to refer to specific portions of regulation, guidance, or both when issued to licensees, and encourage staff to conduct telephone conferences to discuss technical issues and possible resolutions. The instruction document covers licensing as it applies to all project managers, technical reviewers, and staff within the Division of Decommissioning, Uranium Recovery, and Waste Programs. According to officials, the most recent revision to the procedures, in 2009, did not include changes to the specific procedures guiding the development of RAIs.

In addition to guidance, NRC’s offices have practices in place intended to ensure that management and staff continue to focus on improving RAIs. Officials from each of the NRC offices that issue RAIs said that their management is continually focused on improving RAIs. For example, officials from the Office of New Reactors told us there are plans to assess the revised process for developing and issuing RAIs throughout upcoming license reviews to look for additional opportunities for improvement. In the Office of Nuclear Material Safety and Safeguards, officials told us that RAIs receive attention from the management of all divisions and that office leadership is working with licensee representatives to identify ways to improve the RAI process. Officials also told us that because most of the staff are involved in the process to develop and issue RAIs, it is an essential component of their work; as a result, their work on RAIs factors into their performance reviews. According to officials, NRC’s standards for employee assessments are written at a general level for almost all staff at NRC. Technical staff are evaluated against four standards: planning and implementation, problem solving and analysis, communication, and professional development and organizational effectiveness.

According to NRC officials, the individual guidance developed by each office reflects the office’s own responsibilities and procedures. Guidance may differ across offices when a license application requires review by multiple technical branches; one office may issue RAIs to the licensee as they are completed by a technical branch, while another may wait to issue RAIs until all relevant technical branches have completed the initial review.
For example, the Fuel Cycle Licensing Review Handbook used by the Division of Fuel Cycle Safety, Safeguards and Environmental Review in the Office of Nuclear Material Safety and Safeguards notes that if the same regulatory issue occurs in more than one technical section, the issue should be addressed in a general section rather than multiple times in each section. The handbook also encourages reviewers to issue one set of RAIs, as opposed to multiple sets.

Guidance on the response times given to licensees also differs among NRC offices. For example, the Office of Nuclear Reactor Regulation’s guidance document, LIC-101, calls for licensees to respond to RAIs in 30 days or within a time frame specified by the review team. Updates intended to align LIC-101 with the office’s expectations memorandum include guidance for a default response period of 30 days, an extended response period of 60 days, and approval for a longer response period if the review schedule allows. In contrast, the Office of Nuclear Material Safety and Safeguards’ Fuel Cycle Licensing Review Handbook calls for the project manager to set a response date of 30 to 60 days. The Office of New Reactors’ guidance for RAIs references NRC regulations calling for responses within 30 days of the date of the request, and states that applicants will be encouraged to respond to questions once they have prepared their responses, rather than respond to packages of multiple questions on a set date. The Office of New Reactors’ guidance also requires that officials use e-mail to transmit RAIs.

In addition, guidance on the level of management review given to RAIs varies by NRC office. RAIs issued for combined license applications and early site permits in the Office of New Reactors are automatically sent to branch managers and are reviewed by both division management and the Office of General Counsel. Guidance for the Office of Nuclear Reactor Regulation states that RAIs should be reviewed by branch managers. It also calls for branch managers and staff to discuss the need for a second round of RAIs and whether alternative methods to obtain information—such as a public meeting or an audit—may be more effective and efficient. In the Office of Nuclear Material Safety and Safeguards, the level of management review is determined by the guidance of each division. For example, in the Division of Spent Fuel Management, RAIs must be submitted to branch management for review, and divisional management must be notified of additional rounds of RAIs.

Nevertheless, based on our review, the guidance is generally the same across the offices. Specifically, guidance for the different offices describes similar processes for issuing RAIs, including the reason for issuing an RAI, the procedures undertaken to develop RAIs, and time frames during the process. Guidance for all offices states that RAIs should be used to gain the information needed for making a licensing decision, and, due to recent updates, most office guidance also directs that RAIs be used to fill gaps in a safety evaluation report. The process for developing and issuing an RAI is also similar across all offices and includes (1) the development of RAIs by technical reviewers based on information contained in an application, (2) the review of proposed RAIs by management, (3) the issuance of RAIs to licensees for response, and (4) the incorporation of information received through RAIs into the safety evaluation report and final licensing decision.
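To make the response-period differences concrete, the sketch below encodes them as simple defaults and computes a due date. The encoding is illustrative only; the office and handbook names are real, but the 45-day value for the fuel cycle handbook is an arbitrary midpoint of its 30-to-60-day range chosen for this example, not a figure from NRC guidance:

    # Illustrative encoding of the differing default response periods described above.
    from datetime import date, timedelta

    DEFAULT_RESPONSE_DAYS = {
        "Office of Nuclear Reactor Regulation": 30,   # default 30; extended period is 60
        "Office of New Reactors": 30,                 # regulations call for 30 days
        "Fuel Cycle Licensing Review Handbook": 45,   # project manager sets 30-60 days
    }

    def response_due(office, issued, extended=False):
        days = DEFAULT_RESPONSE_DAYS[office]
        if extended and office == "Office of Nuclear Reactor Regulation":
            days = 60                                 # extended response period
        return issued + timedelta(days=days)

    print(response_due("Office of Nuclear Reactor Regulation", date(2017, 3, 1)))
    # 2017-03-31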
Additionally, guidance across all offices includes direction on setting time frames for issuing RAIs and receiving responses from licensees.

NRC offices do not specifically track the number of RAIs they issue and do not know how many they have issued over the past 5 years; there is no legal requirement for NRC to track the number of RAIs it issues. According to NRC officials and some licensees we interviewed, certain activities and circumstances often elicit RAIs, such as complex licensing actions and activities for which regulations are unclear. An official from the Office of New Reactors estimated that a combined license application could have 1,000 RAIs, while a license amendment request could have few, if any, RAIs. Officials added that the number of RAIs issued in a given review varies depending on the complexity and size of the requested licensing action. Officials also said the number of RAIs per year depends on how many license applications the office receives; it can take 5 years or more to review and make a decision on a combined license application. In contrast, for plants that are already licensed, officials said that NRC typically reviews 20 to 25 license amendments per year. According to officials, the Office of Nuclear Reactor Regulation reviews about 700 licensing actions per year, and officials also estimated that, on average, each licensing action has 5 to 10 RAIs. Officials added that the Office of Nuclear Material Safety and Safeguards reviews about 1,800 license applications or amendments per year, with varying numbers of RAIs per action.

NRC officials cannot say with certainty how many RAIs they have issued over the past 5 years, in part because the current internal tracking systems used by the Office of Nuclear Reactor Regulation and the Office of Nuclear Material Safety and Safeguards do not track the number of RAIs. Officials in the Office of Nuclear Reactor Regulation told us that they do not track the number of RAIs because RAIs are only one component of the broader licensing process. Instead, officials said, they focus more on whether the office is carrying out licensing activities in an efficient, effective manner. The Office of New Reactors has an internal tracking system, called eRAI, which is specifically configured to manage RAIs and is capable of tracking the number of RAIs per year. However, according to an official, the office does not use eRAI to track the number of RAIs. Instead, the Office of New Reactors uses eRAI to monitor RAIs associated with applications that can be up to 12,000 pages long, identify related questions, and track RAIs by regulatory issue area.

Some NRC offices have been working to update their internal tracking systems for licensing actions. These updates are intended to, among other things, allow officials to better track milestone dates in the licensing action process, such as the date that an RAI response is due. For example, the Office of Nuclear Reactor Regulation is changing to a new system called the Replacement Reactor Program System, which tracks major milestones within each of the licensing reviews. Unlike the previous system, the new system can track multiple rounds of RAIs.
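The kind of record such trackers keep can be sketched briefly. The fragment below is hypothetical; eRAI's actual schema is not public, so the field names and data are illustrative only. It also shows one reason RAI counts are ambiguous: an "RAI" can mean either a letter or one of the several questions a letter contains.

    # Hypothetical sketch of an RAI-tracking record; field names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class RaiLetter:
        letter_id: str
        issue_area: str     # eRAI tracks RAIs by regulatory issue area
        round_num: int      # 1 = initial round; >1 = an "additional round"
        questions: list     # a single letter may contain several questions

    letters = [
        RaiLetter("L-001", "instrumentation and controls", 1, ["Q1", "Q2", "Q3"]),
        RaiLetter("L-002", "instrumentation and controls", 2, ["Q4"]),
    ]
    print(len(letters))                             # 2, counting per letter
    print(sum(len(l.questions) for l in letters))   # 4, counting per question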
In addition, according to NRC officials, the Division of Spent Fuel Management in the Office of Nuclear Material Safety and Safeguards is moving to a web-based tracking system from a system that only tracked milestone dates. The new system is intended to track RAIs and to help identify issues early in the process that may influence the timeliness of the review. Elsewhere in the Office of Nuclear Material Safety and Safeguards, officials in the Division of Fuel Cycle Safety, Safeguards and Environmental Review told us that they are upgrading their tracking system and will use the same one as the Division of Spent Fuel Management. Further, the office’s Division of Material Safety, State, Tribal, and Rulemaking Programs is also enhancing its tracking system by streamlining it so that staff can issue licensing actions directly from the system.

NRC officials told us that tracking the number of RAIs is challenging and may not reflect the role that RAIs play in the licensing process. According to officials, counting the number of RAIs may be challenging because different reviewers can refer to an RAI as a single question or as a letter to a licensee containing several questions. Officials also told us that the number of RAIs does not capture the variance in size and complexity of RAIs; for example, NRC may request simple editorial changes that require little effort and time on the part of the licensee, or RAIs may request additional technical analyses that require more effort and time to address.

NRC officials told us that receiving RAIs as part of a licensing action is not unusual, and officials and licensees we interviewed said that RAIs are often issued for the following activities and circumstances: complex licensing actions, activities for which regulations are unclear, new activities, and initial applications that do not contain adequate information or detail.

Complex licensing actions: NRC officials told us that licensing actions and the associated RAIs vary greatly in complexity, and the number of RAIs issued may vary depending on the complexity of the review. Some actions are simple and may take 40 to 80 hours to review, while others are more complex and may take more than 5,000 hours. Many licensees we interviewed stated that they would expect to receive more RAIs for more complex licensing actions. For example, a licensee we interviewed told us that most of the RAIs the company receives are related to technical specifications, which are NRC’s standardized requirements for its approved reactor types. Officials from this licensee said they may receive additional questions for technical specifications and issues that are plant-specific, such as differences in equipment when compared to other plants with the same reactor type, as well as requests for more detailed drawings and information about the plant’s equipment. Conversely, another licensee told us that a simple license amendment request for a name change on a license did not elicit any RAIs.

Unclear regulatory guidance: Some licensees we interviewed also said that they would expect to receive more RAIs for activities for which regulations are unclear and may require increased coordination between NRC and the licensee. One licensee described an instance where guidelines pertaining to the licensing action were not clear, and it took several years and additional RAIs to try to reach agreement with NRC.
NRC officials told us that licensees are most likely to receive RAIs in cases where they request an exception to regulatory guidance. NRC officials and licensees said that a request for an exception can occur, for example, when a licensee asks NRC for a license to use a construction material that is not referenced in regulatory guidance. Further, NRC officials said that applications for new reactors typically elicit RAIs, particularly when technology proposed by licensees does not align with regulations, such as the shift from analog instrumentation and controls used in most operating reactors to digital instrumentation and controls proposed for new reactors.

First-of-its-kind activities: NRC officials told us that a new type of licensing action may require more RAIs. Half of the licensees we interviewed told us that they received RAIs when requesting a license for an activity that is the first of its kind or is setting a precedent. For example, a licensee stated that the company received RAIs for a license amendment that was one of the first submitted to NRC for a particular activity.

Quality of the application: Officials told us that the number of RAIs associated with each license application often depends on the quality of the application. For example, a license application or response to an RAI that does not contain adequate information can result in more than one round of RAIs. Conversely, an application that is comprehensive and addresses all of the requirements outlined in NRC guidance is less likely to receive RAIs, and if RAIs are issued, the application is less likely to receive more than one round. Some licensees told us that they may receive RAIs for issues that may have been addressed in another application or, as one licensee stated, were otherwise obvious; licensees noted that this was likely because NRC wanted that information included as part of the docket.

The nongeneralizable sample of licensing actions that we reviewed reflected several different types of licensing actions and contained RAIs that varied in format. We reviewed three licensing actions that contained RAIs—each one representing one of the three offices that issue RAIs. One action was for a license renewal, another was for a license amendment request, and the third was for a relief request. One of the licensing actions received four RAIs sent in two separate e-mails several weeks apart. NRC officials described these as “official” RAIs, which they followed with an e-mail identifying apparent editorial errors in the application. In another example, NRC sent a licensee two draft RAIs in advance of the formal RAI submission. With regard to the types of information that NRC requested in its RAIs, one RAI asked for clarification on technical specifications the licensee used to support its request. Another asked for specific information about the equipment used, including name and model number, as well as details concerning maintenance. In the third example, NRC asked for clarification about testing completed for a particular piece of equipment in order to approve a license renewal. Because this was a nongeneralizable sample, the results cannot be generalized to all licensing actions, but they provide illustrative examples of the types of information included in RAIs.

Many of the licensees we interviewed were generally satisfied with the current process to develop and issue RAIs and identified certain strengths.
NRC officials and licensees also identified two common weaknesses in the process to develop and issue RAIs, weaknesses that NRC has made recent efforts to address. Many of the licensees we interviewed expressed satisfaction with NRC’s current process to develop and issue RAIs and acknowledged the role of RAIs in the licensing process. Some said that they viewed RAIs as a natural part of interacting with a regulator. For example, one licensee said that RAIs are needed to allow for formal communication between NRC and licensees on issues that may arise again in future licensing actions. Some licensees said that their experience with NRC had been positive, and one stated that the RAI process worked well for completing licensing actions.

Licensees identified NRC guidance as a strength of the RAI process. Most licensees we interviewed told us that they found NRC guidance to be helpful; such guidance includes regulatory documents, procedural documents, and memorandums. For example, the majority of the licensees we interviewed that worked with the Office of Nuclear Reactor Regulation said that the office’s April 2016 expectations memorandum was a positive step by the agency and an improvement in the RAI process. Specifically, one licensee told us that the policy of ensuring that RAIs ask for information needed to fill a gap in the safety evaluation report—as outlined in the memorandum—was an appropriate procedure. Some licensees also said that they found it particularly useful when NRC reviewers identified for them specific passages of a guidance document relevant to the licensing action or RAI.

Licensees we interviewed also identified NRC’s openness to communication and engagement as a strength of the RAI process. Most licensees we interviewed said that communication with NRC staff during the process to develop and issue RAIs was helpful, including pre-application meetings, informal interactions via phone or e-mail, and coordination with project managers. As mentioned above, pre-application meetings provide an opportunity for the licensee and NRC to clarify potential issues or questions before the initial license application is submitted and RAIs are issued. Of those licensees we interviewed who participated in a pre-application meeting, the majority said that the meeting helped to either resolve or clarify issues before the acceptance review. Some licensees said that pre-application meetings were particularly helpful when NRC staff assigned to the review participated, with one licensee stating that it was critical for the staff member who develops RAIs to be present. A licensee also stated that participating in a pre-application meeting significantly reduced the number of RAIs issued later in the process. Additionally, some licensees told us that informal interactions via phone or e-mail with NRC staff also helped to resolve issues quickly, as opposed to clarifying or resolving issues through formal correspondence. Similarly, several licensees noted that the active engagement of project managers in the review process improved the efficiency of the review and the quality of RAI questions. For example, officials from one licensee said that in recent years NRC’s project manager has e-mailed draft RAIs to them, which allows the licensee to review the drafts in advance, ask clarifying questions, and propose response times. In another case, a licensee told us that a project manager included divisional management in a conference call to discuss RAIs, which resulted in NRC withdrawing some RAIs.
In addition, several licensees we interviewed noted NRC’s responsiveness to industry operational issues and time constraints in the review process as a strength. Several licensees described instances in which operational issues or time constraints required flexibility from NRC, and they told us that NRC worked with them to ensure uninterrupted operation or service. Some licensees told us that NRC extended the response time required for RAIs when licensees asked for additional time. In another instance, a licensee described a case where NRC expedited the review process to prevent a disruption in patient medical care that relies on radiological material. An industry interest group representative we interviewed told us that both industry and NRC should take steps to ensure that the recent improvements to the process to develop and issue RAIs are maintained going forward.

Licensees and NRC officials whom we interviewed identified weaknesses in the RAI process, including two commonly mentioned ones: (1) a gap between NRC’s expectations and licensees’ understanding of what should be included in a license application and (2) staff departure from guidance that leads to RAI questions that appear to be redundant or beyond the scope of the review.

Gap in expectations and understanding: NRC officials and licensees whom we interviewed told us that a gap between NRC’s expectations and licensees’ understanding of license application content can be a weakness of the RAI process. Both NRC officials and licensees stated that inconsistencies may exist between NRC’s expectations and licensees’ understanding of what should be included in a licensing application, especially in cases of complex or new activities. According to NRC officials, such inconsistencies can lead to reviewers’ using RAIs to gather the information needed to make a licensing decision. NRC officials said that varying levels of understanding regarding expectations may result in confusion for licensees and may incentivize them to exert fewer resources when developing an initial application. According to one NRC official, licensees may submit an application containing just enough technical information to pass the acceptance review with the understanding that NRC will develop RAIs to address unresolved issues in the application. Officials added that the standard and level of detail required for issuing a licensing action are more stringent than those for an acceptance review. Half of the licensees told us that expectations have become clearer in the last several years as a result of increased communication with NRC. However, several licensees we interviewed identified unclear or inconsistent expectations as a current concern. For example, one licensee described a case where a license amendment received RAIs on a new activity for which NRC did not have permanent guidance in place. The licensee rescinded the license amendment request rather than expend the resources needed to answer RAIs according to NRC’s interim guidance.

The Office of New Reactors, the Office of Nuclear Reactor Regulation, and the Division of Spent Fuel Management in the Office of Nuclear Material Safety and Safeguards have made recent efforts to address inconsistencies between NRC’s expectations and licensees’ understanding by emphasizing greater communication between review staff and licensees.
The Office of New Reactors placed more emphasis on the pre-application period, in which the NRC review team works with licensees to resolve questions and potential issues that otherwise may necessitate formal RAIs. In addition, through its April 2016 expectations memorandum, the Office of Nuclear Reactor Regulation’s management is encouraging project managers and review staff to engage in increased communication with licensees to resolve questions, in addition to placing increasing emphasis on the acceptance review. Division of Spent Fuel Management leadership has also made recent efforts to encourage more frequent conversations through updated guidance. Some licensees we interviewed recognized NRC’s efforts, and one licensee told us that NRC officials have recently been more receptive to discussing RAIs over e-mail, a practice that has helped to make the process more efficient. It is too soon to tell whether these initiatives will address the gap in expectations between NRC and licensees in the long term.

Staff departure from guidance: NRC officials and licensees both told us that some staff may depart from guidance by issuing redundant or unrelated RAIs, which may require additional time and resources for the licensee to address. According to officials and licensees, an RAI is redundant if the information requested is contained in, or could be inferred from, information contained in the original license application, other correspondence, or a response to a previous RAI. Likewise, an RAI is unrelated to the application if the information requested is not necessary for making a regulatory decision or filling a gap in the safety evaluation report. NRC officials said that ensuring staff adherence to internal guidance regarding appropriate RAIs can be challenging, and many of the licensees we interviewed identified the influence of individual staff reviewers as a weakness of the process to develop and issue RAIs. Half of the licensees we interviewed said that they noticed questions that were either redundant or appeared unrelated to a regulatory requirement but may have been intended to satisfy the individual reviewer’s curiosity. Some licensees also said that inexperienced reviewers may ask redundant questions or revisit issues that have already been resolved and codified in the licensing document through prior communication with NRC. According to licensees, redundant or out-of-scope RAIs create additional work for them, and most of the licensees interviewed identified the impact of RAIs on resources as a related weakness of the RAI process. For example, one licensee reported receiving questions outside the scope of the license application that required additional analyses and work—nearly doubling the length of the review and costing the licensee almost twice the amount in fees budgeted for the review.

In an effort to mitigate concern over the influence of individual staff reviewers, the Office of New Reactors, the Office of Nuclear Reactor Regulation, and the Division of Spent Fuel Management in the Office of Nuclear Material Safety and Safeguards recently updated guidance and introduced more management review of RAIs. As mentioned above, updated NRC guidance includes an increased focus on ensuring staff compliance with the RAI process. For example, the offices and division cited above have recently updated internal guidance to clarify the expectation that staff reviewers use RAIs to fill gaps in a draft safety evaluation report.
Guidance updated by all three offices also calls for elevating questions at least to divisional management: in the Office of Nuclear Reactor Regulation and the Division of Spent Fuel Management, all second-round RAIs require division management approval; and in the Office of New Reactors, all RAIs require divisional management approval and the office director reviews samples of RAIs on high-priority issues. Half of the licensees we interviewed said that these efforts represent an improvement in the RAI process. Several licensees specifically described the expectations memorandum issued by the Office of Nuclear Reactor Regulation as an improvement, and officials from one licensee noted that they have seen progress, with NRC’s management intervening when potentially unnecessary questions are identified. Because these efforts were made recently, it is too early to assess the effectiveness of such approaches to mitigating the influence of individual staff reviewers.

We provided a draft of this product to NRC for comment. NRC generally agreed with our findings and provided technical comments, which we incorporated as appropriate. NRC’s comments are reprinted in appendix I. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees and the Chairman of the Nuclear Regulatory Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the individual named above, Hilary Benedict (Assistant Director), Bridget Grimes, and Rachel Rhodes made key contributions to this report. Tim Bober, Kevin Bray, Antoinette Capaccio, Cindy Gilbert, Timothy M. Persons, and Dan Royer also made important contributions.
NRC issues RAIs to obtain the additional information about licensing requests needed to ensure that officials can make a fully informed, technically correct, and legally defensible regulatory decision. RAIs are necessary when the information was not included in an applicant's initial submission, is not contained in any other docketed correspondence, or cannot reasonably be inferred from the information available to agency staff. NRC's use of RAIs has come under scrutiny in the past. For example, NRC's Inspector General, in a 2015 report, cited concerns about RAIs, including the amount of time it took to complete the RAI process and the resources required to do so. GAO was asked to review how NRC uses RAIs. This report examines (1) NRC's guidance for developing and issuing RAIs and how it differs across offices; (2) how many RAIs NRC has issued over the past 5 years and the kinds of activities that elicit RAIs; and (3) strengths and weaknesses of NRC's processes to develop RAIs identified by NRC and licensees and the actions NRC is taking to address concerns. GAO examined agency guidance documents and selected licensing actions containing RAIs. GAO interviewed NRC officials and selected licensees. GAO randomly selected licensing actions and licensees from a sample of recent licensing actions that included cases from each of NRC's RAI-issuing offices.

At the Nuclear Regulatory Commission (NRC), individual offices that issue requests for additional information (RAI) each have their own guidance, which is generally the same across the offices. NRC offices have some efforts underway to update their guidance. These efforts are intended to improve oversight of RAIs and include an increased focus on staff compliance through managerial review. For example, one of the offices that issues RAIs calls for management to discuss the need to send a licensee additional questions on the same topic before doing so. NRC offices that issue RAIs do not specifically track the number of RAIs that they have issued and do not have a comprehensive accounting for the last 5 years, although one office has a system capable of tracking the number of RAIs. Information from NRC officials and licensees GAO interviewed suggests that certain activities and circumstances often elicit RAIs. There is no legal requirement for the agency to track the number of RAIs; however, offices are updating their internal tracking systems in order to improve information on their licensing activities. Receiving RAIs is not unusual, particularly for certain activities such as complex licensing actions and activities for which regulations are unclear, according to officials. In such cases, increased coordination between NRC and the licensee may be required to resolve certain issues. Licensees GAO interviewed were generally satisfied with the RAI process, identifying strengths and two common weaknesses, and NRC has made recent efforts intended to address these weaknesses. Some licensees noted that they see RAIs as a natural part of interacting with a regulator and identified NRC's openness to communication and engagement as a strength of the RAI process. Two common weaknesses that licensees cited are a gap between NRC's expectations and licensees' understanding of what to include in their applications, and staff departure from guidance. NRC offices have made recent efforts to address these issues.
For example, to address inconsistencies between NRC's expectations and licensees' understanding, NRC offices are emphasizing greater communication between review staff and licensees. GAO is not making any recommendations. NRC generally agreed with GAO's findings.
Nursing homes provide a residential setting and a range of health care services for individuals who can no longer care for themselves because of physical or mental limitations. According to the most recent National Nursing Home Survey (NNHS), approximately 90 percent of nursing home residents were age 65 and older, and more than two-thirds were female. ICFs-MR are intended to provide a residential setting for treatment, rehabilitation, and supervision of people who have mental retardation or other disabilities, such as seizure disorders or behavior problems. In 2005, approximately 85 percent of ICF-MR residents were between the ages of 22 and 65; only 7 percent of the total resident population was over 65 years of age. In addition, unlike the nursing home population, the majority of ICF-MR residents were male.

Approximately 1.5 million individuals lived in Medicaid- and Medicare-certified nursing homes and ICFs-MR in 2005. In 2003, federal Medicaid and Medicare funds accounted for approximately 33 percent of total spending on nursing homes, and the remaining funds came from a combination of state, local, and private sources. In the same year, ICFs-MR, which are funded almost exclusively by Medicaid, received about 58 percent of their total funding from federal Medicaid funds and the remainder from state Medicaid dollars. Medicaid, a joint federal-state program that finances health care coverage for certain categories of low-income individuals, is the primary payment source for long-term care services for older people with low incomes and limited assets. Medicaid pays for an array of long-term care services, including services to assist people with activities of daily living such as eating, dressing, bathing, and using the bathroom. In contrast, Medicare, which covers a variety of health care services and items for individuals who are 65 or older, have end-stage renal disease, or are disabled, does not pay for most long-term care services; it covers short-term skilled nursing care following a hospital stay.

To qualify for Medicare or Medicaid funding, these long-term care facilities must meet certain federal requirements. For example, they are required to conduct resident assessments that examine areas such as demographic information, cognition, mood and behavior, psychosocial well-being, health conditions, and physical functioning. In nursing homes, the Preadmission Screening and Resident Review, which is required by federal law to determine whether a potential resident needs nursing home care, includes an assessment of mental capacity. Although federal regulations require that a resident assessment be conducted prior to admission to ICFs-MR, there is no standardized assessment tool, and admission can be based on a prior assessment by an outside source. Individuals being admitted to an ICF-MR generally meet certain criteria, including having an intellectual functioning level below 70 to 75 and significant limitations in two or more adaptive skill areas. In addition, at a minimum, resident assessments are conducted annually by nursing home and ICF-MR facility staff after admission in order to continually address a resident’s needs. For each resident for whom they receive Medicare or Medicaid funding, these long-term care facilities are also required to develop a plan of care that addresses the resident’s medical, social, and other needs, as determined by the resident assessment. Long-term care facilities are also required to protect residents’ rights and privacy.
In addition, the Privacy Rule issued under HIPAA provides individuals with protections regarding the confidentiality of their health information and restricts the use and disclosure of individuals’ health information by health care providers, including nursing homes and ICFs-MR. As a condition of Medicare or Medicaid participation, long-term care facilities must report incidents of abuse according to state requirements. CMS defines abuse as the willful infliction of injury, unreasonable confinement, intimidation, or punishment with resulting physical harm, pain, or mental anguish. Physical abuse generally includes hitting, slapping, and pushing; sexual abuse is nonconsensual sexual contact or nonconsensual sexual involvement of any kind. Although the commission of a sexual offense may result in an incident of abuse, a uniform definition of sexual offense does not exist, and states define sexual offenses in their respective criminal codes. Some examples of sexual offenses include rape, sexual assault, and incest. In some states, related sexual offenses include child pornography and willful indecent exposure in public.

Federal statute established the Jacob Wetterling Crimes Against Children and Sex Offender Registration Program in 1994. The statute required every state to have a program to register sex offenders by September 1997 and required the Attorney General to provide states with guidelines for developing their programs. At a minimum, an individual convicted of a criminal offense against a minor or of a sexually violent offense must register a current address for 10 years following his/her release from prison or placement on parole, supervised release, or probation. In addition, an individual who has one or more prior sexual offense convictions, has been convicted of an aggravated offense, or is determined to be a sexually violent predator must register a current address for life. States may impose more stringent registration requirements on a broader class of offenders than required by federal law. The law also mandates that registered sex offenders verify their addresses at least annually and that registered offenders classified as sexually violent predators verify their addresses quarterly. Registered sex offenders must notify local law enforcement officials within their state of address changes, and those who move to a different state must comply with registration requirements in the new state. States that do not comply with the Wetterling Program requirements are subject to a 10 percent reduction in their Byrne Formula Grant law enforcement funding.

The statute establishing the Wetterling Program was amended twice in 1996. The first amendment, Megan’s Law, required states to release information about registered sex offenders when necessary to protect the public, but it did not specify how states must provide notification. The second amendment, the Pam Lychner Sexual Offender Tracking and Identification Act of 1996, mandated the FBI’s creation of a national database now known as the NSOR. According to the FBI, this national database combines sex offender registries from all of the states to help law enforcement officials track sex offenders on a national level.
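The federal registration minimums described above reduce to a small decision rule, sketched below in Python for illustration only; the input flags are simplifications, not legal categories from any real registry system, and states may impose stricter requirements:

    # Sketch of the federal minimums described above; illustrative only.
    def wetterling_minimums(prior_sex_offense_convictions,
                            aggravated_offense,
                            sexually_violent_predator):
        lifetime = (prior_sex_offense_convictions >= 1
                    or aggravated_offense
                    or sexually_violent_predator)
        duration = "life" if lifetime else "10 years after release or supervision"
        # All registrants verify addresses at least annually; predators, quarterly.
        verification = "quarterly" if sexually_violent_predator else "at least annually"
        return duration, verification

    print(wetterling_minimums(0, False, False))  # 10-year minimum, annual verification
    print(wetterling_minimums(1, False, False))  # lifetime registration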
Research on sex offender recidivism suggests that the majority of individuals previously convicted of sex offenses do not commit additional sex offenses, with one such study estimating that about 14 percent had a new sex offense charge or conviction within 5 years of their release from prison, increasing to 27 percent after 20 years. At the same time, however, research also indicates that sex offenses are underreported. While it is difficult to predict re-offense for any individual, certain factors such as sexual deviancy, antisocial orientation, and an adverse family environment may contribute to a higher likelihood of a re-offense. Those who have strong social supports, such as a supportive family and a stable job, may be less likely to re-offend. In addition, the likelihood of re-offending may diminish as the sex offender ages.

Federal law requires that registered sex offenders be tracked on a national and state level; however, parolees are generally monitored and supervised by each state. Individuals released from prison prior to the completion of their sentences may be subject to certain conditions and supervised as parolees for a specified period. Typically, the length of time states set for parole is 1 to 3 years, although certain crimes and sentencing situations may require more or less time. An individual can be convicted of a range of crimes from fraud or forgery to murder and be eligible for parole. As of December 2003, about 775,000 adults were on parole from federal and state prisons nationwide.

Using the NSOR, we identified 683 registered sex offenders living in long-term care facilities during 2005. However, this understates the national prevalence of convicted sex offenders residing in long-term care facilities for a number of reasons. While the NSOR is a national database that compiles information about registered sex offenders submitted by all 50 states and the District of Columbia, it does not include convicted sex offenders who are not on state registries, including those sex offenders who are required by law to register but choose not to comply. It also does not include all registered sex offenders, as states have had varying degrees of difficulty submitting their records to the NSOR because of technical problems, lack of resources, or inability to provide the required FBI number for certain offenders. Because there is no national data source on parolees that includes address information, we also obtained parolee databases from the eight states we reviewed and identified 204 offenders on parole for non-sex offenses living in long-term care facilities. The risk of abuse within nursing homes or ICFs-MR by residents with prior convictions is unclear because states we reviewed do not report the prior convictions of residents who commit abuse; however, facility administrators we interviewed more frequently expressed concern about the potential for abuse by residents with cognitive impairments or mental illness than by residents with prior convictions.

Using the NSOR, we identified 683 registered sex offenders living in long-term care facilities during 2005, representing about 0.05 percent of the total 1.5 million residents of nursing homes and ICFs-MR. (See app. II.) Of the approximately 16,000 nursing homes and 6,600 ICFs-MR that participate in Medicare or Medicaid, we identified 3 percent of nursing homes (470) and 0.7 percent of ICFs-MR (46) as housing at least 1 registered sex offender during 2005.
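These percentages follow directly from the reported counts; a minimal check, using the approximate facility totals cited in this report:

    # Reproducing the prevalence figures above from the reported counts.
    offenders, residents = 683, 1_500_000
    print(f"{100 * offenders / residents:.2f}%")              # 0.05% of all residents

    nh_with_offender, nursing_homes = 470, 16_000             # approximate totals
    icf_with_offender, icfs_mr = 46, 6_600
    print(f"{100 * nh_with_offender / nursing_homes:.0f}%")   # 3% of nursing homes
    print(f"{100 * icf_with_offender / icfs_mr:.1f}%")        # 0.7% of ICFs-MR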
About 88 percent of the registered sex offenders we identified resided in nursing homes, while the remaining 12 percent resided in ICFs-MR. Sex offenders living in nursing homes were younger than the general nursing home population, while those in ICFs-MR had an age distribution similar to that of the general ICF-MR population. About 57 percent of registered sex offenders we identified as living in nursing homes were under age 65, compared to about 10 percent of the general nursing home population, and 30 percent were under age 50. Most (95 percent) of the sex offenders identified as living in ICFs-MR were under age 65, which is similar to the age distribution in the general population of these facilities. Similarly, nearly all (99 percent) of the registered sex offenders we identified as residing in long-term care facilities were male, which is consistent with the gender of registered sex offenders overall. Among registered sex offenders for whom we had information on the nature of their crimes, the majority of convictions were for rape and sexual assault of adults and minors.

The number of offenders that we identified as living in long-term care facilities is understated because of shortcomings in the data. Specifically, although national in scope, the NSOR does not include certain convicted sex offenders who are not on state registries because the registries did not exist at the time they were convicted or released from prison or because their registration period has expired. The NSOR also does not include all of the records of sex offenders who are registered in the states’ registries because some states have had difficulty submitting their records to the NSOR. NSOR records for convicted sex offenders who chose not to comply with registration requirements may be incomplete or missing. In addition, since no national data source for parolees exists that includes parolee residence information, our data only include numbers of parolees from the eight states we reviewed.

While some states already had sex offender registries in place, the Wetterling Program statute mandated that all states implement a registry by September 1997. Most state registries include only those sex offenders convicted or released from prison after a specified date, generally after 1990. Consequently, those convicted or released before the specified date were not required to register and therefore are not included in our analysis. This limitation may help explain the age distribution of registered sex offenders we identified as living in nursing homes. While the majority of offenders identified in nursing homes were under the age of 65, this could be a consequence of the limited period that sex offender registries have existed rather than an accurate reflection of the age distribution of convicted sex offenders living in nursing homes, since many elderly sex offenders would not be registered if their convictions predated the implementation of their state’s registry.

Our count of nearly 700 registered sex offenders identified through the NSOR database as living in long-term care facilities also does not include convicted sex offenders whose registration period expired or whose information was missing because they did not comply with registration requirements. While noncompliance is difficult to track, four of the reviewed states provided us with estimated noncompliance rates ranging from 4.5 percent to 25 percent.
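Those noncompliance estimates give a rough sense of how much the count of 683 may be understated. The back-of-the-envelope calculation below is illustrative only; it assumes, purely for the sake of the arithmetic, that offenders in long-term care facilities fail to register at the same rates as offenders overall:

    # Illustrative adjustment: observed count / (1 - noncompliance rate).
    observed = 683
    for rate in (0.045, 0.25):        # state-estimated noncompliance rates cited above
        print(round(observed / (1 - rate)))   # about 715 and 911, respectively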
Similarly, the advocacy organization Parents for Megan’s Law released estimates in 2003 that 24 percent of sex offenders nationally fail to comply with registration requirements. Sex offenders may fail to comply for several reasons, including a lack of understanding about registration requirements or a desire to avoid the possible negative consequences experienced by some registered sex offenders, such as the loss of a job, harassment, social stigmatization, or physical assault.

We found a range of submission rates by state registries to the NSOR, which suggests that the NSOR may be missing a portion of sex offenders who are registered in states. Registry administrators from the 20 states that responded to our e-mail questionnaire estimated their submission rates to be from 46 percent to 100 percent of the total number of records in their state registries. Most reported that at least 80 percent of their records were submitted, while 2 states reported that they were only able to submit about half of their records. We also compared the total number of sex offenders included in the state registries to the number included in the NSOR for 7 of the 8 states we reviewed. (See table 1.) The NSOR included about 57 percent of sex offenders registered in these states, with submission rates ranging from 1 to 83 percent. For example, Utah had submitted about 1 percent of its registry to the NSOR. While the state intends to fully submit its registry to the NSOR in the future, it currently lacks the resources to do so, according to a state official. However, the FBI considers state participation in the national database to be in compliance with federal requirements if a state has submitted at least one record to the NSOR. A DOJ official confirmed that all states have been determined to be in compliance with NSOR submission requirements, based on FBI notifications regarding each state’s participation in the NSOR, and was not aware of any state that had been penalized with the loss of Byrne Formula Grant law enforcement funding solely on the basis of the extent of state NSOR participation.

Registry administrators from among the 8 states we reviewed and the 20 additional states that responded to our e-mail questionnaire reported that several factors complicate their efforts to submit complete sex offender registries to the NSOR. For example, registry administrators frequently responded that they were not able to submit records of registered sex offenders who did not have FBI numbers. FBI numbers are required by the FBI for all records submitted to the NSOR to ensure positive identification of individuals for the purposes of employment background checks. States may lack FBI numbers for several types of offenders, such as juvenile sex offenders, who do not receive FBI numbers, or sex offenders from other states. If a sex offender comes from out of state, his/her FBI number can be obtained from the state where the conviction occurred, but doing so can be labor-intensive if the other state does not cooperate or never submitted fingerprint information to establish the offender’s FBI number. Registry administrators in two of the states we reviewed estimated that in recent years about 30 percent of the records they submitted to the NSOR were rejected as incomplete.
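The submission-rate comparison above is simple arithmetic over two counts per state. The sketch below illustrates it with placeholder numbers; they are not the actual state figures, which appear in table 1:

    # Illustrative submission-rate comparison; counts are placeholders, not real data.
    registry_total = {"State A": 10_000, "State B": 4_000}  # offenders on state registry
    nsor_records   = {"State A":  8_300, "State B":    40}  # records present in the NSOR

    for state, total in registry_total.items():
        print(f"{state}: {100 * nsor_records[state] / total:.0f}% submitted")

    overall = 100 * sum(nsor_records.values()) / sum(registry_total.values())
    print(f"overall: {overall:.0f}%")  # a weighted rate, like the ~57 percent figure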
If states are unable to verify an offender's address information, the offender should be considered noncompliant, and the NSOR record will not be up-to-date or reflect current address information.

Some states have also experienced technical difficulties submitting their registry records to the NSOR. An FBI official told us that states that had registries prior to the creation of the NSOR had difficulty reprogramming their registry databases to conform to the NSOR formats. One of the states we reviewed did not realize until 2005 that only a fraction of its records were being submitted to the NSOR because of a technical problem, and it is currently submitting records on computer disks while making plans to implement a system for automatic electronic submission of its full sex offender registry to the NSOR.

Although the FBI does not track states' submission rates to the NSOR, it does periodically assess state participation in the NSOR and provides assistance to help states improve the comprehensiveness and accuracy of their registries. In addition to the requirement that states annually validate registry records, we were informed that the FBI conducts triennial audits of states' participation in the NSOR. During fiscal year 2005, the FBI also conducted a fiscal audit, assessed states' level of participation in the NSOR, and requested information from states about what assistance they need to improve their participation. DOJ provides grants to help states improve their law enforcement information systems, which states have used to enhance their sex offender registries, such as by enabling the automatic transmission of records to the NSOR and by monitoring data accuracy. DOJ informed us that it also provides training and technical assistance to states and that the FBI has an advisory group that is reviewing issues such as state submission of data to the NSOR and the process for the verification and validation of NSOR records.

Using data provided by each of the eight states we reviewed, we identified 204 parolees as residents of long-term care facilities. (See table 2.) Because there is no national source of data on parolees that includes their home address information, our numbers are limited to the eight states and cannot be generalized as representative of all states. Among parolees for whom we had information on the nature of their crimes, the convictions were most commonly for burglary, assault, murder, or drug-related offenses.

Long-term care facilities participating in Medicare or Medicaid are required to report all allegations of abuse and neglect to officials in accordance with applicable state law and, in the case of nursing homes, to the state. This requirement encompasses the reporting of abuse committed by staff or residents. In the eight states we reviewed, long-term care facilities stratify reported abuse into categories, such as physical, sexual, financial, or resident-to-resident abuse; however, they do not report whether residents alleged to have caused abuse have prior convictions. The National Ombudsman Reporting System (NORS) also collects nursing home abuse data on a national level and includes various categories of abuse, such as incidents that occur between residents and incidents perpetrated by nursing home staff. Similar to the states we reviewed, NORS does not track whether residents alleged to have abused other residents have prior convictions.
Because data are not available nationally or in our reviewed states on abuse perpetrated specifically by residents who have prior convictions, the potential risk for abuse by offenders residing in long-term care facilities cannot be accurately estimated. However, based on a number of factors, including the small percentage of facilities identified as housing offenders, the risk may not be widespread. For example, offenders residing in nursing homes or ICFs-MR who have significant physical limitations may be unable to commit abuse against other residents. In addition, research on recidivism by sex offenders suggests that most do not re-offend and that the risk of re-offending may decline with age.

In our interviews with officials of long-term care facilities, state nursing home associations, and state ombudsmen for long-term care, concern was more frequently expressed about the behavior and potential for abuse by cognitively impaired and mentally ill residents than about abuse by residents with prior convictions. Several of those interviewed mentioned that they were concerned about the potential for abuse by residents with Alzheimer's disease or dementia, conditions in which a resident's behavior may change significantly after admission and the original assessment. The administrator of a facility in Ohio that specializes in residents with behavioral issues and that has housed multiple offenders said that he has had fewer problems with his residents who are identified sex offenders than with other residents who have behavioral problems. Several sources, including ombudsmen, a researcher, and a nursing home advocate, suggested that a resident's behavioral issues are sometimes not fully disclosed to a nursing home upon admission or that some nursing homes with low occupancy may be more likely than others to accept mentally ill patients in order to increase their occupancy levels.

Long-term care facility officials we interviewed, some of whom knew they have had offenders as residents and some of whom spoke hypothetically, said they would use their judgment to determine whether a registered sex offender or parolee could appropriately be cared for in their facilities. Several long-term care facility administrators told us that if they discovered a resident was an offender, they would evaluate the potential risk posed by that individual on a case-by-case basis. For example, the facility administrator may determine the degree of safety risk on the basis of whether the offender's health status is such that the individual cannot move independently. If the administrator determines that the risk is greater than the long-term care facility can manage, the facility may choose not to admit the offender.

Federal law requires state law enforcement agencies to release relevant information about registered sex offenders when necessary to protect the public, but we did not identify a similar federal requirement pertaining to the parolee population. The federal requirement for registered sex offender notification allows states to implement this requirement at their discretion, within broad federal guidelines. Consequently, the extent to which states' community notification laws apply to all registered sex offenders or explicitly include nursing homes and ICFs-MR varies. Absent direct notification, these facilities may not know they house offenders or may only become aware of offenders through other means.
For example, in the case of registered sex offenders, facilities may identify some offenders by reviewing publicly available Web sites, while for parolees, they may become aware of the person's criminal background from a parole officer. When facility residents are known offenders, differing interpretations exist among states, industry, and long-term care facility officials as to whether sharing information about their prior convictions may violate the Privacy Rule issued by HHS under HIPAA.

Megan's Law, a 1996 amendment to the Wetterling Program statute, required each state to release information about registered sex offenders when necessary to protect the public. The law applied specifically to registered sex offenders and not to convicted sex offenders who were not obligated to register. Although Megan's Law stipulated that information about the victims of registered sex offenders was not to be released, it otherwise did not specify the information to be disseminated about registered sex offenders, did not mandate that community notification be uniform for all registered sex offenders, and did not specify how states were to release information. Consequently, states' community notification laws vary, particularly in terms of the extent to which notification by law enforcement entities applies to all registered sex offenders.

Such variation was evident in the notification laws of the eight states we reviewed. While two states we reviewed—Illinois and Utah—apply community notification requirements to all registered sex offenders uniformly in each state, the community notification requirements in the remaining six states—California, Florida, Minnesota, New Jersey, Ohio, and Oklahoma—vary depending on the crime committed by the registered sex offender. For example, New Jersey classifies its registered sex offenders into three categories based on their assessed risk of re-offending. For sex offenders determined to be lowest risk, state law requires notification of law enforcement agencies. In contrast, for the highest risk sex offenders, the law requires notification of additional entities, including schools, religious and youth organizations, and those likely to encounter the offender. Similarly, Florida's law explicitly requires broad community notification when individuals designated to be sexual predators reside in the community, but it does not require broad notification for other sex offenders.

Variation also exists in the extent to which state community notification laws explicitly require the notification of long-term care facilities. Four states we reviewed—California, Illinois, Minnesota, and Oklahoma—passed laws in summer and fall 2005 that specified long-term care facilities as entities to be notified for at least some registered sex offenders who entered them (2005 Cal. Adv. Legis. Serv. c. 466 (Dearing); 2005 Ill. Legis. Serv. 94-163 (West); 2005 Minn. Laws c. 243.166; 2005 Okla. Sess. Laws Serv. c. 465 (West)). Notification in these states is conducted by individual facility officials, state or law enforcement officials, or registered sex offenders themselves. For example, Illinois' law requires long-term care facilities to determine whether each resident or potential resident is a registered sex offender and to notify staff, residents or their legal guardians, and facility visitors when offenders are residents. Similarly, Oklahoma's law requires notification of these facilities by several methods.
For example, the Department of Corrections must notify the Department of Health when any person in its custody seeks placement in these facilities, and the Department of Health must then notify the facility of the potential for the placement of a registered sex offender. When residents are determined to be registered sex offenders, information about them must be displayed in the facility in an area that is accessible to staff, visitors, and residents. The law in California also requires state officials to notify long-term care facilities when registered sex offenders are released to them from the Department of Corrections and Rehabilitation, the State Department of Mental Health, or other state-operated places of confinement. The law does not provide for such notification when sex offenders enter long-term care facilities from the community. Unlike other states we reviewed, Minnesota's law requires registered sex offenders to disclose their status if seeking admission to long-term care facilities. Upon receiving such notification from certain registered sex offenders, long-term care facilities are responsible for sharing this information with other residents or their legal guardians. Minnesota also requires law enforcement officials to notify health care facilities if they become aware that a registered sex offender has been admitted for care.

Requirements in state community notification laws specifying that nursing homes and ICFs-MR be notified about registered sex offenders who are residents appear to be a recent trend. For example, a 2001 review of state community notification laws by the Bureau of Justice Statistics found that states generally did not notify nursing homes or ICFs-MR when offenders entered the facilities. The other four states we reviewed—Florida, New Jersey, Ohio, and Utah—do not specifically require the notification of long-term care facilities when registered sex offenders enter them.

Long-term care facilities in these states, or in states where community notification of such facilities is not required for all registered sex offenders, may not be aware of residents who are offenders or must rely on other methods to identify such individuals. For instance, administrators we interviewed at 8 of the 29 long-term care facilities indicated that one or more registered sex offenders had lived in their facilities for some period. Each of these 8 long-term care facilities was notified about the registered sex offenders, although the method of notification varied. For example, while 4 facilities were notified before the offenders entered them, either by offenders' family members or the state department of corrections, the 4 remaining facilities were notified after the registered sex offenders were admitted, either by local law enforcement officials who were verifying sex offenders' residential addresses or by an advocacy group conducting research on registered sex offenders living in certain long-term care facilities.

Long-term care facilities may access states' publicly available sex offender registry Web sites to determine where registered sex offenders reside. A 2003 amendment to the Wetterling Program statute required states to maintain a publicly available Web site with information about registered sex offenders.
The law did not provide instruction on how these Web sites should be designed or what specific information should be included. Depending on the state, these Web sites provide varying amounts of information to the public about registered sex offenders. For example, the Web site registry in each of the eight states we reviewed included some address information for all or a portion of the state's adult registered sex offenders. Five states we reviewed—Florida, Illinois, Ohio, Oklahoma, and Utah—provided the full residential address of all the state's adult registered sex offenders, while the three others—California, Minnesota, and New Jersey—included only certain registered sex offenders on their Web sites and did not always list their full addresses. For example, Minnesota separates offenders into three levels and includes Level 3 offenders—those deemed predatory or most likely to re-offend—on its Web site. Approximately 6 percent of the registered sex offenders in this state who are living in the community are assigned the highest risk level. Similarly, New Jersey includes certain moderate and all high-risk registered sex offenders on its Web site, which, according to a state official, represents about 16 percent of all registered sex offenders in the state. In California, a state official told us that its Web site registry includes at least some address information for approximately 74 percent of the state's registered sex offenders, including full address information for about 57 percent, who committed crimes considered to be the most serious. The remaining approximately 26 percent of the state's registered sex offenders are not posted on the Web site because they committed less severe offenses or are excluded from the Web site for various reasons, such as not being designated sexually violent predators. In addition, the Web sites of the eight states we reviewed include information about the crimes registered sex offenders committed; their names, nicknames, or aliases, when applicable; date of birth or age; and race or ethnicity.

While the NSOR database is not directly accessible by the general public, long-term care facilities can access the recently developed National Sex Offender Public Registry maintained by DOJ. This Web site, which was first launched in May 2005, seeks to compile public sex offender registry information available through state Web sites, and as of January 2006, it included public registry data from all but two states. Although this Web site provides the public with one-stop access to states' online sex offender registries, it may be of limited usefulness because states' sex offender registry Web sites, as described above, do not always include a comprehensive list of registered sex offenders.

We did not identify a federal law specifying community notification requirements for law enforcement when parolees enter the community that was similar to the federal law for registered sex offenders. However, three of the eight states we reviewed—Illinois, Minnesota, and Oklahoma—passed laws in summer 2005 that require community notification for offenders who have committed crimes other than sex offenses, including some offenders who are parolees. Illinois' law requires the state Department of Corrections to give some information to certain long-term care facilities when parolees or certain other offenders become residents.
In addition, these long-term care facilities are required to notify the other residents when parolees reside in their facilities. In Minnesota and Oklahoma, long-term care facilities receive community notification for some individuals convicted of non-sex offenses, including some parolees, under the same requirements as those for registered sex offenders. Minnesota's law applies to individuals convicted of certain crimes, including murder or kidnapping. Oklahoma's law requires notification for individuals who are required to register under the Mary Rippy Violent Crime Offenders Registration Act, which includes individuals convicted of crimes such as murder or manslaughter in the first degree.

Department of Corrections officials or other authorities in each of the eight states we reviewed stated that, as a matter of practice, they generally notified long-term care facilities when individuals released from prison, including parolees, are placed in such facilities. For example, according to officials in Ohio's Department of Rehabilitation and Corrections, when an inmate who needs long-term care is paroled, a parole officer works with the facility to ensure that medical records are transferred and that a plan of care is established to meet the needs of the parolee.

While the HIPAA Privacy Rule applies to individually identifiable health information, differing interpretations exist among state, industry, and long-term care facility officials we interviewed in the eight states regarding the applicability of the rule to facilities' efforts to notify others about residents who have prior convictions, such as those who are registered sex offenders or parolees. These differing interpretations existed regardless of whether this information was obtained from a medical record or in another way, such as from a law enforcement official. For instance, long-term care agency officials from three states we reviewed indicated that protection of health information under the HIPAA Privacy Rule did not extend to information on prior convictions. In addition, long-term care facility and other agency officials from these and three other states we reviewed maintained that it was permissible to disclose information about a resident's prior convictions to employees in a long-term care facility who needed to know in order to provide care for the resident. Yet other officials in six of the eight states we reviewed told us they were either unsure whether the HIPAA Privacy Rule would be violated by sharing information about the prior convictions of any offender living in a facility or believed that the HIPAA Privacy Rule did not apply to disclosing such information about residents who are offenders. Officials at 11 of the 29 long-term care facilities we interviewed in the eight states said that they were concerned they would violate the HIPAA Privacy Rule if they disclosed information about the prior convictions of offenders living in their respective facilities, but indicated that they would notify staff if they became aware of such residents.

We brought the issue of long-term care facilities' uncertainty regarding the applicability of the HIPAA Privacy Rule to the attention of an official of the Department of Health and Human Services Office for Civil Rights (HHS-OCR), the federal entity responsible for implementing and enforcing the HIPAA Privacy Rule.
The official indicated that HHS-OCR has not published regulations or other guidance specifically regarding the applicability of the HIPAA Privacy Rule to the disclosure of information related to prior convictions of long-term care facility residents. However, the official stated that to the extent that such information is maintained by long-term care facilities as protected health information under the HIPAA Privacy Rule, it could be used or disclosed for specifically permitted purposes, such as when necessary to run the health care operations of a facility or when required by another federal or state law. In addition, the HHS-OCR official indicated that affected entities, such as long-term care facilities, would need to determine on a case-by-case basis whether the information is protected health information and, if so, whether its intended use or disclosure is permitted by the HIPAA Privacy Rule. The official added that long-term care facilities should consult their legal counsel if they have questions in making this determination. Although HHS-OCR maintains a list of answers to frequently asked questions about the HIPAA Privacy Rule on its Web site, the list does not cover this specific issue. In commenting on a draft of this report, Department of Corrections officials from one state we reviewed stated that it would be helpful for HHS-OCR to describe some situations in which it believes HIPAA would not be applicable with regard to the disclosure of information about offenders admitted to health care facilities. They stated that HHS-OCR's direction to approach each case individually is not very helpful and that additional guidance would be very useful.

Residents' prior convictions alone would not be sufficient in most cases to subject them to supervision or separation requirements that differed from those for other residents, according to facility officials we interviewed. Administrators at only 2 of the 29 long-term care facilities we contacted indicated that they have a specific policy to separate offenders from other residents based solely on their prior convictions. Instead, long-term care facilities in the eight states we reviewed typically base supervision and separation decisions on behavioral issues that arise. For example, in the states we reviewed, several long-term care ombudsmen, industry association officials, and facility officials we interviewed indicated that the residents they have particular concerns about, in terms of behavioral problems, are those with mental illness, such as dementia, whose behaviors are apt to change as the disease progresses.

Although most officials we spoke with at long-term care facilities in the eight states we reviewed do not supervise or separate offenders based solely on their prior convictions, some officials indicated a potential future need for residential facilities, separate from long-term care facilities, exclusively for certain offenders. For instance, Minnesota state officials said that some long-term care facilities may be hesitant to accept sex offenders as residents in the future, believing that certain sex offenders pose a risk to the safety of other residents. Therefore, a state commission has recommended the development of secure health care settings that would serve people who have committed certain sex offenses and who may not otherwise have access to services.
In order to establish this facility, state officials are working with federal officials to resolve issues related to balancing resident rights with the safety interests of the larger community.

Even if long-term care facility officials wanted to impose different supervision and separation requirements on offenders, numerous factors could affect their ability to do so. For example, as previously noted, long-term care facilities were not always notified when individuals with prior convictions entered them. Federal laws we reviewed do not require long-term care facilities to obtain information about prior convictions, and among the eight states we reviewed, only Illinois had such a requirement. In addition, the assessment tools long-term care facilities in these eight states use to determine the health care needs of residents usually are not designed to gather information about prior convictions. Even if facilities obtained such information, the federal and state laws that we reviewed generally do not provide for specific supervision or separation practices for facility residents with prior convictions.

Each incident of resident abuse committed by offenders living in nursing homes—even if isolated or infrequent—is of concern. However, while long-term care facilities may learn that certain of their residents are sex offenders or parolees through required community notification or through other means, our findings did not indicate that residents with prior convictions are more likely than other residents to commit abuse within these facilities. Absent such evidence, it may be more appropriate to focus on residents' behaviors rather than their prior convictions when assessing the potential for committing abuse. Facility officials we interviewed more frequently expressed concerns about the behavior and potential for abuse by cognitively impaired and mentally ill residents than by offenders who may have no behavioral issues. Facilities already document problematic behaviors and assess the risk of individuals through resident assessments and care planning procedures, and when they accept residents with behavioral issues or such issues arise after admission, they must appropriately address those behaviors through care planning for these individuals or transfer them to facilities better equipped to handle such residents. In addition, focusing on prior convictions alone can be problematic in that some offenders, such as those with certain physical impairments, likely do not pose a risk to other residents. Nonetheless, in the interest of identifying potential risks and taking precautionary measures, four states we reviewed—California, Illinois, Oklahoma, and Minnesota—enacted measures in 2005 to require notification to long-term care facilities when offenders are residents. Assessing their experiences as they implement these measures over time, including any negative impact on offenders' access to long-term care, may be instructive for other states with similar concerns.

While it was not part of our original objectives to fully evaluate the NSOR, it was our primary data source for identifying registered sex offenders residing in long-term care facilities. In the course of our analysis, we became aware that the FBI's NSOR, which links states' sex offender registration programs so that law enforcement agencies can identify sex offenders regardless of which state maintains their registration, was incomplete for the seven states we reviewed for this purpose.
States face various barriers to fully submitting their registry records to the NSOR, including difficulties such as obtaining the required FBI number for each offender and a lack of staff resources. While the FBI has been reviewing issues related to states' submission of records to the NSOR, it currently does not track submission rates, so the proportion of state records missing from the NSOR is not precisely known. Continued improvements in the comprehensiveness of the NSOR can enhance the ability of local law enforcement agencies to identify offenders and notify the community, including long-term care facilities, where appropriate.

We recommend that the Attorney General direct the FBI to take the following two actions: (1) assess the completeness of the NSOR, including state submission rates, and (2) evaluate options for making it a more comprehensive national database of registered sex offenders.

We provided copies of a draft of this report for comment to DOJ; HHS; and the eight states we reviewed: California, Florida, Illinois, Minnesota, New Jersey, Ohio, Oklahoma, and Utah. We received written responses from DOJ and HHS, which are included in this report as appendixes III and IV, respectively. We also received comments from California, Florida, Illinois, Minnesota, New Jersey, and Oklahoma. These agency and state comments and our evaluation follow.

DOJ commented that the recommendations are unnecessary because the FBI already performs assessments of the NSOR and explores options for improvement. For example, DOJ said that the FBI conducts triennial audits of states' NSOR participation, provides training and technical assistance to states, and seeks input from states about what assistance they need to improve their level of participation in the NSOR. DOJ characterized our evaluation as incomplete because we did not ask for information about the entire NSOR program or include a more extensive discussion in the draft report of their efforts to improve the NSOR. We obtained information about these efforts over the course of our work through interviews with FBI staff, documents available on the FBI's Web site, and state officials. Because a comprehensive evaluation of the NSOR was not one of our reporting objectives, we did not include a complete listing of the FBI's assistance to states in our draft report. To respond to DOJ's comments, we revised the report to include additional information about the FBI's initiatives to assist states in data submission and to assess the accuracy of NSOR records. Including this additional information, however, does not alter our overall finding concerning the discrepancy between state sex offender registries and states' NSOR submissions. We acknowledge, as DOJ pointed out in its comments, that there may be valid reasons for a certain amount of discrepancy between state registries and their NSOR submissions, such as a state's choosing not to submit the records of sex offenders still incarcerated, since their whereabouts do not need to be tracked by the NSOR until their release. We also acknowledge the challenge states face in maintaining current and accurate information about registered sex offenders. However, we continue to believe that the intent of the recommendations remains valid because the evidence we analyzed for a sample of states shows that a significant percentage of registered sex offender records are not being successfully submitted by some states to the NSOR, despite the states' and FBI's efforts to date.
We believe the FBI needs to track state submission rates to the NSOR as a measure of comprehensiveness that can quantify the remaining gap as well as improvements over time. We therefore revised the first recommendation to specify that we are recommending that the FBI assess state submission rates as a means of assessing the completeness of the NSOR.

DOJ commented on three additional issues:

The risk posed by offenders residing in long-term care facilities. DOJ suggested that GAO discounted the risk posed by sex offenders residing in long-term care facilities based on insufficient evidence. We agree that the placement of a sex offender into a long-term care facility requires careful evaluation, particularly as the often-frail condition of long-term care residents makes them vulnerable to victimization. Based on our research and interviews with administrators of long-term care facilities, it is our view that the risk posed by offenders should be considered on a case-by-case basis. The presumption that offenders pose a threat to other residents could lead facilities to unnecessarily deny admission to low-risk offenders or unnecessarily seclude them from other residents. DOJ did not provide any new evidence to support its assertion that sex offenders pose a greater threat than the analysis we presented in the report indicates.

The likelihood that convicted sex offenders will commit additional sex offenses after their release from prison. DOJ objected to our citation of a sex offender recidivism rate of 14 percent, saying that the 5-year post-incarceration period on which the rate was based was too short to support inferences about the likelihood that a sex offender will commit additional sex offenses and that there is evidence that sex offenses are underreported. We revised the report to clarify that the same research also cites a 20-year sex offender recidivism rate of 27 percent.

The usefulness of the NSOR in assisting law enforcement to identify sex offenders residing in long-term care facilities. DOJ questioned GAO's assertion that improvements in the comprehensiveness of the NSOR would improve the ability of local law enforcement to identify sex offenders residing in nursing homes, commenting that offenders would either already be on the state registry and thus identifiable or would not be registered and therefore not included in the NSOR. We believe that a more comprehensive NSOR would improve the tracking of sex offenders who enter long-term care facilities in the same way it improves the tracking of sex offenders generally. If offenders are registered in one state but move to another state and fail to register, their records could be in the NSOR from the original state but not on the registry of the second state. A more comprehensive NSOR thus better ensures the national tracking of sex offenders who may choose to cross state lines.

HHS commented that this report brought to its attention the uncertainty that some long-term care facility officials have about the application of the HIPAA Privacy Rule to the disclosure of conviction information, as well as the possibility that future guidance may be needed. HHS commented that the report will help to resolve the uncertainty about the HIPAA Privacy Rule, including by clarifying that disclosures could be allowed for activities necessary for the safe operation of the facility or as required by state laws. DOJ, HHS, and the states also provided technical comments, which we incorporated as appropriate.
As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies to the Attorney General, the Secretary of Health and Human Services, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7118 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To determine the prevalence of registered sex offenders residing in long-term care facilities nationwide, we matched the addresses of registered sex offenders listed in the Federal Bureau of Investigation's (FBI) National Sex Offender Registry (NSOR) as of January 3, 2005, with the addresses of nursing homes and intermediate care facilities for people with mental retardation (ICF-MR) listed in the Centers for Medicare & Medicaid Services' (CMS) Online Survey, Certification and Reporting system (OSCAR) database. After standardizing address spellings and abbreviations, we used SAS, a statistical analysis program, to compare registered sex offender and long-term care facility addresses. Using a SAS function that quantifies the magnitude of difference between two text variables, we identified exact matches as well as near matches in which the addresses differed slightly. We manually reviewed the addresses that differed slightly to determine whether they were the same address.

To evaluate the comprehensiveness of the NSOR, we requested the full state sex offender registries from 8 states—California, Florida, Illinois, Minnesota, Ohio, Oklahoma, New Jersey, and Utah—in order to compare the number of records in each registry to the number of records in the NSOR for that state. We chose these 8 states on the basis of a number of criteria, including variation in geographic location and in the number of registered sex offenders identified as living in long-term care facilities based on our preliminary analyses. California state officials did not provide us with the state's sex offender registry, citing concerns about state privacy laws. We also interviewed FBI staff about the management of the NSOR database. To obtain information about the administration and content of state registries, including their submission of records to the NSOR, we interviewed state registry administrators from the 8 states we reviewed and submitted a questionnaire via e-mail to the remaining 42 states, receiving responses from 20 of them.

Since no national data source on parolees that includes address information exists, we obtained parolee databases from each of the eight states we reviewed. We matched parolee addresses to nursing homes and ICFs-MR in OSCAR using the same methods we used for our analysis of NSOR and state sex offender registries. We excluded some records from our analysis because there was no valid domestic address for the offender. Table 3 shows the number of records we analyzed from all data sources for both registered sex offenders and parolees, and the number of records excluded from each source because of missing, invalid, or otherwise unusable address information.
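To illustrate the address-matching approach described above, the following Python sketch performs the same two steps: standardizing abbreviations, then scoring string similarity to separate exact matches from near matches that warrant manual review. GAO's actual analysis used SAS; the abbreviation rules, the similarity measure (difflib's ratio), and the 0.90 near-match threshold below are illustrative assumptions, not the parameters GAO used.

```python
import difflib

# Common abbreviation expansions applied before comparison; these particular
# rules are illustrative assumptions, not GAO's actual standardization list.
ABBREVIATIONS = {
    " st ": " street ", " ave ": " avenue ", " rd ": " road ",
    " dr ": " drive ", " blvd ": " boulevard ",
    " n ": " north ", " s ": " south ", " e ": " east ", " w ": " west ",
}

def standardize(address: str) -> str:
    """Lowercase an address and expand common abbreviations so that, e.g.,
    '123 N Main St' and '123 North Main Street' compare as equal."""
    text = f" {address.lower().strip()} "
    for abbrev, full in ABBREVIATIONS.items():
        text = text.replace(abbrev, full)
    return " ".join(text.split())

def match_addresses(offender_addresses, facility_addresses, near_threshold=0.90):
    """Return exact matches and near matches between two address lists.
    Near matches (similar but not identical) are the candidates that the
    report describes as manually reviewed; the 0.90 threshold is assumed."""
    facilities = {standardize(a): a for a in facility_addresses}
    exact, near = [], []
    for raw in offender_addresses:
        addr = standardize(raw)
        if addr in facilities:
            exact.append((raw, facilities[addr]))
            continue
        for fac_std, fac_raw in facilities.items():
            score = difflib.SequenceMatcher(None, addr, fac_std).ratio()
            if score >= near_threshold:
                near.append((raw, fac_raw, round(score, 3)))
    return exact, near

exact, near = match_addresses(
    ["123 N Main St Apt 4", "55 Oak Avenu"],
    ["123 North Main Street Apt 4", "55 Oak Avenue", "9 Elm Road"],
)
print(exact)  # [('123 N Main St Apt 4', '123 North Main Street Apt 4')]
print(near)   # [('55 Oak Avenu', '55 Oak Avenue', 0.96)] flagged for manual review
```

A generalized edit-distance score, as SAS provides, would play the same role as the similarity ratio used here.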
To obtain information about resident abuse perpetrated by registered sex offenders and parolees, we reviewed existing research and prior GAO reports. We also interviewed long-term care facility administrators in the eight states we reviewed, including administrators at facilities with registered sex offenders as residents, as well as state department of health and industry association officials and ombudsmen.

To identify facilities for administrator interviews, we initially chose four long-term care facilities in each of the eight states we reviewed. These facilities were chosen from two groups based on our initial analysis of NSOR and OSCAR data: one group comprised facilities with registered sex offender matches, and the other comprised facilities without such matches. When possible, we selected two facilities from each group. If a selected facility refused our request for an interview, we selected another facility as a replacement from the same group. If a state did not have enough facilities with or without sex offenders to complete two interviews from each group of facilities, we used facilities from the other group. In all, we interviewed administrators at 29 long-term care facilities, 11 with registered sex offender matches and 18 without matches. We achieved a 91 percent response rate for the facility interviews.

To determine whether federal laws provide for notification of facility staff, residents, and residents' families when sex offenders or parolees live in long-term care facilities or for the supervision and separation of sex offenders and parolees living in these facilities, we reviewed federal laws and interviewed Department of Justice and CMS officials. We also interviewed Department of Health and Human Services Office for Civil Rights officials about the applicability of the Health Insurance Portability and Accountability Act of 1996 Privacy Rule to the notification of facilities about residents who are sex offenders or parolees. To determine whether the states we reviewed have laws, or long-term care facilities have practices, that provide for notification about these individuals and to determine the extent to which these individuals are subject to supervision and separation requirements that differ from those for other residents, we reviewed laws and interviewed state officials responsible for long-term care facility licensing, industry officials, long-term care ombudsmen, and the administrators at 29 long-term care facilities, which were chosen based on the criteria discussed above. We also interviewed Department of Corrections officials regarding their efforts to inform facilities about their placement of parolees in them. To determine what information on sex offenders is available to the public, we also reviewed the state sex offender Web site registries available in the states we reviewed.

The key sources used to identify registered sex offenders and parolees living in long-term care facilities included CMS's OSCAR database, the NSOR, and parolee databases from selected states. To assess the reliability of these data, we conducted electronic data testing, reviewed relevant documentation, and interviewed knowledgeable agency officials about data quality control procedures. We determined that while the NSOR does not include all registered or convicted sex offenders, its records are regularly audited and are sufficiently reliable for the purposes of this report.
The lack of comprehensiveness of the data was evaluated and taken into account in our discussion of the results. The OSCAR database and state parolee databases were also found to be sufficiently reliable for our purposes. We conducted our work from September 2004 through February 2006 in accordance with generally accepted government auditing standards.

To determine the prevalence of registered sex offenders residing in long-term care facilities nationwide, we matched the addresses of registered sex offenders listed in the NSOR as of January 3, 2005, with the addresses of nursing homes and ICFs-MR listed in CMS's OSCAR database. Using this methodology, we identified 683 registered sex offenders living in long-term care facilities. The number of registered sex offenders identified as residing in long-term care facilities in each state varied considerably, ranging from 0 to 144, as shown in table 4.

In addition to the contact named above, Susan T. Anthony, Assistant Director; George Bogart; Katherine Crumley; Michaela M. Monaghan; Elizabeth T. Morrison; Sari B. Shuman; and Kara Sokol made key contributions to this report.
Approximately 23,000 nursing homes and intermediate care facilities for people with mental retardation (ICF-MR) receive federal Medicare and Medicaid funding. Media reports have cited examples of convicted sex offenders residing in long-term care facilities and, in some cases, allegedly abusing other residents. Given concerns about resident safety, GAO was asked to assess (1) the prevalence of sex offenders and others on parole for non-sex offenses living in long-term care facilities and the extent of any abuse they may have caused, (2) the legal requirements for notifying facilities and others when offenders are residents, and (3) the extent to which facilities have different supervision and separation requirements for offenders. GAO analyzed a national database for sex offenders and analyzed state databases in a sample of eight states for sex offenders and parolees.

By analyzing the FBI's National Sex Offender Registry (NSOR), which is a compilation of sex offender registries submitted by all states, GAO identified about 700 registered sex offenders living in nursing homes or ICFs-MR during 2005. Most identified sex offenders were male, under age 65, and living in nursing homes, and they represented 0.05 percent of the 1.5 million residents of nursing homes and ICFs-MR. About 3 percent of nursing homes and 0.7 percent of ICFs-MR housed at least 1 identified sex offender during 2005. However, these estimates are understated due to data limitations. For example, because of a lack of resources or an inability to comply with certain FBI reporting requirements, states have had varying degrees of difficulty submitting their full state registries to the NSOR. While the FBI does not track NSOR submission rates, GAO compared sex offender registry data from seven of the eight states reviewed to NSOR data and found that the NSOR data included about 57 percent of sex offenders registered in these states, with submission rates ranging from 1 percent to 83 percent. Because a national data source on parolees that included address information was not available, GAO also obtained parolee databases from the eight reviewed states and identified 204 offenders on parole for non-sex offenses living in long-term care facilities.

GAO could not determine the overall risk that registered sex offenders and parolees pose to other residents in long-term care facilities because offender status is not tracked with abuse reporting. Facility administrators expressed greater concern over the risk posed by cognitively impaired or mentally ill residents.

Federal law requires state law enforcement agencies to release relevant information about registered sex offenders when necessary to protect the public, but GAO did not identify a similar federal law for the parolee population. States have broad discretion in how to implement the requirement for registered sex offender notification. Therefore, the extent to which states' community notification laws apply to all registered sex offenders or explicitly include long-term care facilities varies. For example, four of the eight states GAO reviewed--California, Illinois, Minnesota, and Oklahoma--had laws that specified long-term care facilities as entities to be notified for at least some registered sex offenders who entered them.
However, some facility administrators GAO contacted were uncertain as to whether they could share information with staff and others about residents who were known offenders in light of the Privacy Rule issued under the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

Long-term care facilities GAO contacted do not routinely impose different supervision or separation requirements on residents who are offenders based solely on their prior convictions. Instead, these facilities base such decisions on the demonstrated behaviors of residents. Even if long-term care facilities wanted to impose different supervision and separation requirements on offenders, their ability to do so is limited because they are not always aware of residents' prior convictions.
The Child Support Enforcement (CSE) program, created in 1975 under Title IV-D of the Social Security Act, established federal standards for state CSE programs to ensure that parents provide support to their children. Services provided through the CSE program include locating absent noncustodial parents, establishing paternity and support orders, and collecting and distributing child support payments. All 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands operate CSE programs. However, because family law, which governs many aspects of child support, is generally under the purview of the states rather than the federal government, each of the 54 CSE programs is governed by some unique state laws and procedures.

Although the states administer the child support program, the federal government plays a major role through OCSE within the Administration for Children and Families of the Department of Health and Human Services. This role includes funding most of the program, establishing enforcement policies and guidance, providing technical assistance, and overseeing and monitoring state programs. As part of its oversight role, OCSE reviews state plans for each of the state programs. These plans describe the nature and scope of a state's child support program and specify the procedures and policies adopted by each state to ensure that its program complies with all federal requirements. OCSE's approval is a condition for federal funding of state programs.

PRWORA strengthened the CSE program by requiring, among other things, that states (1) establish an integrated, automated network linking all states to information about the location and assets of parents, (2) increase the percentage of fathers identified, and (3) implement more enforcement techniques for collection of child support from noncustodial parents. Additionally, PRWORA changed federal welfare policy, eliminating eligible families' legal entitlement to cash assistance and creating Temporary Assistance for Needy Families (TANF). TANF emphasizes the importance of work and personal responsibility rather than dependence on government benefits. After 2 years of assistance, or sooner if the state determines that the recipient is ready, TANF adults are generally required to be engaged in work or work-related activities. A lifetime limit of 60 months (or less, at the state's option) is placed on adults' receipt of cash benefits. Families receiving TANF benefits or benefits under the federally assisted foster care program or the Medicaid program automatically receive CSE services free of charge. Under PRWORA, TANF recipients generally must assign their rights to child support payments to the state.

The CSE program provides services to anyone requesting them, regardless of income. In fiscal year 2000, the program managed more than 17 million cases, 35 percent of which involved clients who had never received assistance. Faced with growing caseloads in an environment of resource constraints and increasing federal requirements, some states contracted with private firms to provide some or all services. Generally, these firms are authorized to operate as agents of the state agencies and have access to most information usually available only to state agencies. Their employees are subject to the same penalties or other actions as state agency employees if they misuse the information. Unlike firms under contract with state agencies, other private firms are involved in collecting child support as independent business ventures.
These firms contract with custodial parents and concentrate on locating absent noncustodial parents and collecting child support payments. These firms are the focus of this report.

Data show that the amount of child support that was legally owed but unpaid almost doubled during the 4-year period from fiscal year 1996 to fiscal year 2000, even with increases in total collections. However, the amount owed is understated as a result of data limitations. The increase in the amount of child support owed could reflect, in part, a rise in the number of support orders established or adjustments in the amounts owed on previously established support orders.

Available data show that during the 4-year period from fiscal year 1996 to 2000, the amount of child support that was legally owed but unpaid grew from at least $45 billion in fiscal year 1996 to at least $89 billion in fiscal year 2000 (see table 1). This amount represents all support uncollected since the program was established in 1975. Although total state agency collections increased during this period from $12 billion to $18 billion and the total number of cases for which a collection was made increased by 83 percent, collections have been less than the amount that became due during the period. Also, collections as a percentage of the amount due dropped. In fiscal year 1996, collections represented 21 percent of the total amount due but dropped to 17 percent of the total due in fiscal year 2000. As a result, the amount owed at the end of the period is greater than the amount owed at the beginning of the period.

OCSE data do not represent the total amount of child support owed because the data reflect only amounts associated with cases that are handled by, and distributed through, the state agencies. The data do not include cases in which child support is paid voluntarily through agreements between parents or in which custodial parents hire private attorneys or collection firms without involving the state agency. In addition, OCSE data do not include unpaid child support associated with closed cases. State agencies can close cases under certain circumstances after a support order has been established, even when child support is still owed. For example, state agencies can close a case if the noncustodial parent's location is unknown and the state has made diligent efforts to locate the absent parent, or if the noncustodial parent cannot pay support because the parent has been institutionalized in a psychiatric facility, is incarcerated with no chance for parole, or has a medically verified total and permanent disability with no evidence of support potential.

The increases in the amount of child support owed in spite of increased collections could be due, in part, to the rise in the number of support orders established or the rise in adjustments of individual support orders. From fiscal year 1996 to fiscal year 2000, the number of support orders established by OCSE increased by 9 percent, from 1.08 million to 1.17 million. Furthermore, provisions in PRWORA may lead to further increases in the number of support orders and the amount of child support owed. PRWORA requires that paternity be established for 90 percent of the state agency cases. OCSE reports that paternity was established for about 1.6 million children in fiscal year 2000, an increase of 46 percent over the 1.1 million paternities established in fiscal year 1996. Paternity must be established before child support orders can be issued.
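As a rough check on the collection percentages cited above, the figures can be reproduced if "total amount due" is read as the sum of the amount collected and the uncollected balance for each year; that reading is an inference from the reported figures, not a definition stated by OCSE.

```python
# Reconstructing the collections-to-amount-due percentages cited above.
# Assumes total amount due = collections + uncollected balance (an inference
# from the reported figures, not an OCSE definition).
for year, collected, owed in [(1996, 12, 45), (2000, 18, 89)]:  # $ billions
    share = collected / (collected + owed)
    print(f"FY{year}: {share:.0%} of the amount due was collected")
# FY1996: 21% of the amount due was collected
# FY2000: 17% of the amount due was collected
```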
PRWORA also provided a simplified process for review and adjustment of all child support orders every 3 years. These reviews determine whether the amount of child support previously ordered is reasonable given the circumstances of both the noncustodial and the custodial parent. If these reviews result in more dollar increases than decreases in the amounts ordered, they could further increase the future amount of child support owed.

Thousands of private and public sector entities, including private firms and state agencies, can collect child support, and private firms differ among themselves and from state agencies. Specifically, two types of private sector entities—private firms and attorneys—and three types of public sector entities—state agencies, other public agencies, and court-appointed guardians—can collect child support (see fig. 1). The private firms differ among themselves with respect to such characteristics as location, client base, and years in business. Further, the private firms differ significantly from the state agencies in that private firms exercise greater discretion when accepting cases, have smaller caseloads, and charge higher fees for their services.

Private attorneys make up the largest group of private entities. The Bureau of Labor Statistics estimates that there are about 500,000 lawyers employed nationally. Representatives of the American Bar Association told us that approximately 8,000 attorneys are members of the Family Law Division and that nearly every family law attorney has worked on a child support enforcement case at one time or another. They also said that although family law attorneys are the most likely to work on a child support enforcement case, other attorneys who do not specialize in family law, such as corporate attorneys, may also collect child support. In addition to attorneys, private collection firms, including the private firms that are the focus of this report, can also collect child support. A representative of the American Collectors Association, a trade organization of credit and collection professionals, told us he estimated that approximately 8,000 private collection firms operate in the United States and that about one-third of these firms have worked on a child support enforcement case at some time.

Three kinds of public sector entities collect child support—state child support enforcement agencies (state agencies), other government agencies, and court-appointed guardians. There are 54 state agencies, one in each state, the District of Columbia, Puerto Rico, Guam, and the Virgin Islands. About 100 other government agencies can collect child support. According to state agency directors, seven states—Arizona, Florida, Kansas, Minnesota, Missouri, North Dakota, and Texas—have county-operated agencies collecting child support that are not part of the federal child support enforcement program. For example, in Florida, the Broward County Support Enforcement Division collects child support payments but accepts only non-TANF clients and only cases in which both parents live in Florida and one parent lives in Broward County. In addition to state and other public agencies, court-appointed guardians can collect child support. These guardians can be individuals, government organizations, or nonprofit groups that are appointed by the court as a guardian of a minor and are entitled to collect child support from an absent parent.
Private firms differed from one another in a number of respects, including location, client base, time in business, caseload size, and the proportion of business devoted to child support activities. The 38 private firms that we identified through various search efforts as collectors of child support are based in 16 states, as shown in figure 2. Texas had the highest number of firms—14, or 37 percent of the total number. In 15 other states, the number of firms ranged from as few as 1 to as many as 4. We did not identify any private firms based in the remaining 34 states. Responses from the 24 private firms that participated in our structured telephone interviews indicated that these private firms handled, in total, an estimated 30,000 cases. Most reported having clients from all states. However, 16 private firm officials told us that because of new state laws that restrict their operations, they would not accept clients who live in particular states. Examples of such restrictions include requiring private firms to obtain a license, requiring firms to be bonded, or limiting the percentage of fees that firms can charge. Table 2 summarizes four characteristics of the 24 firms that participated in our structured telephone interviews. Parents stated that they most often sought the services of private firms because the state agency had failed to collect their child support. We reviewed 138 randomly selected applications at one private firm and analyzed the answers to the question, “Why seek the services of a private firm?” Almost two-thirds of the applicants responded that they did so because the state or local child support enforcement agency was unable to obtain their child support. Other reasons were also cited by these applicants and are summarized in figure 3. Private firms, unlike state agencies, exercise discretion when accepting child support cases. A representative of one private firm that we visited explained that the criteria for accepting, refusing, or even closing cases are not fixed. Private firms consider the costs and resources required for a case before they accept it or continue to work on it. Another private firm official stated that “it is strictly a business decision” whether to accept or decline a case. Officials of the 24 firms that participated in our structured telephone interviews reported that their firms required a legally enforceable child support order before opening a case. Furthermore, all 24 private firms accepted cases in which the children were no longer minors and therefore considered emancipated. Federal and state laws largely mandate the kinds of cases that state agencies must accept and when cases can be closed. State agencies generally accept all clients whether or not a support order has been established and regardless of the amount of child support owed. However, half of the state agencies will not accept cases in which the children are emancipated. Furthermore, all state agencies accept current and former TANF recipients. (See table 3.) As a result of the differences in case acceptance criteria, private firm and state agency caseloads differed in characteristics such as size, average arrearage owed, and percentage of TANF clients. Responses from our structured telephone interviews indicated, as shown in table 4, that the median caseload for private firms is significantly lower than that for state agencies, while the average arrearage balance is significantly higher. 
Although most private firms we interviewed do not accept TANF clients, almost a fifth of the total state agency cases involved a TANF client. All of the private firms that we interviewed charged all of their clients a fee based on a percentage of the collections. Information obtained from our structured telephone interviews indicated that the average fee charged was 29 percent. Additionally, half of those firms charged clients an application fee, averaging $95, and about half charged clients other costs or fees, including attorney costs or fees for specific enforcement actions such as filing a lien against personal property. Generally, the private firms that we visited collect their fees by having the custodial parent change his or her address in the state agency system to direct all payments to the private firm. The firm then deducts its fees from the payments received and sends the remaining amount to the custodial parent. State agencies provide services to families receiving TANF, Medicaid, or foster care payments free of charge. Other families must apply for services, and state agencies must charge an application fee not to exceed $25. Eighteen state agencies absorb the application fee or charge up to $1. The other 36 states charge application fees, service fees, or both. Fifteen state agencies charge application fees ranging from $5 to $25, and 10 state agencies charge various service fees such as a $25 annual case maintenance fee or a $250 fee to establish a support order. Eleven state agencies charge various service fees as well as application fees. Figure 4 summarizes the types of fees charged by state agencies. Private firms and state agencies reported similar collection experiences, but their information sources and collection practices differed. Both private firms and state agencies reported collecting amounts from about 60 percent of their cases. While private firms reported that they relied heavily on information vendors to locate noncustodial parents and their assets, state agencies reported that they primarily relied on state and federal databases for the same information. The collection practices of private firms and state agencies also differed in that private firms reported relying on personal phone contacts with noncustodial parents and third parties, such as relatives, neighbors, and friends, whereas state agencies did not contact third parties for payment. Both private firms and state agencies that participated in our structured telephone interviews estimated that they collected amounts from about 60 percent of their cases, on average. The similarities in reported collection experiences may reflect a similarity in difficulty of cases in spite of differences in the characteristics of the cases handled by private firms and state agencies. For example, private firms reported more than twice the percentage of interstate cases that state agencies reported, and OCSE describes interstate cases as some of the most difficult to pursue. Private firms reported that, on average, 57 percent of their cases are interstate, while state agencies reported an average of 24 percent. On the other hand, state agencies reported having more cases in which the noncustodial parent had no income or assets than the private firms reported. Reasons cited most often by private firm and state agency officials for not being able to collect child support were the same: failure to locate the noncustodial parent, lack of income or assets on the part of the noncustodial parent, or incarceration of the noncustodial parent.
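To illustrate the fee arithmetic with hypothetical numbers (our own illustration using the averages reported above, not any actual case): if a firm charging the average 29 percent contingency fee collected $1,000 in support, it would retain $290 and forward $710 to the custodial parent; a custodial parent who had also paid the average $95 application fee would net $615 on that first $1,000 collected, before any separate enforcement costs or fees.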
However, as shown in figure 5, state agency officials cited these reasons more often than did officials of private firms. Twenty-two of the 24 private firms that participated in our structured telephone interviews used information vendors as their primary source for locating noncustodial parents and their assets. Information vendors are private businesses with extensive search capabilities enabling them to obtain large amounts of private information about individuals, such as addresses and telephone numbers, drivers’ license numbers, location of property and other assets, social security numbers, and information from court records. Information vendors sell this information to private firms or any other interested parties. Ninety-two percent of the private firms, compared with 35 percent of state agencies, reported that they used information vendors. State agencies relied heavily on automated interfaces with federal and state databases to locate absent parents and obtain asset information. A primary source of data for state agencies is the Federal Parent Locator Service (FPLS), an automated database containing information from state parent locator databases, employer reports of new hires, and the federal case registry of support orders. FPLS data include information such as individuals’ home addresses, asset information, social security numbers, and employers’ names and addresses. The FPLS interfaces with a number of federal agencies including the Internal Revenue Service, Social Security Administration, and the Department of Defense. State agency systems also automatically request location information from other state agencies such as departments of human services, comptrollers for state taxes, motor vehicle departments, unemployment offices, law enforcement agencies, and phone and utility companies. Both private firms and state agencies called noncustodial parents to collect child support, but only private firms called third parties to collect child support. Thirty-seven state agencies that participated in our structured telephone interviews said that they called the noncustodial parent to collect child support payments. Our review of the case file notes from the private firms that we visited showed that they repeatedly called noncustodial parents to collect child support payments. In many cases, the case file notes showed that these calls included reminding noncustodial parents that they could go to jail if they did not pay what was owed. Private firms also called third parties, such as friends, relatives, and neighbors, to locate noncustodial parents and to persuade the third party to prevail upon the noncustodial parent to make payments. In at least two instances, a private firm that we visited was successful in persuading the mothers of the noncustodial parents to make the child support payments. In contrast, no state agency that we surveyed said that it encouraged noncustodial parents’ relatives to make payments. Generally, the same enforcement tools are available to private firms and state agencies, but depending on federal and state law, the processes that they must follow to use them often differ. Private firms must petition a government agency or the court to use many enforcement tools that some state agencies can implement independently through administrative processes. Our structured telephone interviews indicate that some enforcement tools were used more than others.
One of the most widely used and effective enforcement tools, wage withholding, has been used improperly by private firms, in part because the form that OCSE developed for wage withholding is ambiguous and the related guidance makes including certain information optional, thereby inhibiting an employer’s ability to ensure that wage withholding has been properly ordered. A complex mix of federal, state, and local law governs access to enforcement tools by private firms and state agencies. Private firms must either petition the court or work with a state agency to access many of the enforcement tools. Private firms, unlike state agencies, cannot intercept federal tax refunds. Only the courts or a state agency may authorize wage withholding. As a result, when wage withholding has not been previously authorized, private firms must ask a state agency or the court to issue a wage withholding order. At one private firm we visited, we found that the firm had prepared a wage withholding order, provided it to a state agency, and the state agency then issued the order. On the other hand, when the court or a state agency has already authorized wage withholding, a private firm may send a notice of wage withholding directly to the employer. Figure 6 indicates how the 24 private firms that participated in our structured telephone interviews were able to use different enforcement tools. State agency access to the enforcement tools depends on the tool and the state. Some state agencies have administrative authority to use some of the enforcement tools without petitioning the court. For example, state agencies in New York and South Carolina can administratively place liens on property, while state agencies in North Carolina and Maryland must petition the courts to take this action. However, the state agency in Illinois may administratively place some liens but must petition the court to place liens on real estate. State agencies in Idaho and Wisconsin have administrative authority to seize property, whereas the agencies in Michigan and Wyoming must petition the court. Both private firms and state agency officials indicated that the tool they used most frequently was wage withholding. In our structured telephone interviews, private firm officials said that they most frequently used, when applicable, (1) wage withholding, (2) liens on real estate or other assets, and (3) credit bureau reporting. State agency officials indicated that they most frequently used (1) wage withholding, (2) federal tax refund intercept, and (3) credit bureau reporting. Wage withholding is a procedure by which an employer automatically deducts amounts from an employee’s wages or income to pay a debt or a child support obligation. OCSE considers it the most effective enforcement tool for collecting child support, reporting that it is responsible for approximately 62 percent of successful collections. The process for withholding wages differs among the states, depending on the law of the particular state. However, in all states, an approved “tribunal” must authorize wage withholding. Private firms cannot issue wage withholding orders or otherwise authorize wage withholding. They can request that the appropriate tribunal authorize wage withholding, or they can notify an employer, in a specific case, that wage withholding has been authorized. All states have an administrative process whereby state agencies can issue orders to withhold child support payments from a noncustodial parent’s paycheck without going through the courts. 
Wage withholding for child support may be authorized by one of three documents: (1) divorce decree, (2) child support order, or (3) wage withholding order. Thus, there may be circumstances where a separate and specific “wage withholding order” must be issued, because wage withholding has not been authorized in a divorce decree or child support order. Before an employer can begin withholding wages from the noncustodial parent’s pay for child support, the employer must receive either an authorized wage withholding order or a notice that wage withholding has been authorized. There is no distinction between how an employer must respond to a wage withholding order or a notice. Upon receipt of an order or notice, if it appears to be valid, an employer is required by law to provide a copy to the employee and begin withholding child support from the employee’s wages. If an employer fails to withhold income as the order or notice directs, the employer is liable both for the accumulated amount that should have been withheld from the employee’s income and for any other penalties set by state law (see app. I, item e). Furthermore, the law protects an employer from civil liability to an individual or agency if the form is in error. The employee may contest the validity of the wage withholding or the amount withheld as a result of a mistake of fact. As required by law, OCSE developed a standard form (OMB 0970-0154) that everyone must use and issued guidance for wage withholding. As the form’s title indicates, “Order/Notice to Withhold Child Support” (see app. I, item a), the form is used both as an order and as a notice, which makes it difficult for employers to tell whether the form was sent by a state agency or a private firm. Moreover, OCSE’s guidance makes it difficult for employers to ensure the validity of a wage withholding notice when sent by a private firm. Because private firms cannot authorize wage withholding, when employers receive a notice from a private firm without the underlying legal support, they do not know if wage withholding has been authorized by an appropriate authority. While the form provides a space for the sender to provide information about the underlying order and the issuing state (see app. I, items b and c), the guidance does not require the sender to provide the date of the underlying order or a copy. In fact, OCSE guidance states that the employer may not request a copy of the underlying order. OCSE officials explained that this prohibition was intended to reduce the burden placed on state agencies that issue several thousand wage withholding orders per year. Furthermore, the guidance does not specify who should sign the form as the authorizing official, although the form includes a place to indicate the name, title, and signature of the authorizing official (see app. I, item d). We found that an official at one private firm was signing forms as the authorizing official. Finally, the form provides space for contact information in case the employer has any doubts about the validity of the order or notice (see app. I, item f); however, we found that on forms sent by private firms, frequently the contact named is an employee of the firm and not an authorizing official who would be in a better position to verify the validity of the notice. 
Because of the difficulty of determining the validity of forms sent by private firms and the requirement that an employer begin withholding wages upon receipt of an order or notice, we found instances in which employers improperly withheld wages from a noncustodial parent’s paycheck on the basis of information from a private firm. In one case, the employer was properly withholding about $900 per month on the basis of a court order issued in October 2000. In March 2001, when the employer received a wage withholding notice from a private firm indicating that about $550 per month should be deducted from the employee’s income, the employer began withholding that amount as well. The employee’s attorney determined that the March 2001 wage withholding notice was based on a temporary order that expired in April 1999. On the basis of this information, the employer stopped withholding the amount specified in the wage withholding notice. By that time, however, more than $2,000 had been improperly withheld from the employee’s wages. The employer eventually reimbursed the employee for the amount inappropriately withheld. In another case, a state agency was asked to investigate whether an employer, on the basis of a notice from a private firm, was improperly withholding wages. The state agency researched the matter but could not determine the basis for the wage withholding notice. Additionally, the state agency determined that the noncustodial parent did not owe any current child support and that any past-due support owed would have been minimal. As a result of the review, the private firm terminated the wage withholding notice. Most state agencies provided payment history information requested by private firms, but few provided confidential information on the location of noncustodial parents or their assets. Whereas most state agencies provided payment history information, which nearly all private firms requested, officials from 12 state agencies told us that they never shared payment history information with private firms. Few state agencies provided private firms confidential information from the FPLS. State agencies that did not provide this information, as well as state agencies that did, cited federal law as the basis for their decision. This inconsistency is due, in part, to the ambiguity in the law as it applies to private firms and to lack of specificity in federal regulation of, and guidance on, private firms’ access to this information. Twenty-two of 24 private firms that participated in our structured telephone interviews told us that they requested payment history information from state agencies to verify the amount of child support owed. Thirty-six of 54 state agency officials told us that they provided payment history information to private firms: 11 said that they always provided it upon request, and 25 said that they sometimes provided it. However, 12 state agency officials said that they never provided this information to private firms. Of the 25 that said they sometimes provided it, 22 said that they provided it only with consent from the custodial parent. The question was not relevant for the remaining 6 state agencies because either the agency was not the one that was responsible for maintaining payment history information or the agency had never received a request for the information from a private firm. Few private firms reported requesting information on the location and assets of noncustodial parents, but when this information was requested, most states did not provide it.
Such information is available through the FPLS, a federal database containing personal information on individuals nationwide. All state agencies have access to data on individuals in the FPLS, whether or not the individuals are residents of the agency’s state. Two-thirds of the private firms that participated in our structured telephone interviews told us that within the last year, they had not requested information from state agencies regarding the location or assets of noncustodial parents. The reasons that they cited most often for not requesting this information were that (1) state agencies will not provide the information, (2) there are better information sources, or (3) the information is not timely. Officials at the private firms that we visited gave similar reasons for not requesting location information from state agencies. For example, one private firm official told us that he did not ask for this information because the information was old and because it was unlikely that the state would have information not available from the other sources that he used. Furthermore, he stated that “for noncustodial parents who really do not want to be found, the National Directory of New Hires (NDNH), a key part of the FPLS, will not help because these parents change jobs frequently, are self-employed, or work .” Another private firm official stated that the NDNH was not useful because of (1) the transience of many noncustodial parents, (2) better ways of getting employment information, and (3) the high number of self-employed noncustodial parents that the database does not capture. Additionally, the official stated that he did not use FPLS data even for cases that he handled under a contract with a state agency, which gives him full access to FPLS data. Officials of the firm agreed that private information vendors provided more accurate information more quickly and more efficiently. When asked which state information sources would be helpful, another firm official responded that, apart from the quicker access to drivers’ records, private information vendors provided information more quickly than state sources, although the amount of information is limited. One state agency official who participated in our structured telephone interviews said that the state agency provided location information to private firms. Four state agency officials stated that they sometimes provided location information, but 45 state agency officials told us that they never did. Because state agencies can access the FPLS, state agencies can obtain data on individuals nationwide. A state agency that provides data to private firms can provide information on individuals residing in other states, including information that originated in another state. In our review of case files from private firms, we found instances where location data obtained from the FPLS were provided when the custodial parent and children lived in state A, the noncustodial parent lived in state B, and state C provided the data. Figure 7 summarizes the number of states that have provided payment history and location information to private firms. State agencies’ practices regarding the sharing of FPLS data with private firms were affected by differences in interpretation of whether federal law permits or requires state agencies to share FPLS data, the absence of guidance from OCSE, and state agency officials’ concerns about whether private firms would protect confidential data.
To prevent disclosure of personal information to unauthorized persons or for unauthorized purposes, the law strictly limits access to, and use of, FPLS data. The state agency official who provided FPLS data stated that federal law required the agency to provide the data, whereas some officials who did not provide such information said that federal law prevented them from releasing the data. Determining whether or not state agencies would be permitted or required to provide private firms access to FPLS data rests on the extent to which private firms are considered authorized persons under the pertinent provisions of the Social Security Act. The act defines an authorized person to include “the resident parent, legal guardian, attorney, or agent of a child. . . . as determined by regulations prescribed by the Secretary [of Health and Human Services].” Furthermore, it mandates that the FPLS shall, among other things, transmit to an authorized person information on the location of an individual who owes child support, including the individual’s social security number and address. Additionally, the FPLS must transmit information on an individual’s employer, wages, and assets. OCSE officials from the office of policy stated that current regulations and guidance do not explicitly address whether private firms have access to these data. They also stated that they were studying the issue and planned to issue clarifying guidance and that in the absence of OCSE guidance, it is up to each state agency to decide whether or not to provide FPLS data to private firms. Furthermore, state agency officials who refused to provide FPLS data to private firms stated that they were concerned about protecting the data. They said that they were not comfortable with sharing such confidential information with private firms and feared that the private firms might misuse the data. Private firms use many enforcement tools and information sources to collect child support. While OCSE considers wage withholding to be the most effective enforcement tool, the wage withholding form and the related guidance make it difficult for employers to determine the validity of wage withholding notices that they receive from private firms. As a result, noncustodial parents’ wages have been improperly withheld. In addition, some private firms are requesting and receiving confidential FPLS data. It is not clear, however, whether these firms are authorized to receive the data. A determination by OCSE would ensure that all firms and their clients were treated the same. Given the growth in the amount of child support owed, it is possible that more private firms will enter the business or that those in the business will acquire more clients. Therefore, it is important to clarify as soon as possible the areas in which there is ambiguity. Without guidance that takes into account the role of private firms in collecting child support and that clearly addresses issues relevant to them, private firms, state agencies, and third parties may take inappropriate actions in their efforts to collect child support. To improve the wage withholding process, we recommend that the secretary of HHS direct the commissioner of OCSE to make changes to the wage withholding guidance and form.
Specifically, OCSE should modify the guidance to (1) require that all parties, except state agencies, send a copy of the wage withholding order or other document authorizing wage withholding when sending a notice to employers, (2) allow employers to request the document(s) authorizing wage withholding when forms are not sent by state agencies, and (3) specify who should sign the form as the authorizing official. Additionally, OCSE should revise the form to clearly distinguish when the form is being sent by a state agency from when it is being sent as a notice by private firms or others. To ensure consistent and fair treatment of private firms and their clients, we recommend that the secretary of HHS direct the commissioner of OCSE to determine whether private firms should have access to FPLS data and issue explicit guidance addressing this issue. We received written comments on a draft of this report from the Department of Health and Human Services. These comments are reprinted in appendix III. The department generally agreed with our findings and said that it plans to address our recommendations. Specifically, the department plans to clarify the income withholding form and instructions and address through regulation, or other appropriate means, whether private firms have access, through state agencies, to certain data in the FPLS. The department agreed with our finding that OCSE’s data understate the amount of child support owed, but the department was concerned that the reader may attribute this finding to OCSE negligence and said that it would be better if we reported that OCSE data do not represent all child support owed; we did this. The department also stated that it is misleading for GAO to focus on unpaid child support accumulated since the program was established 27 years ago. In addition, the department stated that, for a number of reasons, some of the accumulated child support can never be collected. We noted in the body of the report that the total child support owed includes amounts unpaid since the inception of the program. We did not change the report to address the statement concerning the large amounts of child support that can never be collected, because the report cites the reasons that private firm and state agency officials gave us for not being able to collect some child support. Additionally, the department stated that we partially identified the reasons for the continued increase in uncollected child support. The department noted that other reasons include interest on unpaid child support, more accurate reporting, and child support awards that low-income fathers are unable to pay. We noted in the report that the amount of unpaid child support includes interest added by some states. We did not change the report to address the statement about data accuracy, because we did not determine whether the reliability of OCSE’s data has improved. Furthermore, we did not change the report to address whether amounts have been awarded that low-income fathers are unable to pay, because we did not examine this issue. However, we reported that the lack of income or assets by noncustodial parents was a primary reason cited by private firm and state agency officials for being unable to collect child support. The department provided technical comments, which have been incorporated in the report as appropriate. As agreed with your office, we will make no further distribution of this report until 30 days after its issue date, unless you publicly release the contents earlier.
At that time, we will send copies of this report to appropriate congressional committees, the secretary of HHS, and other interested parties. We will make copies available to others upon request. The report will also be available on GAO’s home page at www.gao.gov. If you or your staff have questions concerning this report, please call me on 202-512-8403. Key contributors are listed in appendix IV.

[Appendix I reproduces the standard form, “Order/Notice to Withhold Income for Child Support” (OMB 0970-0154). The form provides spaces for the employer’s/withholder’s federal EIN; the employee’s/obligor’s name, social security number, and case identifier; the obligee’s name; a checkbox requiring enrollment of the child(ren) in any health insurance coverage available through the employee’s/obligor’s employment; order information identifying the support order on which the Order/Notice is based; the amounts that the employer is required by law to deduct from the employee’s/obligor’s income until further notice for current and past-due child support, current and past-due medical support, spousal support, and other obligations, with equivalent weekly, biweekly, semimonthly, and monthly amounts for employers whose pay cycles do not match the ordered payment cycle; remittance details (EFT/EDI bank routing code, FIPS code, and the payee to whom checks are made payable); and the printed name, title, and signature of the authorized official(s). The instructions printed on the form follow.]

IMPORTANT: The person completing this form is advised that the information on this form may be shared with the obligor. If checked, you are required to provide a copy of this form to your employee. If your employee works in a state that is different from the state that issued this order, a copy must be provided to your employee even if the box is not checked. We appreciate the voluntary compliance of Federally recognized Indian tribes, tribally-owned businesses, and Indian-owned businesses located on a reservation that choose to withhold in accordance with this notice. Priority: Withholding under this Order/Notice has priority over any other legal process under State law against the same income. Federal tax levies in effect before receipt of this order have priority. If there are Federal tax levies in effect, please contact the State Child Support Enforcement Agency or party listed in number 12 below. Combining Payments: You can combine withheld amounts from more than one employee’s/obligor's income in a single payment to each agency/party requesting withholding. You must, however, separately identify the portion of the single payment that is attributable to each employee/obligor. Reporting the Paydate/Date of Withholding: You must report the paydate/date of withholding when sending the payment. The paydate/date of withholding is the date on which the amount was withheld from the employee's wages. You must comply with the law of the state of the employee's/obligor's principal place of employment with respect to the time periods within which you must implement the withholding order and forward the support payments.
Employee/Obligor with Multiple Support Withholdings: If there is more than one Order/Notice to Withhold Income for Child Support against this employee/obligor and you are unable to honor all support Order/Notices due to Federal or State withholding limits, you must follow the law of the state of the employee's/obligor's principal place of employment. You must honor all Order/Notices to the greatest extent possible. (See #10 below.) Termination Notification: You must promptly notify the Child Support Enforcement Agency or payee when the employee/obligor no longer works for you. Please provide the information requested and return a complete copy of this order/notice to the Child Support Enforcement Agency or payee. [The form provides spaces for the employee’s/obligor’s name, case identifier, date of separation from employment, last known home address, and new employer/address.] Lump Sum Payments: You may be required to report and withhold from lump sum payments such as bonuses, commissions, or severance pay. If you have any questions about lump sum payments, contact the person or authority below. Liability: If you have any doubts about the validity of the Order/Notice, contact the agency or person listed below. If you fail to withhold income as the Order/Notice directs, you are liable for both the accumulated amount you should have withheld from the employee’s/obligor's income and any other penalties set by State law. Anti-discrimination: You are subject to a fine determined under State law for discharging an employee/obligor from employment, refusing to employ, or taking disciplinary action against any employee/obligor because of a child support withholding.

. . . changes in names by child support collection firms. We identified about 60 private child support collection firms during this 7-month period, located in about 20 states in all regions of the country. We followed up with telephone calls to the firms. Some firms provided a telephone contact on their Web site. For those that did not, we obtained the telephone number from experts, from other firms, or through company search information on the Internet. Some of the firms that advertised on the Internet told us when we called them that they had never collected, or were no longer collecting, child support. In addition to finding firms through Internet searches, we also found some firms through lists provided by knowledgeable people and through telephone listings. Experts, advocates for noncustodial parents and custodial parents, and industry representatives informed us that many private firms operated for very limited periods of time or changed company names or structure. In fact, we identified several companies that had had at least one name change or structural change. As firms have increased their use of the Internet, some have used their Internet address as a business name for some or all of their business. Interviewees provided the names of eight companies that could not be verified either on the Internet or in telephone listings. We did not include these firms, assuming that they were no longer in business. In September, we could not verify, either through an Internet search or by telephone, the existence of two companies that had appeared in earlier lists. To develop information about the characteristics, collection experiences, information sources, collection practices, and enforcement tools of state agencies and of private child support collection firms, we used two separate, though similar, structured interview guides.
We used information gathered during site visits to develop the interview guides and then used the guides to conduct structured telephone interviews with each of the state agencies and most of the private firms on our lists. Both at the state agencies and at the private firms, we discussed our topics with the head of the agency or another official designated to speak for the head of the agency. Because state agencies are larger, more complex organizations than private firms, we transmitted a copy of the questions in advance to state agencies, as our pretesting had shown that this practice greatly facilitated state officials’ ability to respond to the questions. At the time of our structured telephone interviews, we were able to confirm the existence of 38 firms that engaged in child support collection. We attempted to conduct structured telephone interviews with these firms. We either talked with firm employees or left messages explaining our work and asking the firms to participate in our study. Twenty-four of the private child support collection firms (63 percent) responded to our requests for interviews. We also used the two structured telephone interviews to develop information about whether state agencies provided information requested by private child support collection firms. After gathering and analyzing the data obtained from our structured telephone interviews and visits, we compared the caseload characteristics, collection experiences, collection practices, and information sources of private firms with those of state agencies. Because of the vast differences in the characteristics of their cases, however, we did not compare the average time that it took for private firms and state agencies to collect child support. Further, we could not determine whether greater access to information and enforcement tools would increase private firms’ collections or improve their effectiveness because there are many factors involved in child support cases, and these factors can vary with each case. In addition, we visited four private firms and two state agencies, where we interviewed managers, reviewed operating policies and practices, and obtained case file data. These firms were included in the structured telephone interviews as well. We randomly selected cases to review from among all those that were begun during calendar year 2000, reasoning that this would allow enough time for activity in the cases by the time of our review in July and August 2001. We attempted to review 30 cases in each location, assuming that this would be sufficient to allow us to understand the basic collection processes and information sources; however, these samples were not sufficient to project our findings to the agencies’ caseloads. These site visits provided additional information about agency characteristics, collection experiences, information sources, collection practices, and enforcement tools. Our choices of states to visit were based on location and overall child support collections. We chose states that were among the top ten in child support collections. We visited Texas because a disproportionate number of private child support collection firms (14) are located there. We visited Ohio because it is in a different region from Texas and it is also one of the top states in total collections. In each of the places we visited, the state agencies and private firms cooperated fully with our research efforts, making staff available for interviews and allowing us to review case files. 
In addition to those named above, the following individuals made important contributions to this report: Rebecca A. Ackley, Barbara W. Alsip, Richard P. Burkard, Kopp F. Michelotti, James M. Rebbe, N. Kim Scotten, John G. Smale, Jr., and James P. Wright.
To increase child support collections, Congress has considered proposals to improve the ability of private firms to gather information to help locate noncustodial parents and enforce the payment of child support. At the end of fiscal year 2000, the Office of Child Support Enforcement (OCSE) indicated that $89 billion in child support was owed but unpaid--a 96-percent increase since the end of fiscal year 1996. GAO believes that this amount is understated. Thousands of private and public sector entities can collect child support. Both private firms and state agencies reported collections from about 60 percent of their cases. Twenty-two of the 24 private firms GAO reviewed reported that they relied on private information vendors--commercial firms that sell information such as addresses, telephone numbers, and social security numbers--as their primary information source, whereas about one-third of state agencies reported using this source. State agencies relied heavily on state and federal automated databases to locate noncustodial parents and their assets. Additionally, both private firms and state agencies reported calling noncustodial parents to collect child support. However, only the private firms called third parties, such as relatives and neighbors of noncustodial parents, to persuade them to prevail upon the noncustodial parent to make payments. The same enforcement tools are available to private firms and state agencies, but the processes that they follow in using these tools often differ. Private firms, however, cannot intercept federal tax refunds. Officials from both private firms and state agencies reported that the tool they most often used was wage withholding. However, the form and related guidance developed by OCSE for use in wage withholding make it difficult for employers to determine whether it is proper to begin withholding wages. Most of the state agencies had not provided information on noncustodial parents' location or assets from the Federal Parent Locator Service (FPLS). Practices for sharing FPLS data with private firms were affected by differences in interpretation of whether federal law permits or requires state agencies to share the data.
DOD defines its logistics mission, including supply chain management, as supporting the projection and sustainment of a ready, capable force through globally responsive, operationally precise, and cost-effective joint logistics support for America’s warfighters. Supply chain management is the operation of a continuous and comprehensive logistics process, from initial customer order for materiel or services to the ultimate satisfaction of the customer’s requirements. It is DOD’s goal to have an effective and efficient supply chain, and the department’s current improvement efforts are aimed at improving supply chain processes, synchronizing the supply chain from end to end, and adopting challenging but achievable standards for each element of the supply chain. Many organizations within DOD have important roles and responsibilities for supply chain management, and these responsibilities are spread across multiple components with separate funding and management of logistics resources and systems. The Office of the Under Secretary of Defense for Acquisition, Technology and Logistics serves as the principal staff assistant and advisor to the Secretary of Defense for all matters relating to defense logistics, among other duties. The Secretary of Defense also designated the Under Secretary of Defense for Acquisition, Technology and Logistics as the department’s Defense Logistics Executive with overall responsibility for improving and maintaining the defense logistics and supply chain system. The Assistant Secretary of Defense for Logistics and Materiel Readiness, under the authority, direction, and control of the Under Secretary of Defense for Acquisition, Technology and Logistics, serves as the principal logistics official within the senior management of the department. Within the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness, the Deputy Assistant Secretary of Defense for Supply Chain Integration improves the integration of the DOD supply chain through policy development and facilitates component implementation of supply chain management practices, among other duties. Each of the military departments is separately organized under its own Secretary. Subject to the authority, direction, and control of the Secretary of Defense, the Secretaries of the military departments are responsible for, among other things, organizing, training, and equipping their forces. Additionally, according to a DOD directive, each military department Secretary is responsible for preparing and submitting budgets for their respective department, justifying approved budget requests before Congress, and administering the funds made available for maintaining, equipping, and training their forces. Another important organization in supply chain management is the Defense Logistics Agency (DLA), which purchases and provides nearly all of the consumable items needed by the military, including a majority of the spare parts needed to maintain weapon systems and other equipment. During joint military operations, the Joint Staff’s logistics directorate (J-4) is the principal joint staff organization responsible for integrating logistics planning and execution in support of joint operations. In carrying out this responsibility, the J-4 relies on various DOD components, including the military services, DLA, and the U.S. Transportation Command (TRANSCOM), to provide the logistics resources and systems needed to support U.S. forces.
Specifically, DOD’s doctrine governing logistics in joint operations states that DLA and the military services share responsibilities as the suppliers of the equipment and supplies needed by the joint force for sustained logistic readiness. It further states that as the suppliers, they are responsible for delivering the right forces and materiel, at the right place and time, to give the components of the joint force what they require, when they need it. TRANSCOM, in addition to its responsibilities for transporting equipment and supplies in support of military operations, is designated as the distribution process owner for DOD. The role of the distribution process owner is to, among other things, oversee the overall effectiveness, efficiency, and alignment of departmentwide distribution activities, including force projection, sustainment, and redeployment/retrograde operations. DOD also has two senior-level governance bodies for logistics and supply chain management—the Joint Logistics Board and the Supply Chain Executive Steering Committee. The Joint Logistics Board reviews the status of the logistics portfolio and the effectiveness of the defensewide logistics chain in providing support to the warfighter. The Joint Logistics Board is co-chaired by the Assistant Secretary of Defense for Logistics and Materiel Readiness and the Joint Staff Director of Logistics, and has senior-level participants from the military services, combatant commands, and DLA. DOD officials stated that the Supply Chain Executive Steering Committee is another important executive-level governance body for oversight of improvement efforts. The Executive Steering Committee is chaired by the Deputy Assistant Secretary of Defense for Supply Chain Integration and has participants from many of the same DOD organizations as the Joint Logistics Board. The department’s Chief Management Officer (CMO) and Deputy CMO are senior-level officials with broad oversight responsibilities across defense business operations, which include supply chain management. They are responsible for improving the efficiency and effectiveness of these business operations. For example, they oversee the development and implementation of DOD’s Strategic Management Plan, which includes supply chain management and other business operations areas such as business system modernization and financial management. DOD maintains military forces with unparalleled combat and support capabilities; however, it also continues to confront long-standing management problems related to the business operations that support these forces. These business operations include—in addition to supply chain management—financial management, business system modernization, and overall defense business transformation, among others. We have identified DOD supply chain management as a high-risk area due to weaknesses both in the management of supply inventories and in responsiveness to warfighter requirements. Inventory management problems have included (1) high levels of inventory beyond that needed to support current requirements and future demands and (2) ineffective and inefficient inventory management practices. In addition, we have reported on shortages of critical items and other supply support problems during the early operations in Iraq, as well as on the numerous logistics challenges that DOD faces in supporting forces in Afghanistan.
We initiated our high-risk list and biennial updates to focus attention on government operations that we identified as being at high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement, as well as areas that have a need for broad-based transformations to address major economic, efficiency, or effectiveness challenges. The high-risk list serves to identify serious weaknesses in areas that involve substantial resources and provide critical services to the public. Solutions to high-risk problems offer the potential to save billions of dollars, improve service to the public, and strengthen the performance and accountability of the U.S. government. Removal of a high-risk designation may be considered when legislative and agency actions result in significant and sustainable progress toward resolving a high-risk problem. Over time, we have removed the high-risk designations of 21 programs or operations. When we review an agency’s actions taken to address high-risk challenges, we assess the actions against five criteria: (1) a demonstrated strong commitment to and top leadership support for addressing problems, (2) the capacity to address problems, (3) a corrective action plan that provides for substantially completing corrective measures in the near term, (4) a program to monitor and independently validate the effectiveness and sustainability of corrective measures, and (5) demonstrated progress in implementing corrective measures. With respect to supply chain management, we found in our most recent update of the high-risk series that DOD generally met the first two criteria. That is, DOD demonstrated top leadership support for addressing its supply chain management weaknesses, and it has the people and resources necessary to do so. We found that DOD partially met the other three criteria. On the basis of our prior work, we have recommended that DOD develop an integrated, comprehensive plan for improving logistics, to include supply chain management. Our prior work has shown that strategic planning is the foundation for defining what an agency seeks to accomplish, identifying the strategies it will use to achieve desired results, determining how well it succeeds in reaching results-oriented goals, and achieving objectives. Combined with effective leadership, strategic planning provides decision makers with a framework to guide program efforts and the means to determine if these efforts are achieving the desired results. Characteristics of an effective strategic plan include a comprehensive mission statement; problem definition, scope, and methodology; goals and objectives; activities, milestones, and performance measures; resources and investments; organizational roles, responsibilities, and coordination; and key external factors that could affect the achievement of goals. Over the last several years, DOD has issued a series of strategic planning documents for logistics and supply chain management. For example, DOD issued the first iteration of its Supply Chain Management Improvement Plan in 2005 to address some of the systemic weaknesses highlighted in our reports. DOD subsequently updated that plan on a periodic basis. Also in 2005, DOD produced its Focused Logistics Roadmap, which catalogued current efforts and initiatives. In 2008, DOD released its Logistics Roadmap with the intent of providing a more coherent and authoritative framework for logistics improvement efforts, including supply chain management.
While these plans have differed in scope and focus, they have typically included a number of high-level goals and related initiatives addressing aspects of supply chain management. These prior plans represented positive steps toward resolving weaknesses in supply chain management. However, our reviews of the plans found that they fell short of providing an integrated, comprehensive strategy for improving logistics, including supply chain management. The plans, for example, had some deficiencies that reduced their usefulness for guiding and overseeing improvements. Among other things, the plans did not identify the scope of logistics problems or the capability gaps they sought to address, provide a basis for determining funding priorities among various initiatives, or clearly link to logistics decision-making processes. Most recently, DOD issued its 2010 Logistics Strategic Plan and indicated a commitment to update this plan annually. The plan, which supersedes the prior plans issued by the department, identifies four overarching logistics goals, including one goal that specifically addresses supply chain management. We testified on this plan in July 2010 and identified some of the same deficiencies found in previous plans. DOD has developed and begun to implement a corrective action plan for requirements forecasting, one of the three major focus areas we identified as needing improvement in supply chain management. Specifically, DOD’s Comprehensive Inventory Management Improvement Plan, issued in October 2010 in response to a statutory mandate, includes developing more accurate demand forecasting as a key improvement effort for the department. On the basis of our analysis, we believe this document can serve as a corrective action plan for the requirements forecasting focus area. Corrective action plans are critical to resolving weaknesses in high-risk areas. Such plans should (1) define root causes of problems, (2) identify effective solutions, and (3) provide for substantially completing corrective measures in the near term, including steps necessary to implement solutions. DOD’s inventory management improvement plan is aimed at reducing excess inventory and contains nine individual sub-plans that address a range of inventory management problems. One sub-plan focuses on improving demand forecasting accuracy and the setting of inventory levels across the department. We have previously reported that the mismatch between inventory levels and requirements is due largely to inaccurate demand forecasts, and DOD acknowledged in its 2010 Logistics Strategic Plan that inaccurate requirements forecasting continues to be a weakness within its supply chain. The Comprehensive Inventory Management Improvement Plan addresses all three of the general elements of a corrective action plan (see table 1). The plan defines the root causes of problems in demand forecasting, identifies solutions to improve the department’s demand forecasting processes and procedures, and provides steps to achieve these solutions. As we noted earlier, effective strategic planning guides program improvement efforts and provides the means to determine if these efforts are achieving the desired results.
Characteristics of effective strategic planning include a comprehensive mission statement; problem definition, scope, and methodology; goals and objectives; activities, milestones, and performance measures; resources and investments; organizational roles, responsibilities, and coordination; and key external factors that could affect the achievement of goals. We reported in January 2011 that DOD’s Comprehensive Inventory Management Improvement Plan addresses or partially addresses all of these characteristics and that it represents an important step for DOD in its efforts to improve its inventory management practices. Further, the plan contains an appendix that details how other DOD strategies, plans, or efforts relate to its various sub-plans. Additionally, it describes the process that will be used to implement the plan and monitor progress against performance targets. While this inventory management improvement plan contains both the elements of a corrective action plan and the characteristics of effective strategic planning, effective implementation will be critical for achieving expected outcomes. Implementation will be challenged by several issues, such as aggressive time lines and benchmarks and the implementation of certain automated business systems. DOD has not developed corrective action plans for two other supply chain management focus areas: asset visibility and materiel distribution. DOD has plans that address aspects of these two focus areas, but officials could not identify plans for either area that address key problems and solutions in a comprehensive, integrated manner. Challenges within these two focus areas are often interrelated and affect support to the warfighter. For example, difficulties or inaccuracies in asset visibility can cause delays in the distribution of supplies to the warfighter. Until the department develops and implements corrective action plans for these remaining two focus areas, DOD may have difficulty resolving long-standing weaknesses in supply chain management. Recent reviews and audits have pointed to continuing problems with asset visibility and materiel distribution that have affected supply support to the warfighter. In a recent internal DOD review of joint supply issues in theater, the department acknowledged that it had insufficient visibility of assets in theater, which can result in potential inventory sources being overlooked because of lack of visibility or service ownership, as well as limited visibility of assets in transit. In addition, a December 2010 Army Audit Agency report found that despite having policies and procedures in place for identifying, inspecting, and repairing containers, personnel in Iraq sometimes did not comply with the policies and failed to correctly inspect the condition of containers or update this information in computer systems. As a result, the Army did not have an accurate accounting of containers that were in good condition for supporting the ongoing drawdown in Iraq and meeting time frames for that withdrawal. In a prior review of supply support in Afghanistan, we reported that DOD had been challenged by several materiel distribution issues, such as the transportation of cargo through neighboring countries and around Afghanistan, limited airfield infrastructure, limited storage capacity at logistics hubs, and difficulties in synchronizing the arrival of units and equipment. DOD had undertaken some efforts to mitigate these challenges, such as expanding cargo areas at some distribution hubs.
Later this year we will report on the extent to which DOD continues to experience challenges with asset visibility and materiel distribution in Afghanistan. The 2010 Logistics Strategic Plan indicates that improving asset visibility and materiel distribution remain priorities for the department; however, the plan does not, by itself, constitute a corrective action plan to resolve supply chain management weaknesses because it lacks detailed information needed to guide and oversee improvement efforts. Regarding asset visibility, the Logistics Strategic Plan indicates that two priorities for the department are implementing a global container management policy and implementing radio frequency identification. Similarly, the Logistics Strategic Plan includes improvement initiatives for materiel distribution. However, the plan does not discuss the root causes of weaknesses in either asset visibility or materiel distribution, identify the extent to which the weaknesses are present, detail steps for implementing improvement initiatives and thus achieving solutions, or contain information (such as milestones, performance information, benchmarks, and targets) necessary to gauge the department’s progress in implementing these initiatives and achieving outcomes. DOD has not developed corrective action plans for asset visibility and materiel distribution because senior-level officials considered prior strategic plans and initiatives sufficient to address high-risk challenges in these areas. However, there is some indication that DOD may place more emphasis on developing more comprehensive, integrated plans in the future. In our review of DOD’s Comprehensive Inventory Management Improvement Plan, we noted that officials from the Office of the Secretary of Defense and DOD components provided considerable management focus and coordination across stakeholder organizations to develop that plan. In addition, during the course of our current review, a senior DOD logistics official stated that the department began an effort in January 2011 to more comprehensively review the current state of asset visibility and to develop a plan to guide future improvements in this focus area. This official expected that the asset visibility plan would be developed with the same collaborative approach as was used in the development of the inventory management plan and that the two plans would be similar in their degree of detail. Further, the senior official stated that there were ongoing initiatives in the department that could provide the foundation for a similar plan for addressing weaknesses in materiel distribution, but that this effort was less far along than the asset visibility effort in terms of developing a plan to guide improvements. Recent actions by the Secretary of Defense indicate that the department intends to take additional steps aimed at achieving cost efficiencies in these two focus areas. In a March 14, 2011, memorandum, the Secretary outlined the steps that DOD plans to take to reduce inefficiencies and eliminate duplication with respect to in-transit asset visibility. The memorandum required TRANSCOM to prepare an implementation plan for approval by the Chairman of the Joint Chiefs of Staff that, among other things, would designate TRANSCOM as the department’s lead for improving in-transit asset visibility by synchronizing ongoing improvement initiatives and eliminating duplication and nonstandard practices among DOD components.
The same memorandum indicated that TRANSCOM should also prepare an implementation plan that, if approved, would require the military services to coordinate more closely with distribution partners on decisions regarding distribution. Specifically, the memorandum noted that the implementation plan would require the services to use the distribution process owner governance structure to coordinate decisions that affect distribution and deployment capabilities. In its 2010 Logistics Strategic Plan, DOD outlined a performance management framework to provide guidance and oversight of logistics improvement efforts, including supply chain improvement efforts. The plan states that the framework will be used to measure, track, and report progress in its improvement efforts. Our prior work has shown that in order for agencies to address high-risk challenges, they need to institute a program to monitor and validate the effectiveness and sustainability of corrective actions. DOD’s framework consists of a six-step process (see table 2) and offers a new management tool that may enable DOD to manage performance in supply chain management. For example, the framework refers to developing measures and targets that are tied to goals and initiatives, and it calls for an ongoing assessment and feedback process that could help to ensure that improvement efforts are effective and staying on track. Furthermore, the first step of the framework is consistent with the development of corrective action plans for high-risk areas, as discussed in the previous section of this report. In addition, the framework replicates the performance management framework described in the department’s overarching Strategic Management Plan for business operations. DOD senior officials within the Supply Chain Integration Office expect the congruence between the two plans to have a positive, behavior-shaping influence on DOD organizations. Although DOD outlined a performance management framework for logistics, it has not instituted this framework across the logistics enterprise. We did not find evidence during our review that DOD was yet using its logistics framework to guide and oversee improvement efforts. The department has not instituted the framework because key elements have not been fully defined and developed. Specifically, DOD has not (1) developed and issued implementation guidance; (2) carried out a strategy for communicating information about supply chain improvement efforts, performance, and progress; or (3) clearly defined the roles and responsibilities of senior-level logistics governance bodies and chief management officials. We have found some of these same weaknesses in DOD’s overarching performance management framework described in the Strategic Management Plan for business operations. Until these elements are fully defined and developed, DOD may not be in a position to effectively use this new management tool to monitor and validate the effectiveness and sustainability of corrective actions. Other than the general outline of the performance management framework provided in the 2010 Logistics Strategic Plan, DOD has not developed and issued detailed implementation guidance to affected stakeholders. DOD and its components commonly issue directives, instructions, regulations, and other guidance to direct the implementation of new policies and programs.
DOD officials from the Office of Supply Chain Integration stated that guidance on the performance management framework will be issued as necessary based on the results of initial assessments. However, no guidance has been issued to date, and procedures do not exist for implementing each of the six steps in the framework. For example, no guidance exists on the process by which stakeholders will reach consensus on setting performance targets, aligning efforts, and assessing and reporting results. The Logistics Strategic Plan states that strategic planning is a collaborative effort among the Office of the Secretary of Defense, DOD components, and other stakeholders. However, the plan does not provide detail describing how or when this collaboration will occur. Questions about how DOD intended to implement the Logistics Strategic Plan were raised during a July 2010 congressional hearing. In questions for the record submitted to DOD, a senior logistics official was asked to explain how the department intends to translate the general discussion in the Logistics Strategic Plan into specific guidance for the service and agency components. The official responded that the components’ strategic plans will align with departmentwide priorities and that top-level policy changes will cascade into component-level processes. However, the response did not discuss how or when that process will occur. Reporting on performance and progress is identified as a step within the performance management framework; however, DOD has not carried out a strategy for communicating its implementation plans and the results of its supply chain improvement efforts. For example, DOD does not have a communications strategy in place to inform internal and external stakeholders of current efforts, progress made, remaining problems, and next steps needed for further progress. Our prior work has shown that a communication strategy that creates shared expectations and reports progress is important for results-oriented management and transformation. According to the 2010 Logistics Strategic Plan, DOD will develop a management report to document the department’s assessments of the plan’s implementation. However, a management report has not yet been issued, and it is unclear what types of information DOD intends to include in this management report or how information in the report will be used by decision makers as part of the performance management framework. In addition, DOD officials stated that the management report would be used informally among internal stakeholders and that they did not plan on sharing the performance report with external stakeholders such as Congress. DOD has not clearly defined the supply chain management improvement-related roles and responsibilities of senior-level logistics governance bodies, the Chief Management Officer (CMO), and the Deputy Chief Management Officer (Deputy CMO) in the performance management framework for logistics. Our prior work on results-oriented management and organizational transformation cites the importance of establishing clearly defined roles and responsibilities, and we previously testified that it was unclear how the 2010 Logistics Strategic Plan would be used within the existing decision-making and governance structure for logistics to assist decision makers and influence resource decisions and priorities.
The Logistics Strategic Plan calls for senior-level logistics governance bodies, including the Joint Logistics Board and the Supply Chain Executive Steering Committee, to oversee implementation of improvements under the new performance management framework. However, the exact roles and responsibilities of these bodies are not defined in the plan. DOD issued a charter for the Joint Logistics Board in 2010 that broadly defines the roles and responsibilities of the board, and a draft charter exists for the Supply Chain Executive Steering Committee. However, neither charter specifically defines or describes the participation of those bodies in the performance management framework for logistics. For example, the charters do not clarify how the governance bodies will provide oversight of the key initiatives in the Logistics Strategic Plan. Moreover, it is not clear how the bodies will play a role in implementing individual steps in the framework, such as setting targets and monitoring performance against those targets. Both bodies appear to provide oversight primarily through periodic briefings rather than systematic monitoring of performance measures and improvement initiatives. For example, we found that the Joint Logistics Board provides some oversight of issues such as the development of a new joint supply support concept, ongoing and new joint logistics initiatives, and activities of joint groups and commands. The Supply Chain Executive Steering Committee maintains visibility over issues such as the development of performance metrics and some supply chain management initiatives. Although both bodies have met regularly, our review of records from these meetings indicates that neither body has exercised comprehensive and systematic oversight across all key improvement initiatives for supply chain management. Specifically, our review of the Joint Logistics Board’s 2010 meeting minutes showed that the board discussed and received status briefings on 4 of the 12 supply chain improvement initiatives identified as key priorities in the 2010 Logistics Strategic Plan. Similarly, the agendas of the Supply Chain Executive Steering Committee show that the committee received status briefings on 3 key supply chain improvement initiatives. The CMO and Deputy CMO are in a unique position to coordinate improvement efforts across various business operations, ensure that business-related plans are aligned, and monitor progress in implementing these plans, but their roles and responsibilities as they specifically relate to participation in the performance management framework for logistics have not been clearly defined. DOD officials have stated that logistics governance bodies are to oversee improvement efforts within the logistics enterprise, but DOD directives provide that the CMO and Deputy CMO also have responsibilities related to improving the efficiency and effectiveness of the department’s business operations. However, it is unclear what roles and responsibilities the CMO and Deputy CMO should have within the performance management framework for logistics to ensure that key logistics and supply chain management initiatives deemed departmental priorities realize their intended effectiveness and efficiency improvements. We have previously reported that additional opportunities exist for the CMO, assisted by the Deputy CMO, to provide the leadership needed to achieve business-related goals, including supply chain management goals.
For example, the Deputy CMO stated that she was not involved in developing or reviewing the Comprehensive Inventory Management Improvement Plan. Although she did review the Logistics Strategic Plan, this plan lacked clear performance measurement information and other detailed information, as noted earlier in this report. Moreover, successful resolution of weaknesses in supply chain management depends on improvements in some of DOD’s other business operations, such as business systems modernization and financial management. We have previously recommended that DOD more clearly define how the CMO, Deputy CMO, and the military departments will reach consensus on business priorities, coordinate review and approval of updates to plans, synchronize the development of plans with the budget process, monitor implementation of reform initiatives, and report periodically on progress toward achieving established goals. Federal government standards and best practices highlight the importance of tracking and demonstrating progress in programs and activities through the development and implementation of performance measures. Among other things, those standards indicate the importance of establishing and monitoring performance measures to improve program effectiveness and accountability for results. Incorporating outcome-based performance measures is also a best practice for effective strategic planning, and performance measures enable an agency to assess accomplishments, strike a balance among competing priorities, and make decisions to improve program performance, realign processes, and assign accountability. Further, our prior work has shown that in order to fully address high-risk challenges, agencies must be able to demonstrate progress achieved through corrective actions, which is possible through the reporting of performance measures. Characteristics of effective performance measures include having baseline or trend data for performance assessments, setting measurable targets for future performance, and establishing time frames for the achievement of goals. DOD logistics plans and policies also acknowledge an important role for performance measures. The 2010 Logistics Strategic Plan emphasizes performance management, and the need for performance measures is embedded in the performance management framework that is outlined in the plan. In addition, DOD’s supply chain regulation requires components to use metrics to evaluate the performance and cost of their supply chain operations; lays out requirements for those metrics; and directs that metrics address the enterprise, functional, and program or process levels of supply chain operations. The regulation also directs DOD components to develop data collection capabilities that support supply chain metrics. With respect to DOD’s prior logistics strategic planning efforts that have covered supply chain management and other areas, such as the Logistics Roadmap, we have recommended that the Under Secretary of Defense for Acquisition, Technology and Logistics develop, implement, and monitor outcome-oriented performance measures to assess progress toward achieving the objectives and goals identified in these plans. We have also recommended that DOD develop and implement outcome-oriented performance measures that address each of the three focus areas for supply chain improvement. DOD agreed with these recommendations, but performance measurement has continued to challenge DOD’s supply chain management, as discussed below.
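To make these characteristics concrete, the sketch below shows one way a tracking system might represent a measure that pairs a baseline, a measurable target, and a time frame. It is a hypothetical illustration only; the field names, the straight-line progress assumption, and the example values are ours, not DOD’s.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PerformanceMeasure:
        """Hypothetical record pairing the three characteristics of an
        effective measure: baseline data, a measurable target, and a
        time frame for achieving it."""
        name: str
        baseline: float  # value observed when the improvement effort began
        target: float    # measurable target for future performance
        start: date      # when tracking began
        due: date        # time frame for achieving the target

        def expected(self, as_of: date) -> float:
            """Value implied by straight-line progress from baseline to target."""
            frac = (as_of - self.start).days / (self.due - self.start).days
            return self.baseline + frac * (self.target - self.baseline)

        def on_track(self, current: float, as_of: date) -> bool:
            """Treat the measure as on track if the latest observed value
            meets the straight-line expectation (assumes higher is better)."""
            return current >= self.expected(as_of)

    # Example: a notional accuracy measure with a fiscal year 2012 target
    m = PerformanceMeasure("demand forecast accuracy", baseline=0.60,
                           target=0.75, start=date(2010, 10, 1),
                           due=date(2012, 9, 30))
    print(m.on_track(current=0.66, as_of=date(2011, 7, 1)))  # prints True

A measure that lacks any one of these fields, whether the baseline, the target, or the time frame, cannot support this kind of progress assessment.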
DOD and its components track many aspects of supply chain performance, but DOD does not have performance measures that assess the overall effectiveness and efficiency of the supply chain across the enterprise. DOD components individually track aspects of their own operations using certain performance measures. For example, TRANSCOM uses logistics response time to measure the time that passes between submission of a requisition for an item and the delivery of the item to the supply support activity. DLA uses a perfect order fulfillment metric to measure how well the end-to-end supply chain delivers the right part to the customer on time, in the correct quantity, and with no material deficiencies. The department consistently tracks one enterprisewide supply chain metric, customer wait time. DOD logistics officials stated that as of December 2010, they had increased the amount of performance information they regularly submit to the Deputy CMO for inclusion in the department’s performance budget. These measures include customer wait time by military service, perfect order fulfillment for DLA, and two measures related to inventory management. However, our prior work has found, and DOD has acknowledged, that additional measures are needed. In an effort to develop enterprisewide performance measures, DOD began an initiative in 2007 called the Joint Supply Chain Architecture to identify a hierarchy of performance measures to track the overall effectiveness and efficiency of the supply chain and to identify areas for improvement based on industry standards. Led by the Deputy Assistant Secretary of Defense for Supply Chain Integration and the Director of the Joint Chiefs of Staff Logistics Directorate, the Joint Supply Chain Architecture effort is identified in the 2010 Logistics Strategic Plan as a key initiative intended to promote process standardization, facilitate process integration, and define the enterprise framework. The Joint Supply Chain Architecture is based on the Supply Chain Operations Reference model, a process model that is a long-established best practice for commercial supply chains and that provides a method to evaluate and improve supply chains. We found that DOD has made progress with the initiative. The progress includes clarifying some common concepts across the various DOD supply chains and organizations. For instance, the architecture details the types of performance information that will feed into higher-level measures and identifies three possible enterprisewide measures—customer wait time, perfect order fulfillment, and total supply chain management cost. The measures focus on the speed, reliability, and efficiency of the supply chain, respectively. Two of these three measures, customer wait time and perfect order fulfillment, are not new and predate the Joint Supply Chain Architecture. DOD directed the implementation of the customer wait time metric as early as 2000 in a DOD instruction. Perfect order fulfillment is used by DLA, as noted above, but it is not used by any other DOD components or at the enterprisewide level. A total supply chain management cost metric is far from completion, and various officials stated that the meaningfulness of this measure is uncertain. Time frames for completion of a total supply chain management cost metric or an enterprisewide perfect order fulfillment metric have not been established.
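As a rough illustration of what two of these candidate measures compute, consider the following sketch. The record fields and the perfect-order test are illustrative assumptions on our part; DOD’s actual definitions are more detailed and, as discussed below, are not yet standardized across components.

    from datetime import date
    from statistics import mean

    # Hypothetical requisition records; the field names are illustrative only
    orders = [
        {"submitted": date(2011, 1, 3), "delivered": date(2011, 1, 20),
         "on_time": True, "right_item": True, "right_qty": True, "undamaged": True},
        {"submitted": date(2011, 1, 5), "delivered": date(2011, 2, 14),
         "on_time": False, "right_item": True, "right_qty": False, "undamaged": True},
    ]

    # Customer wait time (speed): average days from requisition to delivery
    cwt = mean((o["delivered"] - o["submitted"]).days for o in orders)

    # Perfect order fulfillment (reliability): share of orders meeting every
    # condition: on time, correct item and quantity, no material deficiency
    pof = mean(o["on_time"] and o["right_item"] and o["right_qty"] and o["undamaged"]
               for o in orders)

    print(f"customer wait time: {cwt:.1f} days; perfect order fulfillment: {pof:.0%}")

Customer wait time captures the speed dimension and perfect order fulfillment the reliability dimension; a total supply chain management cost metric would add the efficiency dimension but, as noted above, remains far from completion.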
DOD officials stated that the current focus of the Joint Supply Chain Architecture effort was to identify and validate the many data sources from across the supply chain needed to support the development of enterprisewide metrics. In developing the Comprehensive Inventory Management Improvement Plan, DOD made considerable progress in identifying departmentwide performance measures, including measures within the requirements forecasting focus area, by using a collaborative process involving stakeholders representing key DOD components. As part of its plan, DOD established a metrics working group responsible for developing needed measures that do not yet exist and for setting time frames for their use. The plan identifies two metrics, one intended to increase demand forecast accuracy and one to reduce the percentage of over-forecasting bias, that are to be developed by the end of fiscal year 2012. A similar collaborative process for defining performance measurement for asset visibility and materiel distribution has not yet occurred. For example, implementation of radio frequency identification technology has been identified as a priority for the department in various strategic planning documents. However, DOD has not established performance measures to assess the impact of its implementation, despite the significant initial investment of resources required to use the technology. When asked at a recent congressional hearing on supply chain management to detail the progress made in implementing passive radio frequency identification, the Principal Deputy Assistant Secretary of Defense for Logistics and Materiel Readiness included two examples of improvements, one being a reduction in the time to perform inventory at Tinker Air Force Base. However, DOD has not developed comprehensive, enterprisewide measures of implementation or results achieved. Data quality and a shared approach to performance measurement across organizations present challenges to DOD’s efforts to establish enterprisewide performance measures for all three focus areas of supply chain management. Ongoing efforts to modernize or replace DOD business information systems, including systems supporting supply chain management, are intended to improve data quality and data sharing within DOD components. However, we have found that data-quality problems persist and that these systems are not designed to routinely share data across organizational boundaries, such as among the military departments. Further, DOD’s information system modernization efforts have experienced significant delays and cost increases while projected benefits have not yet been achieved. Our recent review of DOD’s Comprehensive Inventory Management Improvement Plan revealed concerns about data reliability and availability, including delays in business system modernization, that could affect the department’s efforts to implement the plan. The department is further challenged because it does not have a common approach to developing and implementing performance measures that includes common definitions, data sources, and agreement regarding how to measure attributes of the supply chain across the enterprise. For example, we found that it could be difficult for the services and DLA to measure demand forecasting because their current approaches all differ. These factors have likewise been a challenge for the Joint Supply Chain Architecture initiative.
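The two forecasting metrics the plan calls for could, in principle, be computed along the lines of the sketch below. This is a minimal illustration under our own assumptions, not the metrics working group’s actual definitions, which remained under development as of our review.

    # Hypothetical monthly demand forecasts and actual demands for one item
    forecast = [120, 150, 130, 160]
    actual = [100, 140, 135, 120]

    # Demand forecast accuracy: one minus the mean absolute percentage error
    errors = [abs(f - a) / a for f, a in zip(forecast, actual)]
    accuracy = 1 - sum(errors) / len(errors)

    # Over-forecasting bias: share of periods in which the forecast exceeded
    # actual demand; a share near one-half would suggest no systematic bias
    over_share = sum(f > a for f, a in zip(forecast, actual)) / len(forecast)

    print(f"forecast accuracy: {accuracy:.0%}; over-forecast share: {over_share:.0%}")

Even formulas this simple hinge on definitional choices, such as the aggregation level, the forecast horizon, and which demands count, so components that define them differently will produce numbers that cannot be compared across the enterprise.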
For example, one weapons system official explained that customer wait time may be ambiguous because it can be calculated in different ways and under different definitions. Until DOD overcomes such challenges and establishes enterprisewide performance measures for assessing supply chain performance in the three focus areas for improvement identified in our high-risk series, the department may have difficulty in demonstrating progress resulting from its corrective actions. DOD has demonstrated two key ingredients for making further improvements in supply chain management—namely, top leadership support and access to the necessary people and resources. Additionally, through its new inventory management improvement plan, DOD has taken an important step toward improving requirements forecasting, one of the three focus areas where we have documented supply chain management weaknesses. Although implementation challenges remain to be addressed, the plan provides a path forward to improve DOD’s inventory management practices. The lack of corrective action plans for asset visibility and materiel distribution results in additional uncertainties regarding how promptly, effectively, and efficiently DOD will be able to address its systemic problems in supply chain management. The new performance management framework outlined in the 2010 Logistics Strategic Plan could be an effective management tool if it is instituted across the logistics enterprise. However, DOD has not taken action to provide implementation guidance, an effective communications strategy that provides transparency and accountability for improvement efforts, or well-defined and documented roles and responsibilities for key governance bodies and certain senior positions within its performance management framework for logistics. It is unclear how the Joint Logistics Board and the Supply Chain Executive Steering Committee will participate in the framework and how these two bodies will provide effective oversight of all key initiatives for supply chain management. Further, the roles and responsibilities of the department’s CMO and Deputy CMO, as they relate to the performance management framework for logistics and existing logistics governance bodies, are similarly unclear. Moreover, the department has not defined how the CMO and Deputy CMO will ensure alignment of supply chain management improvement plans and performance management with plans and performance management of other defense business operations. Without these additional actions, DOD may not be able to fully implement the framework and use it effectively as a tool for managing performance. Performance information is critical for developing and implementing both effective corrective action plans and the performance management framework, and DOD has demonstrated an ability to plan for developing and enhancing performance measurement in the inventory management area. Developing meaningful, appropriate enterprisewide measures is a difficult task, especially for an organization of the size and scope of DOD. Continued progress in defining needed performance measures for the requirements forecasting focus area, combined with the identification, development, and implementation of performance measures in the asset visibility and materiel distribution focus areas, could highlight progress and focus management attention on problems in those areas that span the supply chain enterprise.
In the absence of effective performance measures, DOD cannot be assured that corrective actions are achieving intended results. Further, without these measures, it will be difficult for DOD to demonstrate progress to external stakeholders, such as Congress, and show that resources are invested efficiently. We recommend that the Secretary of Defense take the following six actions to improve DOD’s supply chain management and address challenges in this high-risk area. To address remaining challenges in asset visibility and materiel distribution, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to develop and implement corrective action plans for improving these focus areas. As these two areas are closely interrelated, DOD may wish to consider creating a single comprehensive, integrated plan that addresses both focus areas for improvement. The corrective action plan or plans should (1) identify the scope and root causes of capability gaps and other problems, effective solutions, and actions to be taken to implement the solutions; (2) include the characteristics of effective strategic planning, including a mission statement; goals and related strategies (for example, objectives and activities); performance measures and associated milestones, benchmarks, and targets for improvement; resources and investments required for implementation; key external factors that could affect the achievement of goals; and the involvement of all key stakeholders in a collaborative process to develop and implement the plan; and (3) document how the department will integrate these plans with its other decision-making processes, delineate organizational roles and responsibilities, and support departmentwide priorities identified in higher-level strategic guidance (such as the Strategic Management Plan and Logistics Strategic Plan). To institute the performance management framework for guiding and overseeing supply chain management and other logistics improvement efforts, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to take the following three actions:
• Develop and issue detailed guidance to affected stakeholders involved in implementing the performance management framework for logistics.
• Develop and implement a communications strategy for documenting and reporting on the results of supply chain management improvement efforts. The strategy should be linked with corrective action plans, contain performance measurement information, and inform both internal and external stakeholders, including Congress.
• Revise the existing charter of the Joint Logistics Board and the draft charter of the Supply Chain Executive Steering Committee to define and describe how the governance bodies will participate in the performance management framework for logistics.
We also recommend that the Secretary of Defense clearly define the CMO’s and Deputy CMO’s roles and responsibilities as they specifically relate to (1) the performance management framework for logistics, including the establishment of corrective action plans and related performance measures; (2) existing governance bodies for logistics; and (3) the alignment of supply chain management improvement plans and performance management with those of DOD’s other business operations areas.
We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to use a collaborative process, involving all key stakeholders, to identify, develop, and implement enterprisewide performance measures needed to demonstrate progress in the focus areas of asset visibility and materiel distribution. These measures should be incorporated into corrective action plans and the performance management framework. In written comments on a draft of this report, DOD stated that it concurred with the overall intent of the report and specifically concurred or partially concurred with two of our six recommendations. However, the department did not concur with four of our recommendations. The Principal Deputy Assistant Secretary of Defense for Logistics and Materiel Readiness stated that DOD did not concur with three of our recommendations based on its ongoing major initiatives and did not concur with one recommendation that the department stated was addressed in existing policy. DOD concurred with our recommendation to issue guidance to all affected stakeholders involved in implementing the new performance management framework for logistics that was outlined in the 2010 Logistics Strategic Plan. DOD stated that guidance will be provided to components and applicable defense agencies in the last quarter of fiscal year 2011. DOD did not elaborate regarding the nature and scope of information to be included in this guidance. DOD partially concurred with our recommendation to revise the charters of the Joint Logistics Board and the Supply Chain Executive Steering Committee to define and describe how these governance bodies will participate in the performance management framework for logistics. DOD stated that the performance management framework is not explicitly described in the charters, but that the charters reflect that these bodies are to provide oversight, coordination, and information sharing for logistics initiatives and issues. DOD stated its view that no change is required for the Joint Logistics Board charter, but that the draft charter for the Supply Chain Executive Steering Committee will be revised to address its reviews of performance measures and initiatives designed to drive logistics improvements. We continue to believe that effective implementation of DOD’s new performance management framework for logistics will require departmentwide direction and oversight from its governance bodies to ensure that initiatives are staying on track and that progress toward goals is being made consistently throughout the department. As we discussed in our report, we found that neither of the two governance bodies has exercised comprehensive and systematic oversight across all the key improvement initiatives for supply chain management that were outlined in the 2010 Logistics Strategic Plan. Because DOD does not intend to revise the charters to define and describe how these bodies will participate in the department’s new performance management framework, it will be even more important that the bodies’ roles and responsibilities be made clear and explicit in the implementation guidance that DOD says it plans to issue for the performance management framework. In disagreeing with our other four recommendations, DOD indicated that its ongoing involvement in major improvement initiatives, as well as existing policy, is sufficient for addressing supply chain management problems. We disagree based on the findings discussed in this report.
Problems in supply chain management, including the three focus areas of requirements forecasting, asset visibility, and materiel distribution, are long-standing and complex. Identifying root causes and implementing effective solutions will require the involvement and coordination of multiple stakeholders across the department, as well as a strong effort to monitor, evaluate, and oversee improvements. Our recommendations are intended to promote a systemic, integrated, and enterprisewide approach to resolving problems in supply chain management. In addition, the recommendations are closely linked with the criteria and steps that agencies need to take to successfully institute changes across an enterprise and to have an area removed from GAO’s high-risk list. As noted in our report, with the issuance of the Comprehensive Inventory Management Improvement Plan in 2010, DOD took important initial positive steps to address challenges in the requirements forecasting focus area, as well as other areas of inventory management. We believe that a similar approach could also be effective in addressing challenges in asset visibility and materiel distribution. Our evaluation of DOD’s comments with regard to each of these four recommendations follows. DOD disagreed with our recommendation to develop and implement corrective action plans for the focus areas of asset visibility and materiel distribution. DOD stated that it did not agree with our recommendation because the department is already engaged in major efforts to improve asset visibility and materiel distribution. While DOD for many years has had improvement initiatives for certain challenges within these areas, we continue to believe that developing and implementing a corrective action plan for each of the remaining focus areas—or a single, integrated plan covering both areas—is critical to resolving supply chain management problems with a systemic, integrated, and enterprisewide approach. GAO’s criteria for removing the high-risk designation—for supply chain management and other programs—specifically call for corrective action plans that identify the root causes of problems, solutions to these problems, and steps to achieve these solutions. Moreover, an effective strategic planning process that results in a high-quality corrective action plan can provide clear direction for addressing DOD’s weaknesses in supply chain management. DOD commented that its involvement in major efforts to improve asset visibility and materiel distribution negates the need for a corrective action plan. DOD specifically refers to three efforts: (1) the Distribution Strategic Opportunities initiative, (2) the Distribution Network Optimization initiative, and (3) the Comprehensive Inventory Management Improvement Plan. DOD states that each of these efforts has specific goals, milestones, and targets, and involves key stakeholders. It is unclear why DOD, in its written comments, focuses on the first two efforts to the exclusion of other ongoing initiatives for improving distribution. During our review, DOD officials did not highlight these efforts as paramount, nor does the 2010 Logistics Strategic Plan characterize them as DOD’s most critical key initiatives. On the contrary, the 2010 Logistics Strategic Plan briefly describes the Distribution Strategic Opportunities initiative as an effort “to improve distribution across the enterprise” and includes it among several other initiatives the department has under way to improve supply chain processes.
The Logistics Strategic Plan provides no other explanation of this initiative; provides no goals, milestones, or targets associated with the initiative; and does not show how this initiative will enable the department to achieve high-level outcomes such as operating supply chains more effectively and efficiently. The plan, moreover, makes no specific mention of the second effort—the Distribution Network Optimization initiative—although information provided separately by the department indicates it is a sub-initiative under the Distribution Strategic Opportunities initiative. Furthermore, without a strategic planning process that examines root problems and capability gaps and results in a corrective action plan, it is unclear whether these initiatives alone are sufficient for addressing all major challenges in the asset visibility and materiel distribution focus areas. For example, it is unclear to what extent these initiatives address challenges in managing supply support in a joint theater of operations. It is also unclear whether the initiatives are intended to focus on improving asset visibility. As mentioned above, DOD has demonstrated an ability to carry out a collaborative strategic planning process, resulting in the issuance of its Comprehensive Inventory Management Improvement Plan. That plan identifies corrective actions that could, when implemented, effectively address the requirements forecasting focus area and other aspects of inventory management. We continue to believe that following a similar collaborative approach resulting in a corrective action plan or plans for the focus areas of asset visibility and materiel distribution would produce significant progress in addressing remaining challenges in the supply chain management high-risk area. DOD did not concur with our recommendation to develop and implement a communications strategy for documenting and reporting on the results of supply chain management improvement efforts. DOD stated that an additional strategy for documenting and reporting its progress is not required because the department’s senior logistics leadership is continuously engaged in communicating its goals and performance to internal and external stakeholders via governing bodies, public forums, and formal reporting to Congress. Further, DOD stated that it will continue to use monthly in-progress reviews of supply chain management improvement efforts as the communications strategy with the components. We continue to believe that DOD needs to report on the results and progress of its logistics and supply chain management improvement efforts. Such reporting can enhance accountability, help ensure that all stakeholders are aware of progress being made and areas needing further attention, and convey consistent direction throughout the department for follow-on actions. Communicating goals and reporting on performance are key steps that DOD identifies as part of the performance management framework for logistics outlined in the Logistics Strategic Plan. Further, DOD stated in this plan its commitment to issue a DOD Logistics Strategic Management Report to document the results of the assessments performed as part of the performance management framework. Given DOD’s response to our recommendation, it is unclear how the department plans to implement these aspects of its performance management framework.
As discussed in this report, a management report has not yet been issued, and it is unclear what types of information DOD intends to include in this report and how the information in the report will be used as part of the performance management framework. Further, DOD officials stated that the report would be used among internal stakeholders and that they did not plan to share the report with Congress. DOD did not concur with our recommendation to clearly define the CMO’s and Deputy CMO’s roles and responsibilities as they specifically relate to (1) the performance management framework for logistics, (2) existing governance bodies for logistics, and (3) the alignment of supply chain management improvement efforts with those of DOD’s other business operations areas. DOD stated that the recommended action is not required because the Deputy CMO’s roles and responsibilities are sufficiently documented in DOD guidance. We stated in our report that the CMO and Deputy CMO have broad responsibilities related to improving the efficiency and effectiveness of DOD’s business operations. However, we have previously reported that additional opportunities exist for the CMO and Deputy CMO to achieve business-related goals, including supply chain management goals. For example, we reported that the Deputy CMO was not involved in developing or reviewing the Comprehensive Inventory Management Improvement Plan. This plan was described in DOD’s comments on this draft report as one of the three major ongoing efforts to improve supply chain management. Further, neither the CMO nor the Deputy CMO attends meetings of the Joint Logistics Board or the Supply Chain Executive Steering Committee. Among the Deputy CMO’s responsibilities is participating as a member of senior governance councils; participation in senior logistics governance bodies therefore may provide more opportunities for closer collaboration and for involving the Deputy CMO in addressing challenges in supply chain management, especially those challenges that span business areas. DOD additionally stated in its comments that it did not concur with our recommendation because the logistics enterprise reports to the Under Secretary of Defense for Acquisition, Technology and Logistics, who has oversight and management responsibility for logistics. Our recommendation does not imply that oversight and management responsibility for logistics should be shifted to the CMO and Deputy CMO; however, these individuals are in a unique position to coordinate improvement efforts across all defense business areas. Therefore, these individuals need to have a clearly defined role, consistent with their overarching responsibilities in each business area, including logistics and supply chain management. DOD did not concur with our recommendation to use a collaborative approach to identify, develop, and implement enterprisewide performance measures needed to demonstrate progress in the focus areas of asset visibility and materiel distribution. DOD stated that no additional actions are required because enterprisewide performance measures have been and continue to be developed using a collaborative process involving all stakeholders. Further, DOD stated that the performance management framework is a process rather than a document of performance management.
In its comments, the department noted that it is following this process in a collaborative fashion involving all stakeholders in the identification, development, and implementation of enterprisewide performance measures to demonstrate progress in key areas, including asset visibility and materiel distribution. As noted in our report, DOD used a collaborative process to define existing and needed performance measures as part of the development of its Comprehensive Inventory Management Improvement Plan. We continue to believe that DOD should follow a similar, collaborative approach for the focus areas of asset visibility and materiel distribution. Our work has shown that, at this time, enterprisewide measures for these focus areas do not yet exist. DOD began an initiative in 2007 called the Joint Supply Chain Architecture to identify a hierarchy of performance measures. However, the only enterprisewide performance measure used across the department, customer wait time, predates this Joint Supply Chain Architecture initiative. Other enterprisewide measures identified by the initiative are not fully developed and may be some time from full implementation. We agree with DOD that the performance management framework is a process and not a document. Our report does not suggest otherwise. We recommended that once key performance measures for these focus areas are defined and implemented, they be incorporated as part of the process for managing improvement efforts within the performance management framework. DOD’s comments are reprinted in their entirety in appendix II. The department also provided technical comments that we have incorporated into this report where applicable. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Deputy Secretary of Defense; and the Under Secretary of Defense for Acquisition, Technology and Logistics. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8246 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who are major contributors to this report are listed in appendix III.

To determine the extent to which the Department of Defense (DOD) has developed and implemented detailed corrective action plans that address high-risk challenges in the three focus areas we identified for improvement, we identified existing plans for logistics, supply chain management, and the three focus areas: requirements forecasting, asset visibility, and materiel distribution. We assessed the extent to which such plans provide a comprehensive, integrated strategy for improving one or more of the focus areas and include the key elements of a corrective action plan that we have previously identified. Specifically, we evaluated DOD’s October 2010 Comprehensive Inventory Management Improvement Plan and determined its applicability as a corrective action plan for inventory management and the requirements forecasting focus area by comparing the plan and its elements to criteria from our prior reports on corrective action plans and effective strategic planning. Specific criteria on corrective action plans and the elements of effective strategic planning are discussed in the report.
Using these same criteria, we evaluated the 2010 Logistics Strategic Plan and determined the extent to which it could serve as a corrective action plan for the areas of asset visibility and materiel distribution. We met with officials from the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration to discuss features of the 2010 Logistics Strategic Plan and any ongoing and possible future strategic planning efforts. We reviewed DOD’s testimony before Congress and written responses to questions for the record on the plan. We also reviewed prior GAO reports and testimonies pertaining to DOD supply chain management, including prior strategic planning efforts. To assess the extent to which DOD has an effective program for monitoring and validating the effectiveness and sustainability of corrective actions, we reviewed the performance management framework identified in DOD’s 2010 Logistics Strategic Plan. We reviewed the features of the framework that are aimed at helping DOD to guide and oversee improvement efforts, and we also determined the implementation status of the framework. In addition, we assessed the extent that DOD has included elements needed for instituting the framework across the department. We based this assessment, in part, on a body of work that sets forth criteria for results-oriented management and best practices for organizations that are transforming their management practices and structures. We also reviewed DOD’s 2009 Strategic Management Plan and our recently released report on the plan. We compared the performance management frameworks of the two DOD plans to determine the degree of congruence between the frameworks. We met with officials from the Office for Supply Chain Integration to discuss the performance management framework and oversight structure (including senior-level logistics governance bodies) and obtain additional insight and supporting documentation (e.g., agendas and meeting minutes of these bodies) on the purpose and implementation of the framework. We reviewed DOD’s recent congressional testimony on the 2010 Logistics Strategic Plan and written responses to related questions for the record to determine DOD’s approach and perspective on implementing the plan, including the performance management framework. In addition, we reviewed legislation, DOD policies, and other documentation regarding the chief management officials, including the DOD Chief Management Officer, the Deputy Chief Management Officer, and military departments’ Chief Management Officers; and our prior work on performance management. To determine the extent to which DOD has an ability to demonstrate supply chain management progress, we reviewed how DOD uses, or plans to use, performance measures discussed in the 2010 Logistics Strategic Plan. We also reviewed DOD’s existing or planned performance measures for the three focus areas of improvement, including measures discussed in the Comprehensive Inventory Management Improvement Plan. As a basis for evaluating these measures, we reviewed DOD’s supply chain management regulation, federal standards and best practices, and our prior findings and recommendations on this issue. We discussed existing and planned measures with officials from the Office for Supply Chain Integration and other DOD components. 
We also obtained information from these officials on the development of the Joint Supply Chain Architecture, since a major effort of the initiative is to define enterprisewide performance measures to track the efficiency, effectiveness, and reliability of the supply chains. We met with officials from the following weapons systems program offices involved in implementing Joint Supply Chain Architecture case study programs: Integrated Materiel Management Center, U.S. Army Aviation and Missile Life Cycle Management Command, Redstone Arsenal, Alabama; PEO Integrated Warfare Systems, Washington Navy Yard, Washington, D.C.; Warner Robins Air Logistics Center, Robins Air Force Base, Georgia; and Naval Inventory Control Point, U.S. Naval Supply Systems Command, Mechanicsburg and Philadelphia, Pennsylvania. We discussed performance measures used by these case study programs as well as DOD efforts to develop enterprisewide performance measures. We conducted a site visit to the U.S. Transportation Command, Scott Air Force Base, Illinois, to obtain information and perspectives on distribution-related initiatives and to discuss supply chain improvement efforts and performance management. In addition, we contacted officials from the following agencies and offices to obtain information and perspectives on the 2010 Logistics Strategic Plan, supply chain improvement initiatives and efforts, and their use of performance measures:
• U.S. Joint Forces Command: Operations, Plans, Logistics and Engineering Directorate, J4 Division, Norfolk, Virginia;
• Defense Logistics Agency, Ft. Belvoir, Virginia;
• U.S. Army: Office of the Deputy Chief of Staff of the Army, G-4 Logistics, Pentagon, Washington, D.C.;
• U.S. Navy: Deputy Assistant Secretary of the Navy, Acquisition and Logistics Management, Logistics Division, Pentagon, Washington, D.C.; Chief of Naval Operations Supply, Ordnance and Logistics Operations, Arlington, Virginia; and
• U.S. Air Force: Air Staff, Logistics Directorate, Rosslyn, Virginia; and the Global Logistics Support Center, Scott Air Force Base, Illinois.
We also met with representatives from the Institute for Supply Management and the Center for Advanced Purchasing Studies Research, Tempe, Arizona, and the University of Alabama Office for Enterprise Innovation and Sustainability, Huntsville, Alabama, to obtain industry’s and academia’s views and documentation on the Supply Chain Operations Reference model, industry standards for supply chain management, and how the application of those standards is unique to DOD. We conducted this performance audit from February 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Thomas Gosling, Assistant Director; Jeffrey Heit; Suzanne Perkins; and Pauline Reaves made key contributions to this report.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Defense Business Transformation: DOD Needs to Take Additional Actions to Further Define Key Management Roles, Develop Measurable Goals, and Align Planning Efforts. GAO-11-181R. Washington, D.C.: January 26, 2011.
DOD's 2010 Comprehensive Inventory Management Improvement Plan Addressed Statutory Requirements, But Faces Implementation Challenges. GAO-11-240R. Washington, D.C.: January 7, 2011.

Defense Logistics: Additional Oversight and Reporting for the Army Logistics Modernization Program Are Needed. GAO-11-139. Washington, D.C.: November 18, 2010.

DOD's High-Risk Areas: Observations on DOD's Progress and Challenges in Strategic Planning for Supply Chain Management. GAO-10-929T. Washington, D.C.: July 27, 2010.

Warfighter Support: Preliminary Observations on DOD's Progress and Challenges in Distributing Supplies and Equipment to Afghanistan. GAO-10-842T. Washington, D.C.: June 25, 2010.

Defense Inventory: Defense Logistics Agency Needs to Expand on Efforts to More Effectively Manage Spare Parts. GAO-10-469. Washington, D.C.: May 11, 2010.

Defense Logistics: Lack of Key Information May Impede DOD's Ability to Improve Supply Chain Management. GAO-09-150. Washington, D.C.: January 12, 2009.
DOD estimated that overall spending on logistics, including supply chain management, was more than $210 billion in fiscal year 2010. Because of long-standing weaknesses in supply chain management, GAO has designated DOD supply chain management as a high-risk area and identified three focus areas for improvement: requirements forecasting, asset visibility, and materiel distribution. GAO reviewed the extent to which DOD has developed and implemented (1) corrective action plans that address challenges in the three focus areas, (2) an effective program for monitoring and validating the effectiveness and sustainability of supply chain management corrective actions, and (3) an ability to demonstrate supply chain management progress. GAO prepared this report to assist Congress in its oversight of DOD's supply chain management. GAO reviewed strategic and improvement plans, reviewed documents detailing the performance management framework, and assessed performance measures.

DOD has developed and begun to implement a corrective action plan for requirements forecasting, one of the three focus areas GAO identified as needing improvement in supply chain management. However, it does not have similar plans for the focus areas of asset visibility or materiel distribution. Corrective action plans are critical to resolving weaknesses in these two areas; such plans should (1) define the root causes of problems, (2) identify effective solutions, and (3) provide for substantially completing corrective measures in the near term, including the steps necessary to implement solutions. DOD's Comprehensive Inventory Management Improvement Plan, issued in October 2010 in response to a statutory mandate, includes the elements necessary to serve as a corrective action plan for requirements forecasting. DOD's 2010 Logistics Strategic Plan and other prior logistics-related plans do not contain all of the elements needed to serve as corrective action plans for either asset visibility or materiel distribution, such as a definition of the problems or performance information to gauge progress in achieving outcomes.

DOD outlined a performance management framework that is designed to provide guidance and oversight of logistics efforts, including supply chain improvement efforts. GAO's prior work has shown that to address challenges, agencies need to institute a program to monitor and validate the effectiveness and sustainability of corrective actions. The framework, as outlined in the 2010 Logistics Strategic Plan, offers a new management tool that may enable DOD to manage performance in supply chain management. For example, it calls for an ongoing assessment and feedback process that could help ensure that improvement efforts are effective. However, DOD has not included key elements for instituting its performance management framework, such as implementation guidance for affected stakeholders, a strategy for communicating results internally and to stakeholders such as Congress, or a definition of the roles and responsibilities of senior logistics governance bodies and chief management officers. Until the framework is fully instituted, DOD may not be able to effectively use this new management tool to monitor the effectiveness of corrective actions.

DOD and its components track many aspects of the supply chain; however, DOD does not have performance measures that assess the overall effectiveness and efficiency of the supply chain across the enterprise.
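For illustration only, the sketch below shows one form such an enterprisewide measure could take: a perfect order fulfillment rate of the style associated with the Supply Chain Operations Reference model, which rolls several fulfillment criteria into a single reliability figure. The Order fields, the order records, and the metric definition are illustrative assumptions, not DOD or GAO definitions.

    # Illustrative sketch only: a perfect order fulfillment rate, one common
    # enterprisewide supply chain reliability measure. Records are invented.

    from dataclasses import dataclass

    @dataclass
    class Order:
        complete: bool      # shipped in full
        on_time: bool       # delivered by the required delivery date
        damage_free: bool   # arrived undamaged
        documented: bool    # accompanied by accurate documentation

    def perfect_order_rate(orders):
        """Share of orders meeting every fulfillment criterion."""
        perfect = sum(
            o.complete and o.on_time and o.damage_free and o.documented
            for o in orders
        )
        return perfect / len(orders)

    # Invented order records for illustration.
    orders = [
        Order(True, True, True, True),
        Order(True, False, True, True),   # late delivery
        Order(True, True, True, False),   # documentation error
        Order(True, True, True, True),
    ]
    print(f"Perfect order fulfillment: {perfect_order_rate(orders):.0%}")  # prints "50%"

Because every order must satisfy all criteria at once, a composite measure of this kind surfaces weaknesses anywhere in the chain, which is what distinguishes an enterprisewide measure from the many component-level metrics DOD already tracks.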
To fully address challenges, agencies must be able to demonstrate the progress achieved through corrective actions, which requires the reporting of performance measures. In developing its inventory management improvement plan, DOD used a collaborative process to define existing and needed performance measures for requirements forecasting. A similar collaborative focus on developing enterprisewide performance measures for asset visibility and materiel distribution has not occurred. The department may have difficulty demonstrating progress until enterprisewide performance measures are developed and implemented in all three focus areas for improving its supply chain management.

GAO recommends that DOD develop and implement corrective action plans and performance measures for asset visibility and materiel distribution and take steps to fully institute its performance management framework. DOD concurred or partially concurred with two recommendations and did not concur with four, citing ongoing initiatives and existing policy. GAO believes all of the recommendations remain valid, as discussed further in the report.